OpenAI (Deprecated)
This Integration is part of the OpenAI Pack.
Deprecated. Use OpenAI GPT instead.
The OpenAI API can be applied to virtually any task that involves understanding or generating natural language or code. OpenAI offers a spectrum of models with different levels of power suitable for different tasks, as well as the ability to fine-tune your own custom models. These models can be used for everything from content generation to semantic search and classification. This integration was integrated and tested with version 1 of OpenAI.
Configure OpenAI on Cortex XSOAR
1. Navigate to Settings > Integrations > Servers & Services.
2. Search for OpenAI.
3. Click Add instance to create and configure a new integration instance.

Parameter | Required |
---|---|
OpenAI API URL (e.g. https://api.openai.com/) | True |
API Key | True |
Trust any certificate (not secure) | False |
Use system proxy settings | False |

4. Click Test to validate the URLs, token, and connection.
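As a rough illustration of what the Test step validates, an authenticated request with the configured URL and key against the standard /v1/models endpoint is enough to confirm connectivity. The sketch below is an assumption for illustration only and is not the integration's actual test code.

```python
# Illustrative connectivity check built from the instance parameters.
# This is a sketch only; the integration's actual Test logic may differ.
import requests


def test_connection(api_url: str, api_key: str, verify_ssl: bool = True) -> bool:
    """Return True if the configured URL and API key accept an authenticated request."""
    resp = requests.get(
        f"{api_url.rstrip('/')}/v1/models",
        headers={"Authorization": f"Bearer {api_key}"},
        verify=verify_ssl,  # set to False when "Trust any certificate (not secure)" is enabled
        timeout=30,
    )
    return resp.status_code == 200


print("ok" if test_connection("https://api.openai.com", "<your-api-key>") else "failed")
```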
Commands
You can execute these commands from the Cortex XSOAR CLI, as part of an automation, or in a playbook. After you successfully execute a command, a DBot message appears in the War Room with the command details.
openai-completions
Enter an instruction and watch the API respond with a completion that attempts to match the context or pattern you provided.
Base Command
openai-completions
Input
Argument Name | Description | Required |
---|---|---|
prompt | Instruction. | Required |
model | The model that will generate the completion. Some models are suited to natural language tasks, while others specialize in code. Possible values are: text-davinci-003, text-curie-001, text-babbage-001, text-ada-001, code-davinci-002, code-cushman-001. Default is text-davinci-003. | Optional |
temperature | Controls randomness: lower values produce less random completions. Default is 0.7. | Optional |
max_tokens | The maximum number of tokens to generate. Default is 256. | Optional |
top_p | Controls diversity via nucleus sampling: 0.5 means half of all likelihood-weighted options are considered. Default is 1. | Optional |
frequency_penalty | How much to penalize new tokens based on their existing frequency in the text so far. Decreases the model's likelihood to repeat the same line verbatim. Default is 0. | Optional |
presence_penalty | How much to penalize new tokens based on whether they appear in the text so far. Increases the model's likelihood to talk about new topics. Default is 0. | Optional |
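The arguments above map directly onto the request body of the (legacy) OpenAI Completions endpoint. The snippet below is a minimal sketch of such a request made outside of XSOAR, assuming a standard bearer-token call to /v1/completions; it is not the integration's internal code.

```python
# Minimal sketch of a legacy OpenAI Completions request using the same
# arguments the command exposes (not the integration's internal code).
import requests

API_URL = "https://api.openai.com"  # "OpenAI API URL" instance parameter
API_KEY = "<your-api-key>"          # "API Key" instance parameter

payload = {
    "model": "text-davinci-003",
    "prompt": "Give me some characteristics of a phishing email",
    "temperature": 0.7,
    "max_tokens": 256,
    "top_p": 1,
    "frequency_penalty": 0,
    "presence_penalty": 0,
}

resp = requests.post(
    f"{API_URL}/v1/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["text"])
```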
Context Output
Path | Type | Description |
---|---|---|
OpenAI.Completions.id | String | ID of the returned completion. |
OpenAI.Completions.model | String | The model that generated the completion. |
OpenAI.Completions.text | String | The completed text generated by OpenAI. |
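For illustration only, the context entry written by the command could look roughly like the structure below; all values are placeholders, not real output.

```python
# Illustrative shape of the command's context output; all values are placeholders.
example_context = {
    "OpenAI": {
        "Completions": {
            "id": "cmpl-xxxxxxxxxxxxxxxx",
            "model": "text-davinci-003",
            "text": "Example completion text returned by the model.",
        }
    }
}
```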
Command Example
!openai-completions prompt="Give me some characteristics of a phishing email" model="text-davinci-003" temperature="0.7" max_tokens="256" top_p="1" frequency_penalty="0" presence_penalty="0"
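After the command runs, the generated text can be referenced elsewhere in a playbook or from the CLI through its context path, for example ${OpenAI.Completions.text}.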