# OpenAI GPT
This Integration is part of the OpenAI Pack.

#### Supported versions
Supported Cortex XSOAR versions: 6.0.0 and later.
## Instance Configuration
### Generate an API Key
- Sign up or log in to the OpenAI developer platform.
- Generate a new API key at OpenAI developer platform - api-keys.
### Choose a GPT model to interact with
The integration uses only the 'Chat Completions' endpoint. Therefore, only models that support this endpoint (https://api.openai.com/v1/chat/completions) can be configured.
For tasks requiring deep understanding and extensive inputs, opt for more advanced models (e.g., gpt-4). These models offer a larger context window, allowing them to process larger documents and provide more refined and comprehensive responses. More basic models (e.g., gpt-3.5) often provide shallower answers and input analysis. Refer to the Models overview for more information.
### Text generation settings (Optional)
- max-tokens: The maximum number of tokens that can be generated for the response (allows controlling token consumption). Default: unset.
- temperature: Sets the randomness in responses. Lower values (closer to 0) produce more deterministic and consistent outputs, while higher values (up to 2) increase randomness and variety. It is generally recommended to alter this or top_p, but not both. Default: 1.
- top_p: Enables nucleus sampling, where only the top 'p' percent of probable tokens are considered. Lower values (closer to 0) result in more focused outputs, while higher values (closer to 1) increase diversity. It is generally recommended to alter this or temperature, but not both. Default: unset.
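These settings map directly onto fields of a Chat Completions request body. The sketch below is illustrative (the helper function is hypothetical, not the integration's code); it shows how unset parameters would simply be omitted so the API falls back to its own defaults, matching the "Default: unset" behavior above:

```python
import json

API_URL = "https://api.openai.com/v1/chat/completions"  # the only endpoint the integration uses

def build_request(message, model="gpt-4", max_tokens=None, temperature=None, top_p=None):
    """Build a Chat Completions JSON body; omit any parameter left unset."""
    body = {"model": model, "messages": [{"role": "user", "content": message}]}
    if max_tokens is not None:
        body["max_tokens"] = max_tokens
    if temperature is not None:
        body["temperature"] = temperature
    if top_p is not None:
        body["top_p"] = top_p
    return body

# temperature set, top_p left unset — per the recommendation to alter one, not both
payload = build_request("Summarize this alert.", temperature=0.2, max_tokens=256)
print(json.dumps(payload, indent=2))
```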
### Click 'Test'
Click 'Test' to validate the instance configuration.
## Commands
You can execute these commands from the Cortex XSOAR CLI, as part of an automation, or in a playbook.
### gpt-send-message
Sends a message as a prompt to the GPT model.

`!gpt-send-message message="<MESSAGE_TEXT>"`
#### Input
| Argument Name | Description | Required |
| --- | --- | --- |
| message | The message to send to the GPT model, wrapped with quotes. | Yes |
| reset_conversation_history | Whether to reset the conversation history or keep it as context for the sent message. (Conversation history is not reset by default.) | No |
| max_tokens | The maximum number of tokens that can be generated for the response. Overrides the text generation setting for the specific message sent. | No |
| temperature | Sets the randomness in responses. Overrides the text generation setting for the specific message sent. | No |
| top_p | Enables nucleus sampling, where only the top 'p' percent of probable tokens are considered. Overrides the text generation setting for the specific message sent. | No |
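The reset_conversation_history semantics can be pictured as follows. This is a hypothetical sketch (not the integration's actual code) of how a list of prior messages would either be kept as context or dropped before the new message is sent:

```python
def prepare_messages(history, new_message, reset_conversation_history=False):
    """Return the message list to send; optionally drop prior context first."""
    if reset_conversation_history:
        history = []  # start a fresh conversation
    return history + [{"role": "user", "content": new_message}]

history = [{"role": "user", "content": "Hi"},
           {"role": "assistant", "content": "Hello!"}]

kept = prepare_messages(history, "What did I just say?")   # prior turns remain as context
fresh = prepare_messages(history, "New topic",
                         reset_conversation_history=True)  # context discarded
print(len(kept), len(fresh))  # → 3 1
```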
### gpt-check-email-body
Checks an email body for possible security issues.

`!gpt-check-email-body entryId="<ENTRY_ID_OF_UPLOADED_EML_FILE>"`
#### Input
| Argument Name | Description | Required |
| --- | --- | --- |
| entryId | Entry ID of an uploaded .eml file from the context window. | Yes |
| additionalInstructions | Provide additional instructions for the GPT model when analyzing the email body. | No |
| max_tokens | The maximum number of tokens that can be generated for the response. Overrides the text generation setting for the specific message sent. | No |
| temperature | Sets the randomness in responses. Overrides the text generation setting for the specific message sent. | No |
| top_p | Enables nucleus sampling, where only the top 'p' percent of probable tokens are considered. Overrides the text generation setting for the specific message sent. | No |
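Behind the entryId argument is an uploaded .eml file. A minimal sketch of extracting the plain-text body from such a file with Python's standard email library — roughly the content this command hands to the model (the sample message is invented for illustration):

```python
from email import message_from_string
from email.policy import default

raw_eml = """\
From: attacker@example.com
To: victim@example.com
Subject: Urgent: verify your account
Content-Type: text/plain

Click https://example.com/login-now to verify your credentials immediately.
"""

# policy=default yields an EmailMessage, which supports get_body()
msg = message_from_string(raw_eml, policy=default)
body = msg.get_body(preferencelist=("plain",)).get_content()
print(body.strip())
```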
### gpt-check-email-header
Checks email headers for possible security issues.

`!gpt-check-email-header entryId="<ENTRY_ID_OF_UPLOADED_EML_FILE>"`
#### Input
| Argument Name | Description | Required |
| --- | --- | --- |
| entryId | Entry ID of an uploaded .eml file from the context window. | Yes |
| additionalInstructions | Provide additional instructions for the GPT model when analyzing the email headers. | No |
| max_tokens | The maximum number of tokens that can be generated for the response. Overrides the text generation setting for the specific message sent. | No |
| temperature | Sets the randomness in responses. Overrides the text generation setting for the specific message sent. | No |
| top_p | Enables nucleus sampling, where only the top 'p' percent of probable tokens are considered. Overrides the text generation setting for the specific message sent. | No |
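A companion sketch for this command: pulling the headers most relevant to a security review out of an .eml file with the standard library (the sample message and the choice of headers are illustrative, not the integration's own logic):

```python
from email import message_from_string
from email.policy import default

raw_eml = """\
From: "IT Support" <support@examp1e.com>
Reply-To: collector@evil.example
To: victim@example.com
Subject: Password expiry notice
Received: from mail.evil.example (unknown [203.0.113.7])

Body text here.
"""

msg = message_from_string(raw_eml, policy=default)
# Mismatched From/Reply-To and a suspicious Received chain are classic red flags
headers_of_interest = {name: msg[name] for name in
                       ("From", "Reply-To", "To", "Subject", "Received")}
for name, value in headers_of_interest.items():
    print(f"{name}: {value}")
```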
### gpt-create-soc-email-template
Creates an email template out of the conversation context, to be sent from the SOC.

`!gpt-create-soc-email-template`
#### Input
| Argument Name | Description | Required |
| --- | --- | --- |
| additionalInstructions | Provide additional instructions for the GPT model when creating the email template. | No |
| max_tokens | The maximum number of tokens that can be generated for the response. Overrides the text generation setting for the specific message sent. | No |
| temperature | Sets the randomness in responses. Overrides the text generation setting for the specific message sent. | No |
| top_p | Enables nucleus sampling, where only the top 'p' percent of probable tokens are considered. Overrides the text generation setting for the specific message sent. | No |
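Conceptually, this command folds the accumulated conversation into one template-generation request. The sketch below is hypothetical (none of these names come from the integration); it only illustrates how the conversation context plus additionalInstructions would shape the final prompt:

```python
def build_template_prompt(conversation, additional_instructions=""):
    """Append a template-generation request to the existing conversation."""
    request = ("Based on the conversation above, write an email template "
               "that a SOC analyst could send to the affected user.")
    if additional_instructions:
        request += f" {additional_instructions}"
    return conversation + [{"role": "user", "content": request}]

conversation = [
    {"role": "user", "content": "Analyze this phishing email."},
    {"role": "assistant", "content": "The email shows several phishing indicators..."},
]
prompt = build_template_prompt(conversation, "Keep it under 100 words.")
print(prompt[-1]["content"])
```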