# Ollama

This Integration is part of the Ollama Pack.

Supported Cortex XSOAR versions: 6.0.0 and later.

Integrate with open source LLMs using Ollama. With an instance of Ollama running locally, you can use this integration to have a conversation in an Incident, download models, and create new models.
## Configure Ollama on Cortex XSOAR

1. Navigate to **Settings** > **Integrations** > **Servers & Services**.
2. Search for Ollama.
3. Click **Add instance** to create and configure a new integration instance.

    | **Parameter** | **Description** | **Required** |
    | --- | --- | --- |
    | Protocol | HTTP or HTTPS. | False |
    | Server hostname or IP | Enter the Ollama IP or hostname. | True |
    | Port | The port Ollama is running on. | True |
    | Path | By default, Ollama's API path is `/api`, but you may be running it behind a proxy with a different path. | True |
    | Trust any certificate (not secure) | Trust any certificate (not secure). | False |
    | Use system proxy settings | Use system proxy settings. | False |
    | Cloudflare Access Client Id | If Ollama is running behind Cloudflare Zero Trust, provide the Service Access ID here. | False |
    | Cloudflare Access Client Secret | If Ollama is running behind Cloudflare Zero Trust, provide the Service Access Secret here. | False |
    | Default Model | Some commands allow you to specify a model. If no model is provided, this value will be used. | False |

4. Click **Test** to validate the URLs, token, and connection.
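If **Test** fails, it can help to verify connectivity to Ollama from outside XSOAR first. Below is a minimal sketch (not part of the integration) that assumes Ollama's standard REST API; the host, port, and token values are placeholders, and the Cloudflare headers (Cloudflare's standard `CF-Access-Client-Id`/`CF-Access-Client-Secret` service-token headers) are only needed behind Zero Trust:

```python
import requests

# Placeholder connection details; substitute your own.
BASE_URL = "https://ollama.example.com:11434/api"
HEADERS = {
    # Standard Cloudflare Access service-token headers; omit these if
    # Ollama is not behind Cloudflare Zero Trust.
    "CF-Access-Client-Id": "<access-client-id>",
    "CF-Access-Client-Secret": "<access-client-secret>",
}

# GET /api/version is a lightweight way to confirm that the protocol,
# host, port, path, and (if used) Cloudflare credentials are correct.
resp = requests.get(f"{BASE_URL}/version", headers=HEADERS)
resp.raise_for_status()
print(resp.json())  # e.g. {"version": "0.1.30"}
```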
## Commands

You can execute these commands from the Cortex XSOAR CLI, as part of an automation, or in a playbook. After you successfully execute a command, a DBot message appears in the War Room with the command details.
### ollama-list-models

Get a list of all available models.

#### Base Command

`ollama-list-models`

#### Input

There are no input arguments for this command.

#### Context Output

| **Path** | **Type** | **Description** |
| --- | --- | --- |
| ollama.models | unknown | Output of the command. |
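For reference, this command presumably wraps Ollama's `GET /api/tags` endpoint, which lists the locally available models. A minimal sketch of the equivalent direct call (the base URL is a placeholder):

```python
import requests

BASE_URL = "http://localhost:11434/api"  # placeholder

# /api/tags returns {"models": [{"name": ..., "size": ..., ...}, ...]}
resp = requests.get(f"{BASE_URL}/tags")
resp.raise_for_status()
for model in resp.json()["models"]:
    print(model["name"])
```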
### ollama-model-pull

Pull a model.

#### Base Command

`ollama-model-pull`

#### Input

| **Argument Name** | **Description** | **Required** |
| --- | --- | --- |
| model | Name of the model to pull. See https://ollama.com/library for a list of options. | Optional |

#### Context Output

| **Path** | **Type** | **Description** |
| --- | --- | --- |
| ollama.pull | unknown | Output of the command. |
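The underlying Ollama endpoint is `POST /api/pull`. By default it streams newline-delimited JSON progress updates; passing `"stream": false` makes it block until the pull finishes. A sketch, assuming the standard API and a placeholder base URL:

```python
import requests

BASE_URL = "http://localhost:11434/api"  # placeholder

# With "stream": false the call blocks until the download completes and
# returns a single final status object such as {"status": "success"}.
# Large models can take a while, hence the generous timeout.
resp = requests.post(
    f"{BASE_URL}/pull",
    json={"name": "llama3", "stream": False},
    timeout=600,
)
resp.raise_for_status()
print(resp.json())
```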
### ollama-model-delete

Delete a model.

#### Base Command

`ollama-model-delete`

#### Input

| **Argument Name** | **Description** | **Required** |
| --- | --- | --- |
| model | The name of the model to delete. | Optional |

#### Context Output

| **Path** | **Type** | **Description** |
| --- | --- | --- |
| ollama.delete | unknown | Output of the command. |
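Under the hood this maps to Ollama's `DELETE /api/delete` endpoint, which takes the model name in the request body (`name` in older API versions, `model` in newer ones). A sketch, assuming the older field name:

```python
import requests

BASE_URL = "http://localhost:11434/api"  # placeholder

# Ollama returns HTTP 200 on success and 404 if the model is unknown.
resp = requests.delete(f"{BASE_URL}/delete", json={"name": "llama3"})
resp.raise_for_status()
```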
### ollama-conversation

General chat command that tracks the conversation history in the Incident.

#### Base Command

`ollama-conversation`

#### Input

| **Argument Name** | **Description** | **Required** |
| --- | --- | --- |
| model | The model name. | Optional |
| message | The message to be sent. | Required |

#### Context Output

| **Path** | **Type** | **Description** |
| --- | --- | --- |
| ollama.history | unknown | Output of the command. |
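The underlying Ollama call is presumably `POST /api/chat`, which is stateless: the caller resends the full message history on every turn, which is why the integration stores it in the Incident context. A sketch of that pattern, with placeholder values:

```python
import requests

BASE_URL = "http://localhost:11434/api"  # placeholder

# /api/chat expects the whole conversation so far; each reply is
# appended to the history before the next turn is sent.
history = [{"role": "user", "content": "Summarize CVE-2023-4863 in one line."}]
resp = requests.post(
    f"{BASE_URL}/chat",
    json={"model": "llama3", "messages": history, "stream": False},
)
resp.raise_for_status()
reply = resp.json()["message"]  # {"role": "assistant", "content": ...}
history.append(reply)
print(reply["content"])
```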
### ollama-model-info

Show information for a specific model.

#### Base Command

`ollama-model-info`

#### Input

| **Argument Name** | **Description** | **Required** |
| --- | --- | --- |
| model | Name of the model to show. | Optional |

#### Context Output

| **Path** | **Type** | **Description** |
| --- | --- | --- |
| ollama.show | unknown | Output of the command. |
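This corresponds to Ollama's `POST /api/show` endpoint, which returns the model's Modelfile, parameters, and template. A sketch with a placeholder base URL:

```python
import requests

BASE_URL = "http://localhost:11434/api"  # placeholder

resp = requests.post(f"{BASE_URL}/show", json={"name": "llama3"})
resp.raise_for_status()
info = resp.json()
print(info["modelfile"])            # the Modelfile the model was built from
print(info.get("parameters", ""))   # runtime parameters, if any are set
```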
### ollama-model-create

Create a new model from a Modelfile.

#### Base Command

`ollama-model-create`

#### Input

| **Argument Name** | **Description** | **Required** |
| --- | --- | --- |
| model | Name of the model to create. | Required |
| model_file | Contents of the Modelfile. | Required |

#### Context Output

| **Path** | **Type** | **Description** |
| --- | --- | --- |
| ollama.create | unknown | Output of the command. |
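Ollama's model-creation endpoint is `POST /api/create`. Older API versions accept the Modelfile contents directly in a `modelfile` field (newer versions restructure this request), so treat the sketch below as illustrative only:

```python
import requests

BASE_URL = "http://localhost:11434/api"  # placeholder

# A Modelfile derives a new model from an existing one; SYSTEM sets the
# default system prompt baked into the new model.
modelfile = "FROM llama3\nSYSTEM You are a SOC analyst assistant."

resp = requests.post(
    f"{BASE_URL}/create",
    json={"name": "soc-assistant", "modelfile": modelfile, "stream": False},
)
resp.raise_for_status()
print(resp.json())  # final status, e.g. {"status": "success"}
```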
### ollama-generate

Generate a response for a given prompt with a provided model. Conversation history is **not** tracked.

#### Base Command

`ollama-generate`

#### Input

| **Argument Name** | **Description** | **Required** |
| --- | --- | --- |
| model | The model name. | Optional |
| message | The message to be sent. | Optional |

#### Context Output

| **Path** | **Type** | **Description** |
| --- | --- | --- |
| ollama.generate | unknown | Output of the command. |
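Unlike `ollama-conversation`, this maps to Ollama's stateless `POST /api/generate` endpoint: each call carries a single prompt and no prior messages. A sketch with placeholder values:

```python
import requests

BASE_URL = "http://localhost:11434/api"  # placeholder

# No history is sent or stored; every call stands alone.
resp = requests.post(
    f"{BASE_URL}/generate",
    json={"model": "llama3", "prompt": "What is a Modelfile?", "stream": False},
)
resp.raise_for_status()
print(resp.json()["response"])
```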