ChatCompletion
Chat completion with AI models
Handles chat interactions using AI models (OpenAI, Ollama, Gemini, Anthropic, MistralAI, DeepSeek, and more).
type: "io.kestra.plugin.ai.completion.ChatCompletion"
Examples
Chat completion with Google Gemini
id: chat_completion
namespace: company.ai

inputs:
  - id: prompt
    type: STRING

tasks:
  - id: chat_completion
    type: io.kestra.plugin.ai.completion.ChatCompletion
    provider:
      type: io.kestra.plugin.ai.provider.GoogleGemini
      apiKey: "{{ kv('GOOGLE_API_KEY') }}"
      modelName: gemini-2.5-flash
    messages:
      - type: SYSTEM
        content: You are a helpful assistant, answer concisely, avoid overly casual language or unnecessary verbosity.
      - type: USER
        content: "{{inputs.prompt}}"
Chat completion with Google Gemini and a WebSearch tool
id: chat_completion_with_tools
namespace: company.ai

inputs:
  - id: prompt
    type: STRING

tasks:
  - id: chat_completion_with_tools
    type: io.kestra.plugin.ai.completion.ChatCompletion
    provider:
      type: io.kestra.plugin.ai.provider.GoogleGemini
      apiKey: "{{ kv('GOOGLE_API_KEY') }}"
      modelName: gemini-2.5-flash
    messages:
      - type: SYSTEM
        content: You are a helpful assistant, answer concisely, avoid overly casual language or unnecessary verbosity.
      - type: USER
        content: "{{inputs.prompt}}"
    tools:
      - type: io.kestra.plugin.ai.tool.GoogleCustomWebSearch
        apiKey: "{{ kv('GOOGLE_SEARCH_API_KEY') }}"
        csi: "{{ kv('GOOGLE_SEARCH_CSI') }}"
Extract structured outputs with a JSON schema. Not all model providers support JSON schemas; for those that don't, specify the expected schema in the prompt instead.
id: structured-output
namespace: company.ai

inputs:
  - id: prompt
    type: STRING
    defaults: |
      Hello, my name is John. I was born on January 1, 2000.

tasks:
  - id: ai-agent
    type: io.kestra.plugin.ai.completion.ChatCompletion
    provider:
      type: io.kestra.plugin.ai.provider.GoogleGemini
      modelName: gemini-2.5-flash
      apiKey: "{{ kv('GEMINI_API_KEY') }}"
    configuration:
      responseFormat:
        type: JSON
        jsonSchema:
          type: object
          properties:
            name:
              type: string
            birth:
              type: string
    messages:
      - type: USER
        content: "{{inputs.prompt}}"
Properties
messages *Required array
provider *Required Non-dynamic
Language Model Provider
One of: AmazonBedrock, Anthropic, AzureOpenAI, DeepSeek, GoogleGemini, GoogleVertexAI, MistralAI, Ollama, OpenAI
configuration Non-dynamic ChatConfiguration
Default: {}
Chat configuration
tools Non-dynamic
Tools that the LLM may use to augment its response
One of: CodeExecution, DockerMcpClient, GoogleCustomWebSearch, KestraFlow, KestraTask, SseMcpClient, StdioMcpClient, StreamableHttpMcpClient, TavilyWebSearch
Outputs
finishReason string
Possible values: STOP, LENGTH, TOOL_EXECUTION, CONTENT_FILTER, OTHER
Finish reason
jsonOutput object
LLM output for JSON response format
The result of the LLM completion for a response format of type JSON, null otherwise.
outputFiles object
URIs of the generated files in Kestra's internal storage
requestDuration integer
Request duration in milliseconds
textOutput string
LLM output for TEXT response format
The result of the LLM completion for a response format of type TEXT (default), null otherwise.
tokenUsage TokenUsage
Token usage
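As a usage sketch, these outputs can be referenced from downstream tasks with standard Kestra expressions; the task IDs below are hypothetical, and jsonOutput is only populated when the response format is JSON:

tasks:
  - id: chat_completion
    type: io.kestra.plugin.ai.completion.ChatCompletion
    provider:
      type: io.kestra.plugin.ai.provider.GoogleGemini
      apiKey: "{{ kv('GOOGLE_API_KEY') }}"
      modelName: gemini-2.5-flash
    messages:
      - type: USER
        content: Summarize yesterday's failed executions.

  - id: log_result
    type: io.kestra.plugin.core.log.Log
    message: |
      Answer: {{ outputs.chat_completion.textOutput }}
      Finish reason: {{ outputs.chat_completion.finishReason }}
      Duration: {{ outputs.chat_completion.requestDuration }} ms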
Definitions
io.kestra.plugin.ai.completion.ChatCompletion-ChatMessage
content string
type string
Possible values: SYSTEM, AI, USER
Mistral AI Model Provider
apiKey *Required string
API Key
modelName *Required string
Model name
type *Required object
baseUrl string
API base URL
Model Context Protocol (MCP) Stdio client tool
command *Required array
MCP client command, as a list of command parts
type *Required object
env object
Environment variables
logEvents boolean|string
Default: false
Log events
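A minimal sketch of attaching this tool to a ChatCompletion task, assuming the io.kestra.plugin.ai.tool.StdioMcpClient type name implied by the tools list above; the Docker-based MCP server command is purely illustrative:

tools:
  - type: io.kestra.plugin.ai.tool.StdioMcpClient
    command: ["docker", "run", "-i", "--rm", "mcp/everything"] # hypothetical MCP server image
    env:
      LOG_LEVEL: info # hypothetical variable passed to the MCP server process
    logEvents: true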
Call a Kestra flow as a tool
type *Required object
description string
Description of the flow, if not already provided inside the flow itself
The LLM needs a tool description to decide whether to call the tool. If the flow has a description, the tool will use it; otherwise, the description property must be defined explicitly.
flowId string
Flow ID of the flow that should be called
inheritLabels boolean|string
Default: false
Whether the flow should inherit labels from the execution that triggered it
By default, labels are not inherited. If you set this option to true, the flow execution will inherit all labels from the agent's execution. Any labels passed by the LLM will override those defined here.
inputs object
Input values that should be passed to the flow's execution
Any inputs passed by the LLM will override those defined here.
labels array|object
Labels that should be added to the flow's execution
Any labels passed by the LLM will override those defined here.
namespace string
Namespace of the flow that should be called
revision integer|string
Revision of the flow that should be called
scheduleDate string (date-time)
Schedule the flow execution at a later date
If the LLM sets a scheduleDate, it will override the one defined here.
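A minimal sketch of exposing an existing flow as a tool; the namespace, flow ID, description, and inputs below are hypothetical:

tools:
  - type: io.kestra.plugin.ai.tool.KestraFlow
    namespace: company.team # hypothetical namespace
    flowId: send_alert # hypothetical flow
    description: Send an alert message to the on-call channel
    inheritLabels: true
    inputs:
      severity: LOW # default value; inputs passed by the LLM override it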
Model Context Protocol (MCP) Streamable HTTP client tool
type *Required object
url *Required string
URL of the MCP server
headers object
Custom headers
Useful, for example, for adding authentication tokens via the Authorization header.
logRequests boolean|string
Default: false
Log requests
logResponses boolean|string
Default: false
Log responses
timeout string (duration)
Connection timeout duration
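A minimal sketch, assuming the io.kestra.plugin.ai.tool.StreamableHttpMcpClient type name from the tools list above; the server URL and KV key are hypothetical, and the timeout uses ISO-8601 duration syntax:

tools:
  - type: io.kestra.plugin.ai.tool.StreamableHttpMcpClient
    url: https://mcp.example.com/mcp # hypothetical MCP server
    headers:
      Authorization: "Bearer {{ kv('MCP_TOKEN') }}" # hypothetical KV secret
    timeout: PT30S
    logRequests: true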
Call a Kestra runnable task as a tool
io.kestra.plugin.ai.domain.AIOutput-ToolExecution
requestArguments object
requestId string
requestName string
result string
DeepSeek Model Provider
apiKey *Required string
API Key
modelName *Required string
Model name
type *Required object
baseUrl string
Default: https://api.deepseek.com/v1
API base URL
io.kestra.plugin.ai.domain.AIOutput-AIResponse
completion string
Generated text completion
The result of the text completion
finishReason string
Possible values: STOP, LENGTH, TOOL_EXECUTION, CONTENT_FILTER, OTHER
Finish reason
id string
Response identifier
requestDuration integer
Request duration in milliseconds
tokenUsage TokenUsage
Token usage
io.kestra.plugin.ai.domain.ChatConfiguration-ResponseFormat
jsonSchema object
JSON Schema (used when type = JSON)
Provide a JSON Schema describing the expected structure of the response. In Kestra flows, define the schema in YAML (it is still a JSON Schema object). Example (YAML):
responseFormat:
  type: JSON
  jsonSchema:
    type: object
    required: ["category", "priority"]
    properties:
      category:
        type: string
        enum: ["ACCOUNT", "BILLING", "TECHNICAL", "GENERAL"]
      priority:
        type: string
        enum: ["LOW", "MEDIUM", "HIGH"]
Note: Provider support for strict schema enforcement varies. If unsupported, guide the model about the expected output structure via the prompt and validate downstream.
jsonSchemaDescription string
Schema description (optional)
Natural-language description of the schema to help the model produce the right fields. Example: "Classify a customer ticket into category and priority."
type string
Default: TEXT
Possible values: TEXT, JSON
Response format type
Specifies how the LLM should return output. Allowed values:
- TEXT (default): free-form natural language.
- JSON: structured output validated against a JSON Schema.
Model Context Protocol (MCP) Docker client tool
image *Required string
Container image
type *Required object
apiVersion string
API version
binds array
Volume binds
command array
MCP client command, as a list of command parts
dockerCertPath string
Docker certificate path
dockerConfig string
Docker configuration
dockerContext string
Docker context
dockerHost string
Docker host
dockerTlsVerify boolean|string
Whether Docker should verify TLS certificates
env object
Environment variables
logEvents boolean|string
Default: false
Whether to log events
registryEmail string
Container registry email
registryPassword string
Container registry password
registryUrl string
Container registry URL
registryUsername string
Container registry username
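A minimal sketch, assuming the io.kestra.plugin.ai.tool.DockerMcpClient type name from the tools list above; the image and command override are purely illustrative:

tools:
  - type: io.kestra.plugin.ai.tool.DockerMcpClient
    image: mcp/everything # hypothetical MCP server image
    command: ["node", "dist/index.js"] # hypothetical entrypoint override
    env:
      LOG_LEVEL: info # hypothetical variable
    logEvents: true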
Google Custom Search web tool
apiKey *Required string
API key
csi *Required string
Custom search engine ID (cx)
type *Required object
Ollama Model Provider
endpoint *Required string
Model endpoint
modelName *Required string
Model name
type *Required object
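A minimal provider sketch for a local Ollama instance; the endpoint assumes Ollama's default port, and the model name is illustrative:

provider:
  type: io.kestra.plugin.ai.provider.Ollama
  endpoint: http://localhost:11434 # Ollama's default local endpoint
  modelName: llama3.2 # illustrative; use any model you have pulled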
Code execution tool using Judge0
apiKey *Required string
RapidAPI key for Judge0
You can obtain it from the RapidAPI website.
type *Required object
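A minimal sketch, assuming the io.kestra.plugin.ai.tool.CodeExecution type name from the tools list above and a hypothetical KV key for the RapidAPI credential:

tools:
  - type: io.kestra.plugin.ai.tool.CodeExecution
    apiKey: "{{ kv('JUDGE0_RAPIDAPI_KEY') }}" # hypothetical KV key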
OpenAI Model Provider
apiKey *Required string
API Key
modelName *Required string
Model name
type *Required object
baseUrl string
API base URL
io.kestra.plugin.ai.domain.ChatConfiguration
logRequests boolean|string
Log LLM requests
If true, prompts and configuration sent to the LLM will be logged at INFO level.
logResponses boolean|string
Log LLM responses
If true, raw responses from the LLM will be logged at INFO level.
responseFormat ChatConfiguration-ResponseFormat
Response format
Defines the expected output format. Default is plain text. Some providers allow requesting JSON or schema-constrained outputs, but support varies and may be incompatible with tool use. When using a JSON schema, the output will be returned under the key jsonOutput.
seed integer|string
Seed
Optional random seed for reproducibility. Provide a positive integer (e.g., 42, 1234). Using the same seed with identical settings produces repeatable outputs.
temperature number|string
Temperature
Controls randomness in generation. Typical range is 0.0–1.0. Lower values (e.g., 0.2) make outputs more focused and deterministic, while higher values (e.g., 0.7–1.0) increase creativity and variability.
topK integer|string
Top-K
Limits sampling to the top K most likely tokens at each step. Typical values are between 20 and 100. Smaller values reduce randomness; larger values allow more diverse outputs.
topP number|string
Top-P (nucleus sampling)
Selects from the smallest set of tokens whose cumulative probability is ≤ topP. Typical values are 0.8–0.95. Lower values make the output more focused, higher values increase diversity.
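A minimal sketch showing these settings together on a ChatCompletion task; the values are illustrative starting points, not recommendations:

configuration:
  temperature: 0.2 # focused, mostly deterministic output
  topK: 40
  topP: 0.9
  seed: 42 # same seed + identical settings = repeatable output
  logRequests: true
  logResponses: false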
io.kestra.plugin.ai.domain.TokenUsage
inputTokenCount integer
outputTokenCount integer
totalTokenCount integer
io.kestra.plugin.ai.domain.AIOutput-AIResponse-ToolExecutionRequest
arguments object
Tool request arguments
id string
Tool execution request identifier
name string
Tool name
Azure OpenAI Model Provider
endpoint *Required string
API endpoint
The Azure OpenAI endpoint, in the format https://{resource}.openai.azure.com/
modelName *Required string
Model name
type *Required object
apiKey string
API Key
clientId string
Client ID
clientSecret string
Client secret
serviceVersion string
API version
tenantId string
Tenant ID
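A minimal provider sketch using API-key authentication; the resource name and KV key are hypothetical (clientId, clientSecret, and tenantId can be used instead of apiKey):

provider:
  type: io.kestra.plugin.ai.provider.AzureOpenAI
  endpoint: https://my-resource.openai.azure.com/ # hypothetical resource
  apiKey: "{{ kv('AZURE_OPENAI_API_KEY') }}" # hypothetical KV key
  modelName: gpt-4o-mini # illustrative deployment model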
Google VertexAI Model Provider
endpoint *Required string
Endpoint URL
location *Required string
Project location
modelName *Required string
Model name
project *Required string
Project ID
type *Required object
Google Gemini Model Provider
apiKey *Required string
API Key
modelName *Required string
Model name
type *Required object
Model Context Protocol (MCP) SSE client tool
sseUrl *Required string
SSE URL of the MCP server
type *Required object
headers object
Custom headers
Useful, for example, for adding authentication tokens via the Authorization header.
logRequests boolean|string
Default: false
Log requests
logResponses boolean|string
Default: false
Log responses
timeout string (duration)
Connection timeout duration
Anthropic AI Model Provider
apiKey *Required string
API Key
modelName *Required string
Model name
type *Required object
WebSearch tool for Tavily Search
apiKey *Required string
Tavily API Key; you can obtain one from the Tavily website
type *Required object
Amazon Bedrock Model Provider
accessKeyId *Required string
AWS Access Key ID
modelName *Required string
Model name
secretAccessKey *Required string
AWS Secret Access Key
type *Required object
modelType string
Default: COHERE
Possible values: COHERE, TITAN
Amazon Bedrock Embedding Model Type
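A minimal provider sketch, assuming the io.kestra.plugin.ai.provider.AmazonBedrock type name from the provider list above; the KV keys and model ID are illustrative:

provider:
  type: io.kestra.plugin.ai.provider.AmazonBedrock
  accessKeyId: "{{ kv('AWS_ACCESS_KEY_ID') }}" # hypothetical KV key
  secretAccessKey: "{{ kv('AWS_SECRET_ACCESS_KEY') }}" # hypothetical KV key
  modelName: anthropic.claude-3-5-sonnet-20240620-v1:0 # illustrative Bedrock model ID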