
Mistral

Mistral AI provides a range of models, from compact and efficient to large and powerful, with particular strength in multilingual tasks, code generation, and vision. The Codestral and Devstral families are purpose-built for software development.

Getting an API Key

  1. Visit console.mistral.ai/api-keys
  2. Sign in or create an account
  3. Generate a new API key
  4. Paste the key into AI Supreme Council under Settings > AI Model > Mistral

API keys are stored locally in your browser (localStorage) and are never included in shared bot URLs.

Supported Models

General Purpose

| Model | Context Window | Max Output | Input Price | Output Price | Capabilities |
|---|---|---|---|---|---|
| Mistral Large 3 | 256K | 8K | $0.50/MTok | $1.50/MTok | Vision, tools, code |
| Mistral Medium 3 | 131K | 8K | $0.40/MTok | $2.00/MTok | Vision, tools, code |
| Mistral Small | 128K | 8K | $0.10/MTok | $0.30/MTok | Vision, tools |
| Mistral Saba | 32K | 8K | $0.20/MTok | $0.60/MTok | Tools |

Code-Focused

| Model | Context Window | Max Output | Input Price | Output Price | Capabilities |
|---|---|---|---|---|---|
| Codestral | 256K | 8K | $0.30/MTok | $0.90/MTok | Tools, code |
| Devstral 2 | 256K | 8K | $0.40/MTok | $2.00/MTok | Tools, code |
| Devstral Small | 128K | 8K | $0.10/MTok | $0.30/MTok | Tools, code |

Vision

| Model | Context Window | Max Output | Input Price | Output Price | Capabilities |
|---|---|---|---|---|---|
| Pixtral Large | 128K | 8K | $2.00/MTok | $6.00/MTok | Vision, tools |

Prices are per million tokens (MTok).
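
To make the pricing concrete, the sketch below estimates the cost of a single request at Mistral Small's listed rates; the token counts are made up for illustration.

```typescript
// Rough cost estimate for one request at Mistral Small's listed rates
// ($0.10 input / $0.30 output per million tokens). Token counts are illustrative.
const inputTokens = 12_000;
const outputTokens = 800;
const cost = (inputTokens / 1e6) * 0.10 + (outputTokens / 1e6) * 0.30;
console.log(`Estimated cost: $${cost.toFixed(4)}`); // Estimated cost: $0.0014
```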

Choosing a Model
  • Mistral Large 3 is the flagship -- strong general performance with vision and 256K context.
  • Codestral is the go-to for code generation tasks with a large 256K context window.
  • Devstral 2 is the latest code-focused model, optimized for software engineering workflows.
  • Mistral Small is the budget option -- very affordable at $0.10/$0.30 per MTok with vision support.
  • Pixtral Large is the premium vision model for image-heavy tasks.

Vision Support

Mistral Large 3, Mistral Medium 3, Mistral Small, and Pixtral Large all support vision input. You can:

  • Paste images directly into the chat input (Ctrl+V / Cmd+V)
  • Upload images using the attachment button
  • Drag and drop images into the chat area

Pixtral Large is specifically optimized for vision tasks and provides the highest quality image understanding in the Mistral lineup.
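
Under the hood, an attached image travels to the model as part of the chat request. The sketch below shows roughly what such a request body looks like, assuming the OpenAI-style content array with an image_url part; the model ID, prompt, and data URL are placeholders.

```typescript
// Minimal sketch of a vision request body (assumed shape: content array with an
// image_url part). Model ID, prompt text, and the base64 data URL are placeholders.
const visionRequest = {
  model: "pixtral-large-latest", // any vision-capable Mistral model
  messages: [
    {
      role: "user",
      content: [
        { type: "text", text: "What does this screenshot show?" },
        { type: "image_url", image_url: "data:image/png;base64,<base64-encoded image>" },
      ],
    },
  ],
};
```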

Tool Calling

Most Mistral models support function/tool calling via the OpenAI-compatible format. Define tools in your bot configuration, and Mistral models will invoke them as structured function calls.
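
The sketch below shows one way to declare a tool in that OpenAI-compatible format and read the resulting tool call back; the get_weather tool and the model ID are illustrative, not part of any built-in API.

```typescript
// Minimal sketch: declare one tool in the OpenAI-compatible format and read the
// model's tool call back. The get_weather tool and model ID are illustrative.
const apiKey = "<your Mistral API key>";

const response = await fetch("https://api.mistral.ai/v1/chat/completions", {
  method: "POST",
  headers: { Authorization: `Bearer ${apiKey}`, "Content-Type": "application/json" },
  body: JSON.stringify({
    model: "mistral-large-latest",
    messages: [{ role: "user", content: "What's the weather in Paris right now?" }],
    tools: [
      {
        type: "function",
        function: {
          name: "get_weather",
          description: "Look up the current weather for a city",
          parameters: {
            type: "object",
            properties: { city: { type: "string" } },
            required: ["city"],
          },
        },
      },
    ],
  }),
});

const choice = (await response.json()).choices[0];
// When the model chooses to call a tool, its arguments arrive as a JSON string.
for (const call of choice.message.tool_calls ?? []) {
  console.log(call.function.name, JSON.parse(call.function.arguments));
}
```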

Code Generation

Mistral offers three code-specialized model families:

| Family | Best For |
|---|---|
| Codestral | General code generation, completion, and explanation |
| Devstral | Software engineering workflows, agentic coding tasks |
| Devstral Small | Lightweight code tasks on a budget |

Each code model offers a 256K (Codestral, Devstral 2) or 128K (Devstral Small) context window, making them suitable for analyzing entire codebases.

Multilingual Strength

Mistral models are trained with a strong emphasis on multilingual capabilities. They perform well in French, German, Spanish, Italian, and many other European languages, making them a good choice if you need non-English language support.

OpenAI-Compatible API

Mistral uses a fully OpenAI-compatible API:

  • Standard POST /v1/chat/completions endpoint at api.mistral.ai
  • Bearer token authentication
  • SSE streaming
  • Tool/function calling

No special configuration is needed.
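
As a concrete illustration, the sketch below issues a streaming request directly against the endpoint. The model ID is illustrative, and the SSE parsing is simplified: it assumes each chunk arrives as whole `data:` lines.

```typescript
// Minimal sketch of a streaming chat completion against api.mistral.ai.
// SSE parsing is simplified and assumes whole "data:" lines per chunk.
const apiKey = "<your Mistral API key>";

const response = await fetch("https://api.mistral.ai/v1/chat/completions", {
  method: "POST",
  headers: { Authorization: `Bearer ${apiKey}`, "Content-Type": "application/json" },
  body: JSON.stringify({
    model: "mistral-small-latest", // illustrative model ID
    messages: [{ role: "user", content: "Summarize SSE in one sentence." }],
    stream: true,
  }),
});

const reader = response.body!.getReader();
const decoder = new TextDecoder();
while (true) {
  const { done, value } = await reader.read();
  if (done) break;
  for (const line of decoder.decode(value).split("\n")) {
    if (!line.startsWith("data:")) continue;
    const payload = line.slice(5).trim();
    if (payload === "[DONE]") continue;
    const delta = JSON.parse(payload).choices[0]?.delta?.content;
    if (delta) process.stdout.write(delta);
  }
}
```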

Configuration

When creating a bot profile, select Mistral as the provider and choose your preferred model. You can set a per-bot API key in the bot configuration panel to override the global key.

The Mistral provider uses the Chat Completions API at api.mistral.ai/v1/chat/completions.

Tips for Best Results

  • Use Codestral or Devstral 2 for programming. These models are specifically trained for code and outperform general-purpose models on software engineering tasks.
  • Use Mistral Small for cost efficiency. At $0.10/$0.30 per MTok with vision and tool support, it is one of the cheapest capable models available.
  • Use Pixtral Large for image analysis. If your workflow is image-heavy, Pixtral is Mistral's strongest vision model.
  • Leverage the large context windows. Mistral Large 3 and Codestral both support 256K tokens -- enough for extensive codebases and long documents.