Mistral AI provides a range of models, including dedicated code-specialized variants (Codestral, Devstral) that are particularly good at code generation and understanding.

Setup

Get your API key from Mistral’s platform.
```bash
export MISTRAL_API_KEY=...
```
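Since the config below expands `${MISTRAL_API_KEY}` from the environment, it is worth confirming the variable is actually exported before launching the agent. A minimal sanity check (the placeholder key is illustrative, not a real key):

```shell
# Illustrative placeholder -- substitute the real key from your Mistral console.
export MISTRAL_API_KEY=your-key-here

# Fail early if the variable is unset or empty.
if [ -n "$MISTRAL_API_KEY" ]; then
  echo "MISTRAL_API_KEY is set"
else
  echo "MISTRAL_API_KEY is missing" >&2
  exit 1
fi
```

An empty or unset variable here is the most common cause of authentication errors at startup.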

Config

```json
{
  "providers": {
    "mistral": "${MISTRAL_API_KEY}"
  }
}
```

Use it

```json
{
  "agents": [
    { "name": "coder", "model": "mistral:codestral-latest" }
  ]
}
```
Polpo auto-infers mistral from mistral-, codestral-, and devstral- prefixes.
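Assuming the auto-infer behavior works as described, the explicit `mistral:` prefix can be dropped for model names matching those prefixes. A sketch (verify against your Polpo version):

```json
{
  "agents": [
    { "name": "coder", "model": "codestral-latest" }
  ]
}
```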

Models

| Model | Best for | Context | Reasoning | Vision |
|---|---|---|---|---|
| mistral-large-latest | Flagship general-purpose (MoE, 675B total) | 256K | No | Yes |
| mistral-medium-latest | Medium tier, good balance | 128K | No | Yes |
| mistral-small-latest | Small, fast and cheap | 128K | No | Yes |
| codestral-latest | Code generation and completion | 256K | No | No |
| magistral-medium-latest | Reasoning model (chain-of-thought) | 128K | Yes | No |
| magistral-small-latest | Smaller reasoning model | 128K | Yes | No |
| devstral-latest | Developer-focused code assistant | 128K | No | No |
Also available: pixtral-large-latest (vision), pixtral-12b (vision), ministral-8b-latest, ministral-3b-latest
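Any model from the table can be slotted into the same agents config shown earlier. For example, a reasoning agent on Magistral alongside a vision agent on Pixtral (the agent names here are illustrative):

```json
{
  "agents": [
    { "name": "reasoner", "model": "mistral:magistral-medium-latest" },
    { "name": "vision-helper", "model": "mistral:pixtral-large-latest" }
  ]
}
```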

Features

| Feature | Supported |
|---|---|
| Streaming | Yes |
| Tool use | Yes |
| Vision (images) | Yes (Large, Medium, Small, Pixtral) |
| Reasoning | Yes (Magistral) |

Provider Details

| | |
|---|---|
| Provider ID | mistral |
| Env variable | MISTRAL_API_KEY |
| API type | Mistral API |
| Auto-infer prefixes | mistral-, codestral-, devstral- |

Notes

  • Codestral has a 256K token context window — one of the largest among code-specialized models.
  • Magistral models bring chain-of-thought reasoning to Mistral’s lineup — good for complex logic tasks.
  • Mistral Large is a 675B-parameter MoE model (41B active) with vision support.
  • Devstral is a good budget option for code tasks where you don’t need the full power of Codestral.
  • Mistral models tend to be fast and cost-effective compared to similarly-sized models from other providers.