All Models

Mistral AI Models

European AI lab building efficient open-weight models. Known for Mixtral MoE architecture and multilingual strength.

Founded 2023 · Paris, France · 27 Models

Mistral Nemo

Mistral

A 12B parameter model with a 128k token context length built by Mistral in collaboration with NVIDIA.

Context: 131K · Speed: 96 tok/s · Input: Text · Output: Text · Reasoning: No

Mistral Small 3.2 24B

Mistral

Mistral-Small-3.2-24B-Instruct-2506 is an updated 24B parameter model from Mistral optimized for instruction following, repetition reduction, and improved function calling.

Context: 131K · Speed: 135 tok/s · Input: Text, Image · Output: Text · Reasoning: No

Ministral 3 8B 2512

Mistral

A balanced model in the Ministral 3 family, Ministral 3 8B is a powerful, efficient tiny language model with vision capabilities.

Context: 262K · Speed: 169 tok/s · Input: Text, Image · Output: Text · Reasoning: No

Mistral Small 3

Mistral

Mistral Small 3 is a 24B-parameter language model optimized for low-latency performance across common AI tasks.

Context: 33K · Speed: 111 tok/s · Input: Text · Output: Text · Reasoning: No

Mistral Small Creative

Mistral

Mistral Small Creative is an experimental small model designed for creative writing, narrative generation, roleplay and character-driven dialogue, general-purpose instruction following, and conversational agents.

Context: 33K · Speed: N/A · Input: Text · Output: Text · Reasoning: No

Ministral 3 14B 2512

Mistral

The largest model in the Ministral 3 family, Ministral 3 14B offers frontier capabilities and performance comparable to its larger Mistral Small 3.2 24B counterpart.

Context: 262K · Speed: 124 tok/s · Input: Text, Image · Output: Text · Reasoning: No

Mistral Medium 3.1

Mistral

Mistral Medium 3.1 is an updated version of Mistral Medium 3, which is a high-performance enterprise-grade language model designed to deliver frontier-level capabilities at significantly reduced operational cost.

Context: 131K · Speed: 105 tok/s · Input: Text, Image · Output: Text · Reasoning: No

Mistral Large 3 2512

Mistral

Mistral Large 3 2512 is Mistral’s most capable model to date, featuring a sparse mixture-of-experts architecture with 41B active parameters (675B total), and released under the Apache 2.0 license.

Context: 262K · Speed: 38 tok/s · Input: Text, Image · Output: Text · Reasoning: No
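As a rough illustration of why a sparse mixture-of-experts model like Mistral Large 3 can be cheaper to run per token than a dense model of the same total size, the sketch below compares per-token compute (proportional to active parameters) with the memory needed to hold all experts. The 2-FLOPs-per-parameter rule of thumb and the 8-bit weight assumption are simplifications for illustration, not figures from Mistral.

```python
# Back-of-the-envelope comparison for a sparse MoE model.
# Parameter counts come from the Mistral Large 3 2512 entry above;
# the 2 FLOPs/param/token rule and 8-bit weights are assumptions.

TOTAL_PARAMS = 675e9   # all experts must be resident in memory
ACTIVE_PARAMS = 41e9   # parameters actually used per token

def flops_per_token(active_params: float) -> float:
    """~2 FLOPs per active parameter per generated token (rule of thumb)."""
    return 2 * active_params

def weight_memory_gb(total_params: float, bytes_per_param: float = 1.0) -> float:
    """Memory to hold all weights, assuming 8-bit (1-byte) quantization."""
    return total_params * bytes_per_param / 1e9

moe_flops = flops_per_token(ACTIVE_PARAMS)    # compute scales with active params
dense_flops = flops_per_token(TOTAL_PARAMS)   # a dense 675B model, for contrast

print(f"MoE compute per token:  {moe_flops:.2e} FLOPs")
print(f"Dense-equivalent:       {dense_flops:.2e} FLOPs")
print(f"Compute ratio:          {dense_flops / moe_flops:.1f}x")
print(f"Weight memory (8-bit):  {weight_memory_gb(TOTAL_PARAMS):.0f} GB")
```

The asymmetry is the point of the architecture: per-token compute tracks the 41B active parameters, while memory footprint tracks the full 675B.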

Devstral 2 2512

Mistral

Devstral 2 is a state-of-the-art open-source model by Mistral AI specializing in agentic coding.

Context: 262K · Speed: N/A · Input: Text · Output: Text · Reasoning: No

Codestral 2508

Mistral

Mistral's cutting-edge language model for coding, released at the end of July 2025.

Context: 256K · Speed: 75 tok/s · Input: Text · Output: Text · Reasoning: No

Ministral 3 3B 2512

Mistral

The smallest model in the Ministral 3 family, Ministral 3 3B is a powerful, efficient tiny language model with vision capabilities.

Context: 131K · Speed: 293 tok/s · Input: Text, Image · Output: Text · Reasoning: No

Mistral Small 3.1 24B

Mistral

Mistral Small 3.1 24B Instruct is an upgraded variant of Mistral Small 3 (2501), featuring 24 billion parameters with advanced multimodal capabilities.

Context: 128K · Speed: 107 tok/s · Input: Text, Image · Output: Text · Reasoning: No

Devstral Small 1.1

Mistral

Devstral Small 1.1 is a 24B parameter open-weight language model for software engineering agents, developed by Mistral AI in collaboration with All Hands AI.

Context: 131K · Speed: 202 tok/s · Input: Text · Output: Text · Reasoning: No

Mixtral 8x7B Instruct

Mistral

Mixtral 8x7B Instruct is a pretrained generative sparse mixture-of-experts model by Mistral AI, tuned for chat and instruction use.

Context: 33K · Speed: N/A · Input: Text · Output: Text · Reasoning: No

Mistral 7B Instruct

Mistral

A high-performing, industry-standard 7.3B parameter model, with optimizations for speed and context length. Mistral 7B Instruct has multiple versions; see the v0.1, v0.2, and v0.3 entries below.

Context: 33K · Speed: 153 tok/s · Input: Text · Output: Text · Reasoning: No

Mistral Medium 3

Mistral

Mistral Medium 3 is a high-performance enterprise-grade language model designed to deliver frontier-level capabilities at significantly reduced operational cost.

Context: 131K · Speed: 59 tok/s · Input: Text, Image · Output: Text · Reasoning: No

Mistral Large 2411

Mistral

Mistral Large 2 2411 is an update of [Mistral Large 2](/mistralai/mistral-large), released together with [Pixtral Large 2411](/mistralai/pixtral-large-2411). It provides a significant upgrade over the previous [Mistral Large 24.07](/mistralai/mistral-large-2407), with notable improvements in long-context understanding, a new system prompt, and more accurate function calling.

Context: 131K · Speed: N/A · Input: Text · Output: Text · Reasoning: No

Mistral Large

Mistral

This is Mistral AI's flagship model, Mistral Large 2 (version `mistral-large-2407`).

Context: 128K · Speed: 57 tok/s · Input: Text · Output: Text · Reasoning: No

Mistral 7B Instruct v0.3

Mistral

A high-performing, industry-standard 7.3B parameter model, with optimizations for speed and context length.

Context: 33K · Speed: N/A · Input: Text · Output: Text · Reasoning: No

Mixtral 8x22B Instruct

Mistral

Mistral's official instruct fine-tuned version of [Mixtral 8x22B](/models/mistralai/mixtral-8x22b).

Context: 66K · Speed: N/A · Input: Text · Output: Text · Reasoning: No

Devstral Medium

Mistral

Devstral Medium is a high-performance code generation and agentic reasoning model developed jointly by Mistral AI and All Hands AI.

Context: 131K · Speed: 112 tok/s · Input: Text · Output: Text · Reasoning: No

Voxtral Small 24B 2507

Mistral

Voxtral Small is an enhancement of Mistral Small 3, incorporating state-of-the-art audio input capabilities while retaining best-in-class text performance.

Context: 32K · Speed: 114 tok/s · Input: Text, Audio · Output: Text · Reasoning: No

Pixtral Large 2411

Mistral

Pixtral Large is a 124B parameter, open-weight, multimodal model built on top of [Mistral Large 2](/mistralai/mistral-large-2411).

Context: 131K · Speed: 49 tok/s · Input: Text, Image · Output: Text · Reasoning: No

Mistral Large 2407

Mistral

This is Mistral AI's flagship model, Mistral Large 2 (version `mistral-large-2407`).

Context: 131K · Speed: N/A · Input: Text · Output: Text · Reasoning: No

Saba

Mistral

Mistral Saba is a 24B-parameter language model specifically designed for the Middle East and South Asia, delivering accurate and contextually relevant responses while maintaining efficient performance.

Context: 33K · Speed: N/A · Input: Text · Output: Text · Reasoning: No

Mistral 7B Instruct v0.2

Mistral

A high-performing, industry-standard 7.3B parameter model, with optimizations for speed and context length.

Context: 33K · Speed: N/A · Input: Text · Output: Text · Reasoning: No

Mistral 7B Instruct v0.1

Mistral

A 7.3B parameter model that outperforms Llama 2 13B on all benchmarks, with optimizations for speed and context length.

Context: 3K · Speed: N/A · Input: Text · Output: Text · Reasoning: No
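The specs in this catalog lend themselves to programmatic filtering, for example when choosing a model by context window and input modalities. The sketch below builds a small in-memory table from a handful of the entries listed above; the field names and the `find_models` helper are illustrative conveniences, not an official Mistral API.

```python
# Minimal in-memory catalog built from a few entries above.
# Field names and the filter helper are illustrative, not an official API.

CATALOG = [
    {"name": "Mistral Nemo",           "context": 131_000, "inputs": {"text"}},
    {"name": "Mistral Small 3.2 24B",  "context": 131_000, "inputs": {"text", "image"}},
    {"name": "Ministral 3 8B 2512",    "context": 262_000, "inputs": {"text", "image"}},
    {"name": "Codestral 2508",         "context": 256_000, "inputs": {"text"}},
    {"name": "Voxtral Small 24B 2507", "context": 32_000,  "inputs": {"text", "audio"}},
]

def find_models(min_context: int = 0, inputs=frozenset()):
    """Return names of models meeting a context floor and required input modalities."""
    need = set(inputs)
    return [m["name"] for m in CATALOG
            if m["context"] >= min_context and need <= m["inputs"]]

# Vision-capable models with at least a 200K context window:
print(find_models(min_context=200_000, inputs={"image"}))  # ['Ministral 3 8B 2512']
```

Modalities are modeled as sets so a query only needs to state what it requires; a model offering extra input types still matches.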