Mistral AI Models
European AI lab building efficient open-weight models. Known for Mixtral MoE architecture and multilingual strength.
Mistral Nemo
Mistral
A 12B parameter model with a 128k token context length built by Mistral in collaboration with NVIDIA.
Mistral Small 3.2 24B
Mistral
Mistral-Small-3.2-24B-Instruct-2506 is an updated 24B parameter model from Mistral, optimized for instruction following, reduced repetition, and improved function calling.
Ministral 3 8B 2512
Mistral
A balanced model in the Ministral 3 family, Ministral 3 8B is a powerful, efficient tiny language model with vision capabilities.
Mistral Small 3
Mistral
Mistral Small 3 is a 24B-parameter language model optimized for low-latency performance across common AI tasks.
Mistral Small Creative
Mistral
Mistral Small Creative is an experimental small model designed for creative writing, narrative generation, roleplay and character-driven dialogue, general-purpose instruction following, and conversational agents.
Ministral 3 14B 2512
Mistral
The largest model in the Ministral 3 family, Ministral 3 14B offers frontier capabilities and performance comparable to its larger Mistral Small 3.2 24B counterpart.
Mistral Medium 3.1
Mistral
Mistral Medium 3.1 is an updated version of Mistral Medium 3, a high-performance enterprise-grade language model designed to deliver frontier-level capabilities at significantly reduced operational cost.
Mistral Large 3 2512
Mistral
Mistral Large 3 2512 is Mistral’s most capable model to date, featuring a sparse mixture-of-experts architecture with 41B active parameters (675B total), and released under the Apache 2.0 license.
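The active-versus-total parameter gap comes from sparse expert routing: for each token, a learned router scores all experts but only the top-k actually run, so compute scales with k rather than with the total expert count. A minimal NumPy sketch of top-k gating follows; the shapes, names, and routing details are illustrative assumptions, not Mistral's implementation.

```python
import numpy as np

def moe_forward(x, gate_w, experts, k=2):
    """Sparse mixture-of-experts forward pass for one token.

    x: (d,) token hidden state
    gate_w: (d, n_experts) router weight matrix (hypothetical layout)
    experts: list of callables, each mapping (d,) -> (d,)

    Only the k experts with the highest router scores execute,
    and their outputs are mixed with softmax-normalized weights.
    """
    logits = x @ gate_w                       # router scores, shape (n_experts,)
    top = np.argsort(logits)[-k:]             # indices of the k highest-scoring experts
    weights = np.exp(logits[top])
    weights /= weights.sum()                  # softmax over the selected k only
    return sum(w * experts[i](x) for w, i in zip(weights, top))
```

With k equal to the number of experts this reduces to an ordinary dense softmax mixture; the savings come entirely from skipping the unselected experts' forward passes.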
Devstral 2 2512
Mistral
Devstral 2 is a state-of-the-art open-source model by Mistral AI specializing in agentic coding.
Codestral 2508
Mistral
Mistral's cutting-edge language model for coding, released at the end of July 2025.
Ministral 3 3B 2512
Mistral
The smallest model in the Ministral 3 family, Ministral 3 3B is a powerful, efficient tiny language model with vision capabilities.
Mistral Small 3.1 24B
Mistral
Mistral Small 3.1 24B Instruct is an upgraded variant of Mistral Small 3 (2501), featuring 24 billion parameters with advanced multimodal capabilities.
Devstral Small 1.1
Mistral
Devstral Small 1.1 is a 24B parameter open-weight language model for software engineering agents, developed by Mistral AI in collaboration with All Hands AI.
Mixtral 8x7B Instruct
Mistral
Mixtral 8x7B Instruct is a pretrained generative sparse mixture-of-experts model by Mistral AI, fine-tuned for chat and instruction use.
Mistral 7B Instruct
Mistral
A high-performing, industry-standard 7.3B parameter model, with optimizations for speed and context length. Mistral 7B Instruct has multiple versions; the dated variants (v0.1, v0.2, v0.3) are listed below.
Mistral Medium 3
Mistral
Mistral Medium 3 is a high-performance enterprise-grade language model designed to deliver frontier-level capabilities at significantly reduced operational cost.
Mistral Large 2411
Mistral
Mistral Large 2 2411 is an update of [Mistral Large 2](/mistralai/mistral-large), released together with [Pixtral Large 2411](/mistralai/pixtral-large-2411). It provides a significant upgrade over the previous [Mistral Large 24.07](/mistralai/mistral-large-2407), with notable improvements in long-context understanding, a new system prompt, and more accurate function calling.
Mistral Large
Mistral
This is Mistral AI's flagship model, Mistral Large 2 (version `mistral-large-2407`).
Mistral 7B Instruct v0.3
Mistral
A high-performing, industry-standard 7.3B parameter model, with optimizations for speed and context length.
Mixtral 8x22B Instruct
Mistral
Mistral's official instruct fine-tuned version of [Mixtral 8x22B](/models/mistralai/mixtral-8x22b).
Devstral Medium
Mistral
Devstral Medium is a high-performance code generation and agentic reasoning model developed jointly by Mistral AI and All Hands AI.
Voxtral Small 24B 2507
Mistral
Voxtral Small is an enhancement of Mistral Small 3, incorporating state-of-the-art audio input capabilities while retaining best-in-class text performance.
Pixtral Large 2411
Mistral
Pixtral Large is a 124B parameter, open-weight, multimodal model built on top of [Mistral Large 2](/mistralai/mistral-large-2411).
Mistral Large 2407
Mistral
This is Mistral AI's flagship model, Mistral Large 2 (version `mistral-large-2407`).
Saba
Mistral
Mistral Saba is a 24B-parameter language model specifically designed for the Middle East and South Asia, delivering accurate and contextually relevant responses while maintaining efficient performance.
Mistral 7B Instruct v0.2
Mistral
A high-performing, industry-standard 7.3B parameter model, with optimizations for speed and context length.
Mistral 7B Instruct v0.1
Mistral
A 7.3B parameter model that outperforms Llama 2 13B on all benchmarks, with optimizations for speed and context length.