Mixtral 8x22B Instruct
by Mistral
Mistral's official instruct fine-tuned version of [Mixtral 8x22B](/models/mistralai/mixtral-8x22b). It uses 39B active parameters out of 141B, offering unparalleled cost efficiency for its size. Its strengths include:

- strong math, coding, and reasoning
- large context length (64k)
- fluency in English, French, Italian, German, and Spanish

See benchmarks on the launch announcement [here](https://mistral.ai/news/mixtral-8x22b/).

#moe
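Instruct-tuned models like this one are typically served behind an OpenAI-compatible chat completions API. Below is a minimal sketch of building such a request body; the endpoint URL and the model identifier `mistralai/mixtral-8x22b-instruct` are assumptions based on common provider naming conventions, not details confirmed on this page.

```python
import json

# Assumed values -- verify against your provider's documentation.
BASE_URL = "https://openrouter.ai/api/v1/chat/completions"  # assumed endpoint
MODEL_ID = "mistralai/mixtral-8x22b-instruct"               # assumed model id


def build_chat_request(prompt: str, max_tokens: int = 512) -> dict:
    """Build the JSON body for a single-turn chat completion request."""
    return {
        "model": MODEL_ID,
        "messages": [{"role": "user", "content": prompt}],
        # The model supports a 64k-token context; max_tokens caps the reply,
        # so leave headroom for long prompts.
        "max_tokens": max_tokens,
    }


# Example: the model is fluent in French, so a French prompt works directly.
body = build_chat_request("Explique la différence entre médiane et moyenne.")
print(json.dumps(body, indent=2))
```

The body would then be POSTed to the provider's chat completions endpoint with an API key in the `Authorization` header.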
Specifications
Technical details and pricing.
Benchmarks
7 benchmark scores from Artificial Analysis.
Composite Indices
Intelligence, Coding, Math
Standard Benchmarks
Academic and industry benchmarks
Frequently Asked Questions
What is Mixtral 8x22B Instruct good for?
Use Mixtral 8x22B Instruct for everyday tasks like writing, summarizing, brainstorming, and getting clear explanations. It is also strong at math, coding, and reasoning, and handles multilingual work in English, French, Italian, German, and Spanish.
How much does Mixtral 8x22B Instruct cost?
Pricing is based on usage, billed per million input tokens and per million output tokens. See the Specifications section for current rates.
Can I try Mixtral 8x22B Instruct for free?
Yes. You can start a chat instantly and test the model before deciding on a plan.
Does Mixtral 8x22B Instruct support images or audio?
Mixtral 8x22B Instruct focuses on text-based tasks.
Similar Models
Other models you might want to explore.
Benchmarks and pricing are sourced from Artificial Analysis where available. OpenRouter specs are used as a fallback.