Mistral Models

Mixtral 8x22B Instruct

141B

by Mistral

Mistral's official instruct fine-tuned version of [Mixtral 8x22B](/models/mistralai/mixtral-8x22b). It uses 39B active parameters out of 141B total, offering strong cost efficiency for its size. Its strengths include:

- strong math, coding, and reasoning
- large context length (64k)
- fluency in English, French, Italian, German, and Spanish

See benchmarks in the launch announcement [here](https://mistral.ai/news/mixtral-8x22b/). #moe
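As a minimal sketch of calling such a model, assuming an OpenAI-compatible chat-completions API: the base URL, model identifier, and API key below are placeholders, not confirmed endpoints.

```python
import json

# Assumed placeholders -- substitute your provider's real values.
BASE_URL = "https://api.example.com/v1/chat/completions"
MODEL_ID = "mixtral-8x22b-instruct"

def build_chat_request(prompt: str, max_tokens: int = 256) -> dict:
    """Build a request body in the common OpenAI-compatible
    chat-completions shape used by many hosted model APIs."""
    return {
        "model": MODEL_ID,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

# Sending it would use any HTTP client, e.g.:
#   requests.post(BASE_URL, json=build_chat_request("Bonjour!"),
#                 headers={"Authorization": "Bearer <API_KEY>"})
payload = build_chat_request("Explain mixture-of-experts in one sentence.")
print(json.dumps(payload, indent=2))
```

The payload shape is the widely shared convention, not a documented Mixtral-specific schema; check your provider's API reference before use.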

Input Price: $0.00/1M tokens
Output Price: $0.00/1M tokens
Intelligence: 9.8
Coding: N/A

Specifications

Technical details and pricing.

Provider: Mistral
Context Window: 65,536 tokens
Release Date: Apr 17, 2024
Modalities: Text

Benchmarks

7 benchmark scores from Artificial Analysis.

GPQA: 33.2%
MMLU Pro: 53.7%
HLE: 4.1%
LiveCodeBench: 14.8%
MATH 500: 54.5%
AIME: 0.0%
SciCode: 18.8%

Composite Indices

Intelligence, Coding, Math

Standard Benchmarks

Academic and industry benchmarks

Frequently Asked Questions

What is Mixtral 8x22B Instruct good for?

Use Mixtral 8x22B Instruct for everyday tasks like writing, summarizing, brainstorming, and getting clear explanations.

How much does Mixtral 8x22B Instruct cost?

Pricing is based on usage. Current rates are $0.00/1M tokens for input and $0.00/1M tokens for output.
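As a back-of-the-envelope sketch, per-request cost at per-million-token rates can be computed like this; the non-zero rates in the example are hypothetical, not this model's listed prices.

```python
def request_cost(input_tokens: int, output_tokens: int,
                 input_rate: float = 0.00, output_rate: float = 0.00) -> float:
    """Return the dollar cost of one request, given per-1M-token
    rates for input (prompt) and output (completion) tokens."""
    return (input_tokens / 1_000_000) * input_rate \
         + (output_tokens / 1_000_000) * output_rate

# Hypothetical rates of $2/1M input and $6/1M output tokens:
cost = request_cost(10_000, 2_000, input_rate=2.0, output_rate=6.0)
print(f"${cost:.4f}")  # → $0.0320
```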

Can I try Mixtral 8x22B Instruct for free?

Yes. You can start a chat instantly and test the model before deciding on a plan.

Does Mixtral 8x22B Instruct support images or audio?

No. Mixtral 8x22B Instruct is text-only; per its listed modalities, it does not accept image or audio input.

Benchmarks and pricing are sourced from Artificial Analysis where available. OpenRouter specs are used as a fallback.