Llama 4 Maverick
by Meta
Llama 4 Maverick 17B Instruct (128E) is a high-capacity multimodal language model from Meta, built on a mixture-of-experts (MoE) architecture with 128 experts and 17 billion active parameters per forward pass (400B total). It accepts multilingual text and image input and produces text and code output across 12 supported languages. Optimized for vision-language tasks, Maverick is instruction-tuned for assistant-like behavior, image reasoning, and general-purpose multimodal interaction. Maverick features early fusion for native multimodality and a 1 million token context window. It was trained on a curated mixture of public, licensed, and Meta-platform data, covering ~22 trillion tokens, with a knowledge cutoff in August 2024. Released on April 5, 2025 under the Llama 4 Community License, Maverick is suited for research and commercial applications requiring advanced multimodal understanding and high throughput.
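As a rough illustration of how the model's multimodal input can be exercised, here is a minimal sketch using an OpenAI-compatible client pointed at OpenRouter; the endpoint URL and the model identifier `meta-llama/llama-4-maverick` are assumptions and may differ depending on your provider.

```python
# Minimal sketch: sending a text + image prompt to Llama 4 Maverick through an
# OpenAI-compatible endpoint (OpenRouter shown here; the endpoint and model ID
# are assumptions and may differ by provider).
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",  # assumed OpenAI-compatible endpoint
    api_key="YOUR_API_KEY",
)

response = client.chat.completions.create(
    model="meta-llama/llama-4-maverick",  # assumed model identifier
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe what is shown in this image."},
                {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
            ],
        }
    ],
)

print(response.choices[0].message.content)
```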
Specifications
Technical details and pricing.
Benchmarks
12 benchmark scores from Artificial Analysis.
Composite Indices
Intelligence, Coding, Math
Standard Benchmarks
Academic and industry benchmarks
Frequently Asked Questions
What is Llama 4 Maverick good for?
Use Llama 4 Maverick for everyday tasks like writing, summarizing, brainstorming, and coding help, as well as image-based tasks such as describing, analyzing, or answering questions about pictures.
How much does Llama 4 Maverick cost?
Pricing is based on usage. Current rates are $0.31 per 1M input tokens and $0.85 per 1M output tokens.
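To make the usage-based pricing concrete, the sketch below estimates the cost of a single request from its token counts at the rates quoted above; the token counts are purely illustrative.

```python
# Estimate the cost of one request at the listed rates:
# $0.31 per 1M input tokens, $0.85 per 1M output tokens.
INPUT_RATE_PER_M = 0.31
OUTPUT_RATE_PER_M = 0.85

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated cost in USD for a single request."""
    return (input_tokens / 1_000_000) * INPUT_RATE_PER_M + \
           (output_tokens / 1_000_000) * OUTPUT_RATE_PER_M

# Example: a 20,000-token prompt with a 1,000-token reply costs about $0.007.
print(f"${request_cost(20_000, 1_000):.4f}")
```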
Can I try Llama 4 Maverick for free?
Yes. You can start a chat instantly and test the model before deciding on a plan.
Does Llama 4 Maverick support images or audio?
Llama 4 Maverick accepts images as input alongside text, but it does not support audio input or output.
Similar Models
Other models you might want to explore.
Benchmarks and pricing are sourced from Artificial Analysis where available. OpenRouter specs are used as a fallback.