Meta Models

Llama 3.2 11B Vision Instruct

11B

by Meta

Llama 3.2 11B Vision is a multimodal model with 11 billion parameters, designed to handle tasks combining visual and textual data. It excels at tasks such as image captioning and visual question answering, bridging the gap between language generation and visual reasoning. Pre-trained on a massive dataset of image-text pairs, it performs well in complex, high-accuracy image analysis. Its ability to integrate visual understanding with language processing makes it a strong fit for industries requiring comprehensive visual-linguistic AI applications, such as content creation, AI-driven customer service, and research. See the [original model card](https://github.com/meta-llama/llama-models/blob/main/models/llama3_2/MODEL_CARD_VISION.md) for details. Usage of this model is subject to [Meta's Acceptable Use Policy](https://www.llama.com/llama3/use-policy/).

Input Price: $0.16/1M tokens
Output Price: $0.16/1M tokens
Intelligence: 8.8
Coding: 4.3

Specifications

Technical details and pricing.

Provider: Meta
Context Window: 131,072 tokens
Release Date: Sep 25, 2024
Modalities: Text, Image → Text
Capabilities: Vision
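Many hosts expose Llama 3.2 11B Vision Instruct through an OpenAI-compatible chat completions API that accepts mixed image and text content. The sketch below only builds such a request payload; the model identifier and content schema are assumptions that vary by provider, so check your host's documentation before sending it.

```python
# Sketch of a multimodal chat request payload for a vision-capable model.
# MODEL_ID is an illustrative identifier; providers use their own naming.
MODEL_ID = "meta-llama/llama-3.2-11b-vision-instruct"  # assumed identifier


def build_vision_request(image_url: str, question: str) -> dict:
    """Build an OpenAI-style chat payload mixing image and text content."""
    return {
        "model": MODEL_ID,
        "messages": [
            {
                "role": "user",
                "content": [
                    # Image part first, then the text question about it.
                    {"type": "image_url", "image_url": {"url": image_url}},
                    {"type": "text", "text": question},
                ],
            }
        ],
        "max_tokens": 512,
    }


payload = build_vision_request(
    "https://example.com/photo.jpg",
    "What is shown in this image?",
)
```

The payload can then be POSTed to the provider's chat completions endpoint with any HTTP client; keep the combined prompt and image tokens within the 131,072-token context window.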

Benchmarks

12 benchmark scores from Artificial Analysis.

GPQA: 22.1%
MMLU Pro: 46.4%
HLE: 5.2%
LiveCodeBench: 11.0%
MATH 500: 51.6%
AIME 2025: 1.7%
AIME: 9.3%
SciCode: 11.2%
LCR: 11.7%
IFBench: 30.4%
Tau2: 14.6%
TerminalBench Hard: 0.8%

Composite Indices

Intelligence, Coding, Math

Standard Benchmarks

Academic and industry benchmarks

Frequently Asked Questions

What is Llama 3.2 11B Vision Instruct good for?

Use Llama 3.2 11B Vision Instruct for tasks that combine images and text, such as image captioning and visual question answering, as well as everyday text tasks like writing, summarizing, brainstorming, and getting clear explanations.

How much does Llama 3.2 11B Vision Instruct cost?

Pricing is based on usage. Current rates are $0.16/1M tokens for input and $0.16/1M tokens for output.
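Since input and output are both billed at $0.16 per million tokens, cost is a simple linear function of token counts. A minimal sketch (prices from the table above; the token counts are made-up examples):

```python
# Prices from the specifications above, in USD per 1M tokens.
INPUT_PRICE_PER_M = 0.16
OUTPUT_PRICE_PER_M = 0.16


def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD cost of a request from its token counts."""
    return (
        input_tokens / 1_000_000 * INPUT_PRICE_PER_M
        + output_tokens / 1_000_000 * OUTPUT_PRICE_PER_M
    )


# Example: 500k input tokens and 200k output tokens.
cost = estimate_cost(500_000, 200_000)
print(f"${cost:.3f}")  # -> $0.112
```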

Can I try Llama 3.2 11B Vision Instruct for free?

Yes. You can start a chat instantly and test the model before deciding on a plan.

Does Llama 3.2 11B Vision Instruct support images or audio?

Llama 3.2 11B Vision Instruct can understand images. It accepts text and image inputs and produces text output; audio is not supported.

Benchmarks and pricing are sourced from Artificial Analysis where available. OpenRouter specs are used as a fallback.