DeepSeek R1 Distill Qwen 32B
by DeepSeek
DeepSeek R1 Distill Qwen 32B is a distilled large language model based on [Qwen 2.5 32B](https://huggingface.co/Qwen/Qwen2.5-32B), fine-tuned on outputs from [DeepSeek R1](/deepseek/deepseek-r1). It outperforms OpenAI's o1-mini across various benchmarks, achieving new state-of-the-art results for dense models.

Other benchmark results include:

- AIME 2024 pass@1: 72.6
- MATH-500 pass@1: 94.3
- CodeForces Rating: 1691

Distillation from DeepSeek R1's outputs gives this model performance competitive with much larger frontier models.
Specifications
Technical details and pricing.
Benchmarks
10 benchmark scores from Artificial Analysis.
Composite Indices
Intelligence, Coding, Math
Standard Benchmarks
Academic and industry benchmarks
Frequently Asked Questions
What is R1 Distill Qwen 32B good for?
Use R1 Distill Qwen 32B for everyday tasks like writing, summarizing, brainstorming, and getting clear explanations.
How much does R1 Distill Qwen 32B cost?
Pricing is based on usage. Current rates are $0.27 per million tokens for both input and output.
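As a rough illustration of how the listed rates translate into per-request cost, here is a minimal sketch (the rates are taken from the pricing above; the token counts in the example are hypothetical):

```python
# Listed rates: $0.27 per million tokens for input and for output.
INPUT_RATE = 0.27 / 1_000_000   # USD per input token
OUTPUT_RATE = 0.27 / 1_000_000  # USD per output token

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of a single request."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# Example: a 2,000-token prompt producing a 1,000-token response.
print(f"${estimate_cost(2000, 1000):.6f}")  # → $0.000810
```

Actual billing may differ (e.g. reasoning tokens are typically billed as output tokens), so treat this as an estimate only.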
Can I try R1 Distill Qwen 32B for free?
Yes. You can start a chat instantly and test the model before deciding on a plan.
Does R1 Distill Qwen 32B support images or audio?
No. R1 Distill Qwen 32B is a text-only model and does not accept image or audio input.
Similar Models
Other models you might want to explore.
Benchmarks and pricing are sourced from Artificial Analysis where available. OpenRouter specs are used as a fallback.