Coder Large
by Arcee AI
Coder Large is a 32B-parameter model derived from Qwen 2.5-Instruct and further trained on permissively licensed GitHub code, CodeSearchNet, and synthetic bug-fix corpora. It supports a 32K context window, enabling multi-file refactoring or long diff review in a single call, and understands more than 30 programming languages, with particular strength in TypeScript, Go, and Terraform. Internal benchmarks show 5–8 point gains over CodeLlama-34B-Python on HumanEval and competitive bug-fix scores, thanks to a reinforcement pass that rewards compilable output. By default, the model emits structured explanations alongside its code blocks, making it suitable for educational tooling as well as production copilot scenarios. On cost, Together AI prices it well below proprietary incumbents, so teams can scale interactive coding without runaway spend.
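As a sketch of how a call might look, the snippet below builds an OpenAI-compatible chat-completions request and sends it only when an API key is configured. The model slug `arcee-ai/coder-large`, the endpoint URL, and the `OPENROUTER_API_KEY` variable name are assumptions; check your provider's documentation for the exact values.

```python
import json
import os
import urllib.request

# Assumed model slug and endpoint; verify against your provider's catalog.
MODEL_ID = "arcee-ai/coder-large"
API_URL = "https://openrouter.ai/api/v1/chat/completions"

def build_request(prompt: str, max_tokens: int = 512) -> dict:
    """Build an OpenAI-compatible chat-completions payload."""
    return {
        "model": MODEL_ID,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

if __name__ == "__main__":
    payload = build_request(
        "Refactor this Go function to return errors instead of panicking."
    )
    api_key = os.environ.get("OPENROUTER_API_KEY")
    if api_key:  # only send the request when a key is configured
        req = urllib.request.Request(
            API_URL,
            data=json.dumps(payload).encode(),
            headers={
                "Authorization": f"Bearer {api_key}",
                "Content-Type": "application/json",
            },
        )
        with urllib.request.urlopen(req) as resp:
            print(json.load(resp)["choices"][0]["message"]["content"])
```

Because the payload follows the widely used chat-completions shape, the same structure should work with any OpenAI-compatible client library.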
Specifications
Technical details and pricing.
Frequently Asked Questions
What is Coder Large good for?
Use Coder Large for code-focused tasks such as writing and refactoring code, reviewing long diffs, fixing bugs, and getting clear explanations of how code works.
How much does Coder Large cost?
Pricing is based on usage. Current rates are $0.50/1M tokens for input and $0.80/1M tokens for output.
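Using the rates above, per-call cost is simple arithmetic; the helper below is a minimal sketch (token counts in the example are illustrative):

```python
# Rates from the pricing above, converted to USD per token.
INPUT_RATE = 0.50 / 1_000_000   # $0.50 per 1M input tokens
OUTPUT_RATE = 0.80 / 1_000_000  # $0.80 per 1M output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD cost of a single call."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# e.g. a 20k-token prompt with a 2k-token completion:
print(round(estimate_cost(20_000, 2_000), 4))  # → 0.0116
```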
Can I try Coder Large for free?
Yes. You can start a chat instantly and test the model before deciding on a plan.
Does Coder Large support images or audio?
No. Coder Large is a text-only model focused on code and natural-language tasks; it does not accept image or audio input.
Similar Models
Other models you might want to explore.
Pricing, context, and capability data are sourced from OpenRouter.