LongCat Flash Chat
by Meituan
LongCat-Flash-Chat is a large-scale Mixture-of-Experts (MoE) model with 560B total parameters, of which 18.6B–31.3B (≈27B on average) are dynamically activated per token. It introduces a shortcut-connected MoE design that reduces communication overhead and achieves high throughput, while maintaining training stability through scaling strategies such as hyperparameter transfer, deterministic computation, and multi-stage optimization. This release, LongCat-Flash-Chat, is a non-thinking foundation model optimized for conversational and agentic tasks. It supports context windows of up to 128K tokens and delivers competitive performance across reasoning, coding, instruction-following, and domain benchmarks, with particular strengths in tool use and complex multi-step interactions.
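The variable activation count can be pictured with a toy router sketch: if some experts are cheap "pass-through" choices, the parameter cost of the top-k experts selected for a token differs from token to token. Everything below (expert counts, the zero-compute set, the scoring) is illustrative only and not the model's actual routing.

```python
import random

# Toy MoE routing sketch -- all sizes are illustrative, not LongCat-Flash's config.
NUM_EXPERTS = 8          # total experts in the layer
ZERO_COMPUTE = {6, 7}    # hypothetical "zero-computation" experts (pass-through)
PARAMS_PER_EXPERT = 1.0  # relative parameter cost of one real expert
TOP_K = 2                # experts selected per token

def activated_params(token_scores):
    """Pick the top-k experts for one token and sum the parameter cost of the
    *real* experts chosen; pass-through picks cost nothing, so the activated
    parameter count varies from token to token."""
    top = sorted(range(NUM_EXPERTS), key=lambda e: token_scores[e], reverse=True)[:TOP_K]
    return sum(PARAMS_PER_EXPERT for e in top if e not in ZERO_COMPUTE)

random.seed(0)
costs = [activated_params([random.random() for _ in range(NUM_EXPERTS)])
         for _ in range(1000)]
# Per-token cost ranges between 0 and TOP_K real experts, averaging in between.
print(min(costs), max(costs), sum(costs) / len(costs))
```

The same mechanism, at scale, is why the model's activated parameters span a range (18.6B–31.3B) rather than a fixed number.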
Specifications
Technical details and pricing.
Frequently Asked Questions
What is LongCat Flash Chat good for?
Use LongCat Flash Chat for everyday tasks like writing, summarizing, brainstorming, and getting clear explanations, as well as agentic workloads such as tool use and multi-step tasks.
How much does LongCat Flash Chat cost?
Pricing is based on usage. Current rates are $0.20/1M tokens for input and $0.80/1M tokens for output.
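As a quick sketch, those rates translate into a per-request cost like this (the token counts in the example are hypothetical; always check current pricing):

```python
# Listed rates: $0.20 per 1M input tokens, $0.80 per 1M output tokens.
INPUT_RATE_PER_M = 0.20
OUTPUT_RATE_PER_M = 0.80

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of one request at the listed rates."""
    return (input_tokens * INPUT_RATE_PER_M
            + output_tokens * OUTPUT_RATE_PER_M) / 1_000_000

# Example: a 4,000-token prompt that produces a 1,000-token reply.
cost = estimate_cost(4_000, 1_000)
print(f"${cost:.4f}")  # → $0.0016
```

Because output tokens cost four times as much as input tokens, long generations dominate the bill even when prompts are large.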
Can I try LongCat Flash Chat for free?
Yes. You can start a chat instantly and test the model before deciding on a plan.
Does LongCat Flash Chat support images or audio?
LongCat Flash Chat is a text-only model; it does not accept image or audio input.
Similar Models
Other models you might want to explore.
Pricing, context, and capability data are sourced from OpenRouter.