GLM 4.5V
by Z.ai
GLM-4.5V is a vision-language foundation model for multimodal agent applications. Built on a Mixture-of-Experts (MoE) architecture with 106B total parameters and 12B activated parameters, it achieves state-of-the-art results in video understanding, image Q&A, OCR, and document parsing, with strong gains in front-end web coding, grounding, and spatial reasoning. It offers a hybrid inference mode: a "thinking mode" for deep reasoning and a "non-thinking mode" for fast responses. Reasoning behavior can be toggled via the `enabled` boolean of the `reasoning` parameter.
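The thinking/non-thinking toggle can be sketched as a request payload. This is a minimal illustration assuming an OpenAI-compatible chat endpoint that accepts a `reasoning` object; the model slug and field layout are assumptions, not confirmed API details.

```python
# Sketch: toggling GLM-4.5V's hybrid inference mode via the `reasoning`
# parameter. Model slug and payload shape are illustrative assumptions.

def build_request(prompt: str, thinking: bool) -> dict:
    """Build a chat-completion payload with reasoning toggled on or off."""
    return {
        "model": "z-ai/glm-4.5v",  # hypothetical model identifier
        "messages": [{"role": "user", "content": prompt}],
        # thinking=True -> "thinking mode" (deep reasoning)
        # thinking=False -> "non-thinking mode" (fast responses)
        "reasoning": {"enabled": thinking},
    }

fast = build_request("Describe this chart.", thinking=False)
deep = build_request("Describe this chart.", thinking=True)
```

Sending `fast` would favor latency, while `deep` trades speed for step-by-step reasoning.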
Specifications
Technical details and pricing.
Benchmarks
10 benchmark scores from Artificial Analysis.
Composite Indices
Intelligence, Coding, Math
Standard Benchmarks
Academic and industry benchmarks
Frequently Asked Questions
What is GLM 4.5V good for?
Use GLM 4.5V for multimodal tasks such as image Q&A, OCR, document parsing, and video understanding, as well as agentic work like front-end web coding, visual grounding, and spatial reasoning. It also handles everyday text tasks like writing, summarizing, and explaining.
How much does GLM 4.5V cost?
Pricing is based on usage. Current rates are $0.60/1M tokens for input and $1.80/1M tokens for output.
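A quick way to see what those rates mean in practice is a back-of-envelope cost estimate from the listed per-million-token prices:

```python
# Cost estimate from the listed rates: $0.60/1M input tokens,
# $1.80/1M output tokens.
INPUT_RATE = 0.60 / 1_000_000   # USD per input token
OUTPUT_RATE = 1.80 / 1_000_000  # USD per output token

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of a single request."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# e.g. a 2,000-token prompt with a 500-token reply:
# 2,000 * $0.0000006 + 500 * $0.0000018 = $0.0012 + $0.0009 = $0.0021
cost = estimate_cost(2_000, 500)
```

At these rates, output tokens cost three times as much as input tokens, so long generations dominate the bill.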
Can I try GLM 4.5V for free?
Yes. You can start a chat instantly and test the model before deciding on a plan.
Does GLM 4.5V support images or audio?
GLM 4.5V can understand images and video; audio input is not supported.
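Image input is typically supplied alongside text in a single message. The sketch below assumes an OpenAI-compatible multimodal message format with `image_url` content parts; the field names and model slug are assumptions for illustration.

```python
# Sketch: pairing an image with a text question in one chat message,
# assuming OpenAI-style `image_url` content parts (field names assumed).

def image_question(image_url: str, question: str) -> dict:
    """Build a payload asking GLM-4.5V a question about one image."""
    return {
        "model": "z-ai/glm-4.5v",  # hypothetical model identifier
        "messages": [{
            "role": "user",
            "content": [
                {"type": "image_url", "image_url": {"url": image_url}},
                {"type": "text", "text": question},
            ],
        }],
    }

payload = image_question("https://example.com/invoice.png",
                         "Extract the total amount due.")
```

The same content-part list can carry multiple images for cross-image comparison, if the serving endpoint allows it.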
Similar Models
Other models you might want to explore.
Benchmarks and pricing are sourced from Artificial Analysis where available. OpenRouter specs are used as a fallback.