GLM-4.5V and GLM-4.1V-Thinking: Open-Source Vision-Language Models Advance Multimodal Reasoning
The open-source vision-language models GLM-4.5V and GLM-4.1V-Thinking advance multimodal reasoning, achieving state-of-the-art performance across 42 benchmarks and outperforming larger models through a dedicated Thinking Mode and Chain-of-Thought reasoning.