GLM-5 AI Model Launches with 744B Parameters, Tops Open-Source Rankings on Key Benchmarks
Summary
The GLM-5 AI model launches with 744 billion parameters, claiming top performance among open-source models on key benchmarks. It targets complex engineering tasks and is released under the MIT license for broad accessibility.
Key Points
- GLM-5 launches with 744B parameters trained on 28.5T tokens, targets complex systems engineering and long-horizon agentic tasks, and claims best-in-class performance among open-source models
- The model ranks #1 among open-source models on Vending Bench 2 with a final balance of $4,432 and significantly outperforms GLM-4.7 across reasoning, coding, and agentic benchmarks
- GLM-5 is released under the MIT License on HuggingFace and ModelScope, is available through the Z.ai platform, and is compatible with coding agents such as Claude Code and OpenClaw (see the loading sketch after this list)
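
For readers who want to experiment with the open weights, the minimal sketch below shows one common way to load a HuggingFace checkpoint with the `transformers` library. The repository id `zai-org/GLM-5` is an assumption based on the naming of earlier GLM releases, not a confirmed path, and a 744B-parameter model would realistically require a multi-GPU server (or the hosted Z.ai API) rather than a single workstation.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical repository id -- check the official HuggingFace or
# ModelScope listing for the actual GLM-5 weights.
MODEL_ID = "zai-org/GLM-5"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype="auto",   # use the dtype stored in the checkpoint
    device_map="auto",    # shard the model across available GPUs
    trust_remote_code=True,
)

# Build a chat-formatted prompt and generate a short reply.
messages = [{"role": "user", "content": "Outline a migration plan for a legacy service."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

For the coding-agent integrations mentioned above, the hosted Z.ai API is likely the more practical route than running the full checkpoint locally.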