Gemini 2.5 Pro (I/O Edition) vs Z.ai: GLM 5
Compare Gemini 2.5 Pro (I/O Edition) by Google AI against Z.ai: GLM 5 by Zhipu AI, tested across 34 shared challenges. Updated February 2026.
34 challenges
Tests an AI's ability to make educated estimates based on technical knowledge
The total number of FLOPs required to train GPT‑3 is approximately 3.15 × 10²³.
Step‑by‑step reasoning:
Model size: GPT‑3 has 175 billion (1.75 × 10¹¹) trainable parameters.
Training data: According to the paper abstract, the model was trained on 300 billion (3 × 10¹¹) tokens.
FLOPs per token for training a transformer: the standard approximation is ≈ 6 FLOPs per parameter per token (roughly 2 for the forward pass and 4 for the backward pass).
Calculation:
Total FLOPs ≈ 6 × (1.75 × 10¹¹) × (3 × 10¹¹)
= 6 × 5.25 × 10²²
= 3.15 × 10²³.
Consistency check: The GPT‑3 paper reports a training compute of 3,640 PF‑days. Converting:
1 PF‑day = 10¹⁵ FLOP/s × 86400 s = 8.64 × 10¹⁹ FLOPs.
3,640 PF‑days = 3,640 × 8.64 × 10¹⁹ ≈ 3.14 × 10²³ FLOPs, matching the estimate.
Thus, the total FLOPs required to train GPT‑3 is about 3.15 × 10²³.
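The arithmetic above can be checked with a few lines of Python (a minimal sketch using the same figures quoted in the derivation, not a general training-compute calculator):

```python
# Back-of-envelope FLOPs estimate for training GPT-3,
# using the ~6 FLOPs per parameter per token rule of thumb.
params = 175e9   # 1.75e11 trainable parameters
tokens = 300e9   # 3e11 training tokens

flops = 6 * params * tokens
print(f"estimate: {flops:.3e} FLOPs")   # estimate: 3.150e+23 FLOPs

# Consistency check against the paper's reported 3,640 PF-days
pf_day = 1e15 * 86_400                  # FLOPs in one petaflop/s-day
reported = 3_640 * pf_day
print(f"reported: {reported:.3e} FLOPs")  # reported: 3.145e+23 FLOPs
```

The two numbers agree to within about 0.2%, which is why the 6·N·D rule of thumb is a reliable first estimate for dense-transformer training compute.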
Tests an AI's ability to solve a simple but potentially confusing logic puzzle
Tests an AI's randomness and creativity
Tests an AI's ability to generate vector graphics
Tests an AI's ability to create detailed SVG illustrations of gaming hardware
Tests an AI's humor and creative writing ability
Tests an AI's ability to simulate personalities and predict future trends
Tests an AI's humor and understanding of current events
Tests an AI's ability to generate a complete, working landing page
Recreate an interactive, nostalgic Pokémon battle UI in a single HTML file.
Recreate an interactive, classic Mario level in a single HTML file.
Tests an AI's ability to replicate an existing UI with Tailwind CSS