Claude Opus 4.6 vs MoonshotAI: Kimi K2 0905
Compare Claude Opus 4.6 by Anthropic against MoonshotAI: Kimi K2 0905 by Moonshot AI, context windows of 1.0M vs 262K, tested across 47 shared challenges. Updated February 2026.
42 challenges
Tests an AI's ability to make educated estimates based on technical knowledge
Below is a “back-of-the-envelope” estimate that stays within a factor of ~2 of the number most practitioners quote (≈3×10²³ FLOP).
We will do it twice—once with the simplest possible formula (parameter count × 6 × tokens) and once with a slightly more careful treatment of the actual tensor shapes inside the transformer. Both converge to the same ball-park.
For any dense transformer that is trained with the standard forward+backward pass, the total floating-point operations are very close to
FLOP ≈ 6 × N × D
where
N = number of non-embedding parameters
D = number of tokens processed during the whole run.
The factor “6” comes from 2 FLOPs per parameter per token in the forward pass (one multiply and one add per weight) plus roughly 4 FLOPs per parameter per token in the backward pass (gradients with respect to both the activations and the weights).
Plugging in the public numbers:
N = 1.75 × 10¹¹
D = 3 × 10¹¹
FLOP = 6 × 1.75e11 × 3e11
≈ 3.15 × 10²³
That is already within the canonical range.
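A minimal Python sketch of the same arithmetic, using the values quoted above (variable names are ours):

```python
# Rule-of-thumb check: FLOP ~ 6 * N * D with the public GPT-3 numbers.
N = 1.75e11   # non-embedding parameters
D = 3e11      # training tokens
print(f"6*N*D = {6 * N * D:.2e} FLOP")   # -> 3.15e+23
```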
We can sanity-check the “6×” rule by actually counting the multiply–adds inside one forward pass and then doubling for the backward.
3.1 Model shape
GPT-3 “175 B” is a dense decoder-only transformer with 96 layers, d_model = 12288, a BPE vocabulary of V ≈ 5 × 10⁴, and a 2048-token context.
3.2 Parameters per layer
Each layer contains 12 d_model² weights: 4 d_model² for the attention projections (Q, K, V and the output projection) and 8 d_model² for the two FFN matrices (d_model → 4 d_model → d_model). So the total non-embedding and embedding parameter counts are
N_layer = 96 × 12 d_model² = 1152 d_model²
N_emb = V d_model
Plugging in d_model = 12288 and V ≈ 5 × 10⁴:
d_model² ≈ 1.51 × 10⁸
N_layer ≈ 1152 × 1.51e8 ≈ 1.74 × 10¹¹
N_emb ≈ 5e4 × 1.22e4 ≈ 6.1 × 10⁸
So N ≈ 1.75 × 10¹¹, matching the headline number.
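The same count in a few lines of Python, assuming the public GPT-3 shape quoted above (the exact BPE vocabulary is 50,257; we round to 50k as in the text; variable names are ours):

```python
# Recompute the parameter count from the model shape above.
n_layer, d_model, vocab = 96, 12288, 50_000

per_layer = 12 * d_model**2        # 4*d^2 attention + 8*d^2 FFN
n_nonemb  = n_layer * per_layer    # ~1.74e11
n_emb     = vocab * d_model        # ~6.1e8
print(f"non-embedding {n_nonemb:.3e}, embedding {n_emb:.2e}, total {n_nonemb + n_emb:.3e}")
```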
3.3 FLOPs per token in the forward pass
For every layer we do one matrix multiply for each of the three attention projections (Q, K, V), one for the attention output, and two for the FFN. All of them are of the form
Y = X W (batch×seq, d) × (d, d′) → (batch×seq, d′)
with d′ either d_model or 4 d_model. Counting each multiply–add as 2 FLOPs per scalar, one layer costs 2 × 12 d_model² = 24 d_model² FLOPs per token, and 96 layers cost 2304 d_model². (The attention-score matmuls QKᵀ and A·V scale with the 2048-token context rather than with d_model and add only a few percent, so we ignore them.)
Add the output-logit projection and the embedding look-up (sparse, but we usually treat it as dense for an upper bound): 2 V d_model FLOPs per token.
Total forward FLOPs per token = 2304 d_model² + 2 V d_model
Numerically:
2304 × 1.51e8 ≈ 3.48 × 10¹¹
2 V d_model ≈ 1.2 × 10⁹ (negligible)
So ≈ 3.5 × 10¹¹ FLOP per token forward.
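A quick Python check of the per-token forward cost, under the same simplifications as the text (dense matmuls only, attention scores neglected; variable names are ours):

```python
# Forward-pass FLOPs per token: 2 FLOPs per multiply-add over the 96 * 12*d_model^2
# matmul weights, plus the 2*V*d_model logit projection.
n_layer, d_model, vocab = 96, 12288, 50_000
fwd_per_token = 2 * n_layer * 12 * d_model**2 + 2 * vocab * d_model
print(f"forward ~ {fwd_per_token:.2e} FLOP/token")   # -> ~3.49e+11
```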
The backward pass costs about 2× the forward (the same matrix multiplies, but we need both grad-W and grad-X), so the total is 3× forward ≈ 1.05 × 10¹² FLOP per token.
3.4 Scale to 300 B tokens
1.05e12 × 3e11 = 3.15 × 10²³ FLOP
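And the end-to-end total in Python, taking the rounded 3.5 × 10¹¹ forward figure and the 3× backward multiplier from above:

```python
# Total training FLOP: (3x forward per token) * 300 B tokens.
fwd_per_token = 3.5e11
total = 3 * fwd_per_token * 3e11
print(f"total ~ {total:.2e} FLOP")   # -> ~3.15e+23
```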
Both the ultra-simple “6 N D” rule and the detailed per-layer count land at
≈ 3 × 10²³ floating-point operations
for the entire GPT-3 training run.
Tests an AI's ability to understand game rules and strategy
Tests an AI's ability to solve a simple but potentially confusing logic puzzle
Tests an AI's randomness and creativity
Tests an AI's ability to create detailed SVG illustrations of gaming hardware
Tests an AI's humor and creative writing ability
Tests an AI's ability to simulate personalities and predict future trends
Tests an AI's humor and understanding of current events
Tests an AI's ability to write in distinct character voices
Tests an AI's ability to generate a complete, working landing page
Recreate an interactive, classic Mario level in a single HTML file.
Tests an AI's ability to replicate an existing UI with Tailwind CSS