Claude 3.7 Thinking Sonnet vs Claude Opus 4.6
Compare Claude 3.7 Thinking Sonnet and Claude Opus 4.6, both from Anthropic, with context windows of 200K vs 1.0M tokens, tested across 46 shared challenges. Updated February 2026.
41 challenges
Tests an AI's ability to make educated estimates based on technical knowledge
Let me recall the key specs of GPT-3:
- Parameters: N ≈ 175 billion (1.75 × 10¹¹)
- Training tokens: D ≈ 300 billion (3 × 10¹¹)
There's a well-known approximation for Transformer training FLOPs:
$$C \approx 6 \times N \times D$$
where:
- C is the total training compute in FLOPs,
- N is the number of model parameters,
- D is the number of training tokens.
A Transformer is dominated by matrix multiplications. For a matrix multiplication of shape (m × k) times (k × n), the FLOPs are ~2mkn (k multiplies and k-1 adds per output element ≈ 2k ops × mn outputs). When you sum across all the weight matrices in the model, the total forward-pass FLOPs work out to approximately 2 × N per token, where N is the total parameter count. The backward pass costs roughly twice the forward pass (≈ 4N per token), so training costs ≈ 6N FLOPs per token; multiplying by D training tokens gives the 6ND formula.
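As a quick illustration, here is a minimal Python sketch that tallies the weight-matrix FLOPs for a GPT-3-sized stack of Transformer blocks. The dimensions are assumed from the published 175B configuration, and attention-score matmuls, embeddings, biases, and layer norms are deliberately ignored, which is exactly the simplification behind the ~2N-per-token rule of thumb:

```python
# Sketch only: count forward-pass matmul FLOPs per token for a GPT-3-like stack
# and compare against the parameter count N.
d_model, d_ff, n_layers = 12288, 4 * 12288, 96  # GPT-3 175B-like dimensions (assumed)

def layer_matmul_flops_per_token(d_model, d_ff):
    """FLOPs for one token passing through one block's weight matrices."""
    qkv = 3 * (2 * d_model * d_model)              # Q, K, V projections: (1 x d) @ (d x d)
    out = 2 * d_model * d_model                    # attention output projection
    mlp = 2 * d_model * d_ff + 2 * d_ff * d_model  # the two MLP matmuls
    return qkv + out + mlp

params_per_layer = 4 * d_model * d_model + 2 * d_model * d_ff
n_params = n_layers * params_per_layer                                  # ~1.7e11, close to 175B
flops_per_token = n_layers * layer_matmul_flops_per_token(d_model, d_ff)

print(f"N ≈ {n_params:.2e} parameters")
print(f"forward FLOPs per token ≈ {flops_per_token / n_params:.1f} × N")  # ~2.0
```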
$$C \approx 6 \times N \times D$$
$$C \approx 6 \times (1.75 \times 10^{11}) \times (3 \times 10^{11})$$
$$C \approx 6 \times 5.25 \times 10^{22}$$
$$C \approx 3.15 \times 10^{23} \text{ FLOPs}$$
The OpenAI paper actually reported 3.14 × 10²³ FLOPs, so this estimate is essentially exact, which makes sense because the 6ND formula is precisely the approximation the community (and OpenAI themselves) uses.
$$\boxed{C \approx 3.15 \times 10^{23} \text{ FLOPs}}$$
This is about 315 zettaFLOPs, roughly equivalent to running thousands of V100 GPUs for weeks to months, which aligns with what we know about the training infrastructure used.
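A back-of-the-envelope check of that wall-clock claim, using assumed (purely illustrative) numbers for GPU count and sustained throughput:

```python
# Sanity check of the 6*N*D estimate and the "thousands of GPUs for weeks" claim.
# Assumptions (illustrative, not the actual training setup): 10,000 V100s at
# ~50 TFLOP/s sustained each, well below the ~125 TFLOP/s tensor-core peak.
N = 1.75e11                      # GPT-3 parameters
D = 3.0e11                       # training tokens
C = 6 * N * D                    # total training FLOPs
print(f"C = {C:.2e} FLOPs")      # ~3.15e+23

gpus = 10_000
sustained = 50e12                # assumed sustained FLOP/s per GPU
days = C / (gpus * sustained) / 86_400
print(f"~{days:.0f} days on {gpus} V100s")   # ~7 days; ~73 days on 1,000 GPUs
```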
Tests an AI's ability to understand game rules and strategy
Tests an AI's ability to write in distinct character voices
Recreate an interactive, nostalgic Pokémon battle UI in a single HTML file.