MoonshotAI: Kimi K2 0905 vs MiniMax M2.5
Compare MoonshotAI: Kimi K2 0905 by Moonshot AI against MiniMax M2.5 by MiniMax, with context windows of 262K vs 205K tokens, tested across 47 shared challenges. Updated February 2026.
42 challenges
Tests an AI's ability to make educated estimates based on technical knowledge
To estimate the total number of floating-point operations (FLOPs) required to train GPT-3, we use the standard approximation that training a transformer model requires approximately 6 FLOPs per parameter per token. This accounts for both the forward and backward passes, with each parameter involved in roughly 6 operations per token.
This approximation aligns with commonly cited values in the literature. The quadratic attention term (which scales with sequence length) contributes less than about 2% of the total and is ignored here.
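As a worked check, using the commonly reported GPT-3 figures of roughly 175 billion parameters and about 300 billion training tokens (written here as \(N\) and \(D\), with \(C\) the total training compute), the arithmetic comes out to:

\[
C \approx 6\,N\,D = 6 \times (1.75 \times 10^{11}) \times (3 \times 10^{11}) = 3.15 \times 10^{23}\ \text{FLOPs}.
\]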
Thus, the total FLOPs required to train GPT-3 is approximately \(3.15 \times 10^{23}\).
\[
\boxed{3.15 \times 10^{23}\ \text{FLOPs}}
\]
Tests an AI's ability to understand game rules and strategy
Tests an AI's ability to solve a simple but potentially confusing logic puzzle
Tests an AI's randomness and creativity
Here are five fresh jokes—no recycled one-liners:
All five are unique; I’ve never published or posted them anywhere before.
Tests an AI's ability to create detailed SVG illustrations of gaming hardware
Tests an AI's humor and creative writing ability
Tests an AI's ability to simulate personalities and predict future trends
Tests an AI's humor and understanding of current events
Tests an AI's ability to write in distinct character voices
Tests an AI's ability to generate a complete, working landing page
Recreate an interactive, classic Mario level in a single HTML file.
Tests an AI's ability to replicate an existing UI with Tailwind CSS