Skip to content
Rival
Models
Compare · Best For · Arena
Lab
Sign Up

We compare AI models for a living. On purpose. We chose this.

@rival_tips

Explore

  • Compare Models
  • All Models
  • Prompt Lab
  • Image Generation
  • Audio Comparison
  • Leaderboard
  • Challenges

Discover

  • Insights
  • AI Creators
  • AI Tools
  • The Graveyard

Developers

  • Developer Hub
  • MCP Server
  • .llmignore
  • Badges
  • Rival Datasets

Connect

  • Methodology
  • Sponsor
  • Partnerships
  • Privacy Policy
  • Terms
  • RSS Feed
© 2026 Rival · Built at hours no one should be awake, on hardware we don’t own

MoonshotAI: Kimi K2 0905 — Estimate Complexity


See how MoonshotAI: Kimi K2 0905 compares on this challenge:

  • MoonshotAI: Kimi K2 0905 vs Claude Sonnet 4.6
  • MoonshotAI: Kimi K2 0905 vs Google: Gemini 3.1 Flash Lite Preview
  • MoonshotAI: Kimi K2 0905 vs Gemini 3.1 Pro Preview

4 models have responded to the "Estimate Complexity" challenge. Compare their approaches side-by-side on Rival. This response is part of Rival's open dataset of 5,600+ AI model responses.


Estimate Complexity

Kimi K2 0905's response to the 'Estimate Complexity' challenge.

MoonshotAI: Kimi K2 0905
September 5, 2025
Text

Prompt

Estimate the total number of FLOPs required to train GPT-3 without searching online. Explain your reasoning step by step.

MoonshotAI: Kimi K2 0905
Response


Below is a “back-of-the-envelope” estimate that stays within a factor of ~2 of the number most practitioners quote (≈3×10²³ FLOP).
We will do it twice—once with the simplest possible formula (parameter count × 6 × tokens) and once with a slightly more careful treatment of the actual tensor shapes inside the transformer. Both converge to the same ball-park.


  1. What we know publicly (no web look-ups)

  • OpenAI said GPT-3 has 175 B parameters.
  • They also said it was trained on “about 300 B tokens” (that figure is in the original paper).
  • The model is a dense, decoder-only transformer trained with vanilla autoregressive language-modeling (next-token prediction).
  • Training used the usual Adam optimiser with weight-decay; no fancy MoE or pipeline-parallel tricks that would change the arithmetic intensity.

  2. Quick first cut: the “6×” rule

For any dense transformer that is trained with the standard forward+backward pass, the total floating-point operations are very close to

    FLOP ≈ 6 × N × D

where N = number of non-embedding parameters
D = number of tokens processed during the whole run.

The factor “6” comes from:

  • 2 FLOPs per parameter in the forward pass (one multiply and one add in each dot product)
  • 4 FLOPs per parameter in the backward pass (gradients w.r.t. both the weights and the activations).

Plugging in the public numbers:

    N = 1.75 × 10¹¹
    D = 3 × 10¹¹

    FLOP = 6 × 1.75e11 × 3e11
          ≈ 3.15 × 10²³

That is already within the canonical range.
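The quick estimate above is easy to reproduce; a minimal sketch using the two public figures quoted earlier:

```python
# "6 N D" rule: total training FLOPs for a dense transformer.
N = 1.75e11  # non-embedding parameters (GPT-3, 175 B)
D = 3e11     # training tokens (~300 B, per the original paper)

flops = 6 * N * D
print(f"{flops:.2e}")  # → 3.15e+23
```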


  3. Second cut: walk through the transformer dimensions

We can sanity-check the “6×” rule by actually counting the multiply–adds inside one forward pass and then doubling for the backward.

3.1 Model shape

  • d_model = 12288 (12 k) ← well-known from the paper
  • n_layers = 96
  • n_heads = 96
  • Vocab = 50 k (we’ll keep it symbolic, V = 5e4)
  • Intermediate FFN multiplier = 4 → 4d_model = 49 k

3.2 Parameters per layer

  • QKV projection: 3 matrices of shape (d_model, d_model) → 3 d_model²
  • Output projection: 1 matrix (d_model, d_model) → d_model²
  • FFN: 2 matrices (d_model, 4d_model) and (4d_model, d_model) → 8 d_model²
    Total per layer = (3+1+8) d_model² = 12 d_model²
    Embedding matrix (shared input/output) = V d_model

Total non-embedding parameters
N_layer = 96 × 12 d_model² = 1152 d_model²
N_emb = V d_model
Plug d_model = 12288
d_model² ≈ 1.51 × 10⁸
N_layer ≈ 1152 × 1.51e8 ≈ 1.74 × 10¹¹
N_emb ≈ 5e4 × 1.22e4 ≈ 6.1 × 10⁸
So N ≈ 1.75 × 10¹¹, matching the headline number.
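The parameter bookkeeping in 3.1–3.2 can be checked directly from the shapes:

```python
# Parameter count from the GPT-3 shapes in sections 3.1-3.2.
d_model = 12288
n_layers = 96
vocab = 50_000  # kept symbolic as V ≈ 5e4 in the text

per_layer = 12 * d_model**2          # QKV (3) + attn out (1) + FFN (8), in units of d_model²
n_layer_params = n_layers * per_layer
n_emb = vocab * d_model              # shared input/output embedding
total = n_layer_params + n_emb

print(f"{n_layer_params:.3e}")  # ≈ 1.74e11
print(f"{total:.3e}")           # ≈ 1.75e11, matching the headline number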

3.3 FLOPs per token in the forward pass
For every layer we do one matrix multiply for each of the three QKV projections, one for the attention output projection, and two for the FFN. All of them are of the form

    Y = X W      (batch×seq, d) × (d, d′) → (batch×seq, d′)

with d′ either d_model or 4d_model. Counting multiply–adds as 2 FLOPs per scalar:

  • Attention QKV: 3 × (seq·d_model²) × 2
  • Attention out: 1 × (seq·d_model²) × 2
  • FFN: 2 × (seq·4d_model²) × 2
    Total per layer = 2 seq d_model² (3 + 1 + 8) = 24 seq d_model²
    Over 96 layers: 96 × 24 seq d_model² = 2304 seq d_model²

Add the embedding look-up (sparse, but we usually treat it as dense for an upper bound): seq V d_model × 2
Total forward FLOPs per token = 2304 d_model² + 2 V d_model
Numerically:

    2304 × 1.51e8 ≈ 3.48 × 10¹¹  
    2 V d_model ≈ 1.2 × 10⁹  (negligible)

So ≈ 3.5 × 10¹¹ FLOP per token forward. (This omits the attention-score matmuls, QKᵀ and scores·V, which cost about 4·seq·d_model FLOPs per layer per token — roughly 10¹⁰ per token over 96 layers at seq = 2048, a ~3% correction we can ignore at this precision.)

The backward pass costs roughly 2× the forward (the same matmuls appear twice: once for the weight gradients, once for the activation gradients), so forward + backward ≈ 3× forward = 1.05 × 10¹² FLOP per token.

3.4 Scale to 300 B tokens
1.05e12 × 3e11 = 3.15 × 10²³ FLOP
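Putting 3.3 and 3.4 together in code (matmul FLOPs only; attention scores and the sparse embedding lookup treated as negligible, as above):

```python
# Per-token FLOPs and total training cost, following sections 3.3-3.4.
d_model = 12288
n_layers = 96
vocab = 50_000
tokens = 3e11  # ~300 B training tokens

# Forward: 24·d_model² per layer per token, plus the dense embedding bound.
fwd_per_token = n_layers * 24 * d_model**2 + 2 * vocab * d_model

# Forward + backward ≈ 3× forward.
total = 3 * fwd_per_token * tokens
print(f"{total:.2e}")  # → 3.14e+23
```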


  4. Conclusion

Both the ultra-simple “6 N D” rule and the detailed per-layer count land at

    ≈ 3 × 10²³ floating-point operations

for the entire GPT-3 training run.


About MoonshotAI: Kimi K2 0905

Capabilities

ConversationReasoningCode GenerationAnalysisTool Use

Categories

TextCode

Specifications

Provider
MoonshotAI
Released
2025-09-04
Size
XLARGE
Context
262,144 tokens

