
Kimi Linear 48B A3B Instruct vs OpenAI o3

33 challenges

Why Kimi Linear 48B A3B Instruct?

58.3x cheaper overall ($0.30/M in · $0.60/M out)
7 months newer (Nov 2025)

Why OpenAI o3?

No standout advantages
1 / 33

Estimate Complexity

reasoning · hard

Tests an AI's ability to make educated estimates based on technical knowledge

Kimi Linear 48B A3B Instruct

Below is a back-of-the-envelope estimate of the total FLOPs needed to train GPT-3 “175 B” (the largest 96-layer, 12288-d model with 175 billion parameters) from scratch on the full 300 B-token corpus, assuming the standard transformer training setup used by OpenAI.

Step 1 – FLOPs per forward+backward pass
A single transformer block has

  • 2 matrix-multiply operations per attention layer (QKᵀ and OV)
  • 2 matrix-multiply operations per MLP (Wi and Wo)
  • 1 extra attention output projection (optional, but small)
  • 1 residual add per block

For 96 layers and 12288 hidden units, each matrix multiply is roughly
(12288²) ≈ 1.5 × 10⁸ FLOPs.
So per block: 2 attention + 2 MLP ≈ 8 × 1.5 × 10⁸ ≈ 1.2 × 10⁹ FLOPs.
Add residual adds and layer norms (~10 % overhead) ⇒ ≈ 1.3 × 10⁹ FLOPs per block.

With 96 blocks, forward+backward per token ≈ 1.25 × 10¹¹ FLOPs ≈ 125 GFLOPs.

Step 2 – Tokens processed per forward+backward step
The batch size is 3,200 sequences × 2,048 tokens = 6.55 million tokens.
Thus each SGD step consumes ≈ 6.6 × 10⁶ tokens.

Step 3 – Total SGD steps
300 billion tokens / 6.6 million tokens per step ≈ 45,500 steps.

Step 4 – Total FLOPs
45,500 steps × 125 GFLOPs/step ≈ 5.7 × 10¹² FLOPs ≈ 5.7 petaflop-days.

Step 5 – Overhead for data loading, optimizer states, and communication

  • Adam: 8 extra copies of the parameter matrix (2 × 4 bytes per float32) → +50 %
  • Gradient communication and padding → +20 %
  • Data loading and tokenization → +10 %

Overall multiply by 1.8 ⇒ ≈ 1.0 × 10¹³ FLOPs ≈ 10 petaflop-days.

Step 6 – Round to one significant figure
≈ 10¹³ FLOPs, i.e. ten petaflop-days.

So a rough estimate is on the order of 10¹³ total floating-point operations to train GPT-3 175 B from scratch on 300 B tokens with the standard transformer architecture and training setup.
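
For readers who want to re-run the token-budget arithmetic in Steps 2–4 above, here is a minimal Python sketch (an editorial addition, not part of the response; the batch size of 3,200 sequences × 2,048 tokens and the 300 B-token corpus are taken as stated):

```python
# Minimal sketch: re-running the token-budget arithmetic of Steps 2-4
# with the batch size and corpus size quoted in the response above.

tokens_per_step = 3_200 * 2_048           # ~6.55 million tokens per SGD step
total_tokens = 300e9                       # 300 B training tokens
steps = total_tokens / tokens_per_step     # ~45,800 steps, in the same ballpark as the ~45,500 quoted

print(f"{tokens_per_step:,} tokens/step, about {steps:,.0f} steps")
```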

OpenAI o3

Below is a simple "back-of-the-envelope" estimate that uses only a few public facts about GPT-3 and very generic rules of thumb for counting operations.

Step 1. Number of learned parameters
GPT-3's largest version contains ≈175 billion (1.75 × 10¹¹) trainable parameters.

Step 2. How much data was seen during training
OpenAI reports that the model was trained on about 300 billion tokens (3 × 10¹¹).

Step 3. How many floating-point operations does one parameter cost per token?
A practical rule of thumb for Transformer language models is
 Training FLOPs ≈ 6 × (# parameters) × (# training tokens)

Where does the factor 6 come from?

• Forward pass: every parameter is involved in one multiply-accumulate → 2 FLOPs
• Back-prop gradient computation: roughly another 2 FLOPs
• Parameter-update (optimizer) and miscellaneous overhead: ≈2 FLOPs
Total ≈6 FLOPs per parameter per token.

Step 4. Plug in the numbers
Total FLOPs = 6 × (1.75 × 10¹¹) × (3 × 10¹¹)
      = 6 × 5.25 × 10²²
      ≈ 3.15 × 10²³ FLOPs.

Step 5. Express the answer in more familiar units
3.15 × 10²³ ≈ 315 zettaFLOPs (1 ZFLOP = 10²¹ FLOPs).

Step 6. Sanity check with hardware-time intuition
An NVIDIA A100 delivers ≈ 312 teraFLOP/s (0.312 PFLOP/s) in mixed precision.
Wall-clock time on a single A100:
 3.15 × 10²³ / 3.12 × 10¹⁴ ≈ 1.0 × 10⁹ s ≈ 32 years.
Running on 1,000 A100s in parallel would take ≈12 days, which is the right ballpark for a large industrial training run.

Final estimate
Training GPT-3 (175 B parameters, 300 B tokens) requires on the order of 3 × 10²³ floating-point operations, i.e. roughly 300 zettaFLOPs.
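
As a quick way to reproduce Steps 4–6, here is a minimal Python sketch (an editorial addition, not part of the response above) that plugs the same quoted numbers into the 6 × parameters × tokens rule and the A100 timing check:

```python
# Minimal sketch: reproducing the 6 * params * tokens rule of thumb
# and the A100 wall-clock sanity check from the response above.

params = 175e9              # GPT-3 parameter count
tokens = 300e9              # training tokens
flops = 6 * params * tokens
print(f"Training FLOPs ~ {flops:.2e}")           # ~3.15e23

a100_flops_per_s = 312e12                        # A100 mixed-precision peak, FLOP/s
seconds_one_gpu = flops / a100_flops_per_s
years_one_gpu = seconds_one_gpu / (3600 * 24 * 365)
days_1000_gpus = seconds_one_gpu / (3600 * 24) / 1_000
print(f"~{years_one_gpu:.0f} years on one A100, ~{days_1000_gpus:.0f} days on 1,000 A100s")
```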

2 / 33

AI Board Game Logic

reasoning · medium

Tests an AI's ability to understand game rules and strategy

Kimi Linear 48B A3B Instruct

OpenAI o3

3 / 33

Logic Puzzle

reasoning · medium

Tests an AI's ability to solve a simple but potentially confusing logic puzzle

Kimi Linear 48B A3B Instruct

Sally has 2 sisters.

Here's the reasoning:
Sally is one of the sisters. Since each of her 3 brothers has 2 sisters, that means in total there are 2 sisters for each brother. Since Sally is one of them, the other sister must be another girl.

OpenAI o3

Let
• S = number of sisters in the family (girls)
• B = number of brothers in the family (boys)

Given

  1. Sally has 3 brothers ⇒ B = 3.
  2. Each brother has 2 sisters ⇒ every boy sees exactly 2 girls in the sibling group.
    Therefore S = 2.

Since Sally herself is one of those 2 sisters, the number of sisters Sally has (other than herself) is

2 – 1 = 1.

Sally has 1 sister.
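
For completeness, a minimal Python sketch (an editorial addition; the brute-force search is not how either response solves it) that checks the sibling count by enumerating small family sizes:

```python
# Minimal sketch: brute-force check of the sibling puzzle.
# Find family compositions consistent with "Sally has 3 brothers"
# and "each brother has 2 sisters", then count Sally's sisters.

for girls in range(1, 10):
    for boys in range(1, 10):
        sally_brothers = boys        # Sally is a girl, so every boy is her brother
        brother_sisters = girls      # each boy's sisters are all the girls
        if sally_brothers == 3 and brother_sisters == 2:
            print(f"{girls} girls, {boys} boys -> Sally has {girls - 1} sister(s)")

# Output: 2 girls, 3 boys -> Sally has 1 sister(s)
```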
