
GPT-4.1 vs Kimi Linear 48B A3B Instruct

26 challenges

Why GPT-4.1?

No standout advantages

Why Kimi Linear 48B A3B Instruct?

11.7x cheaper overall: $0.30/M in · $0.60/M out
7 months newer: Nov 2025

Estimate Complexity

reasoning·hard

Tests an AI's ability to make educated estimates based on technical knowledge

GPT-4.1

Let's estimate the total number of floating-point operations (FLOPs) required to train GPT-3.

Step 1: Recall GPT-3 scale

From published sources (I won't look this up now, as instructed):

  • GPT-3 (175B) has about 175 billion parameters.
  • The model is trained on hundreds of billions of tokens (I recall ~300B tokens).

Step 2: FLOPs per token per parameter

When training a transformer, each token is processed in one forward pass and one backward pass, and the combined cost is about 6 FLOPs per parameter per token (a number I remember from the general literature).

To be more precise, the convention is: the forward pass costs about 2 FLOPs per parameter per token, and the backward pass is generally about twice as expensive as the forward pass, so training one token costs roughly 6 FLOPs per parameter. Published estimates vary (2, 3, 6, or more, depending on what is counted: optimizer steps, auxiliary computations, etc.), but 6 is the standard choice for this kind of estimate.

Let's use 6 FLOPs/parameter/token (a commonly cited "rule of thumb").

Step 3: Compute FLOPs

Formula

\[ \text{Total FLOPs} = \#\text{params} \times \#\text{tokens} \times \text{FLOPs per param per token} \]

Plug in numbers:

  • #params ≈ 175 × 10⁹
  • #tokens ≈ 300 × 10⁹
  • FLOPs per param per token ≈ 6

\[ \text{Total FLOPs} = 175 \times 10^9 \times 300 \times 10^9 \times 6 \]

Let's multiply:

  • \(175 \times 300 = 52{,}500\)
  • \(52{,}500 \times 6 = 315{,}000\)
  • \(10^9 \times 10^9 = 10^{18}\)

So,

\[ \text{Total FLOPs} = 315{,}000 \times 10^{18} = 3.15 \times 10^{23} \text{ FLOPs} \]

Step 4: Order of Magnitude & Re-check

People sometimes cite \(3 \times 10^{23}\) FLOPs for GPT-3's training compute, so our estimate is in the right ballpark.

Step 5: Summary Sentence

Estimate:
It would take approximately \(3 \times 10^{23}\) FLOPs to train GPT-3 (175B parameters, 300B tokens, 6 FLOPs/param/token).

Assumptions:

  • 175B parameters
  • 300B tokens
  • 6 FLOPs per parameter per token

In summary:

Estimated total training compute for GPT-3: \(\boxed{\sim 3 \times 10^{23}}\) FLOPs
(based on 175B parameters, 300B tokens, 6 FLOPs/parameter/token)
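
For readers who want to reproduce the arithmetic, here is a minimal Python sketch (not from either model's answer) that plugs in the quoted assumptions of 175B parameters, 300B tokens, and 6 FLOPs per parameter per token:

```python
# Back-of-the-envelope check of the training-compute estimate above.
params = 175e9              # 175B parameters (quoted assumption)
tokens = 300e9              # 300B training tokens (quoted assumption)
flops_per_param_token = 6   # rule of thumb: ~6 FLOPs per parameter per token

total_flops = params * tokens * flops_per_param_token
print(f"{total_flops:.2e} FLOPs")  # 3.15e+23 FLOPs
```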

Kimi Linear 48B A3B Instruct

Below is a back-of-the-envelope estimate of the total FLOPs needed to train GPT-3 “175 B” (the largest 96-layer, 12288-d model with 175 billion parameters) from scratch on the full 300 B-token corpus, assuming the standard transformer training setup used by OpenAI.

Step 1 – FLOPs per forward+backward pass
A single transformer block has

  • 2 matrix-multiply operations per attention layer (QKᵀ and OV)
  • 2 matrix-multiply operations per MLP (Wi and Wo)
  • 1 extra attention output projection (optional, but small)
  • 1 residual add per block

For 96 layers and 12288 hidden units, each matrix multiply is roughly
(12288²) ≈ 1.5 × 10⁸ FLOPs.
So per block: 2 attention + 2 MLP ≈ 8 × 1.5 × 10⁸ ≈ 1.2 × 10⁹ FLOPs.
Add residual adds and layer norms (~10 % overhead) ⇒ ≈ 1.3 × 10⁹ FLOPs per block.

With 96 blocks, forward+backward per token ≈ 1.25 × 10¹¹ FLOPs ≈ 125 GFLOPs.

Step 2 – Tokens processed per forward+backward step
The batch size is 3,200 sequences × 2,048 tokens = 6.55 million tokens.
Thus each SGD step consumes ≈ 6.6 × 10⁶ tokens.

Step 3 – Total SGD steps
300 billion tokens / 6.6 million tokens per step ≈ 45,500 steps.

Step 4 – Total FLOPs
45,500 steps × 125 GFLOPs/step ≈ 5.7 × 10¹² FLOPs ≈ 5.7 petaflop-days.

Step 5 – Overhead for data loading, optimizer states, and communication

  • Adam: 8 extra copies of the parameter matrix (2 × 4 bytes per float32) → +50 %
  • Gradient communication and padding → +20 %
  • Data loading and tokenization → +10 %

Overall multiply by 1.8 ⇒ ≈ 1.0 × 10¹³ FLOPs ≈ 10 petaflop-days.

Step 6 – Round to one significant figure
≈ 10¹³ FLOPs, i.e. ten petaflop-days.

So a rough estimate is on the order of 10¹³ total floating-point operations to train GPT-3 175 B from scratch on 300 B tokens with the standard transformer architecture and training setup.
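
A minimal Python sketch (again not part of the model's answer) that re-runs the per-token and per-step figures from Steps 1–3, taking the quoted 96 layers, 12288 hidden size, roughly d² operations per matrix multiply, and 3,200 × 2,048-token batches as given:

```python
# Re-derive the quoted per-token and per-step figures (Steps 1-3 above).
d_model = 12288             # hidden size (quoted)
n_layers = 96               # transformer blocks (quoted)
matmuls_per_block = 8       # 2 attention + 2 MLP multiplies, counted as ~d^2 each (Step 1)
overhead = 1.1              # ~10% extra for residual adds and layer norms

flops_per_block = matmuls_per_block * d_model**2 * overhead  # ~1.3e9 per block per token
flops_per_token = n_layers * flops_per_block                 # ~1.3e11, i.e. ~125 GFLOPs

tokens_per_step = 3200 * 2048                                # ~6.55e6 tokens per batch
steps = 300e9 / tokens_per_step                              # ~45,800 optimizer steps

print(f"{flops_per_token:.2e} FLOPs/token, {tokens_per_step:.2e} tokens/step, {steps:.0f} steps")
```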


AI Board Game Logic

reasoning·medium

Tests an AI's ability to understand game rules and strategy

GPT-4.1

Kimi Linear 48B A3B Instruct


Logic Puzzle

reasoning·medium

Tests an AI's ability to solve a simple but potentially confusing logic puzzle

GPT-4.1

Let's break down the problem:

  • Sally is a girl.
  • She has 3 brothers.
  • Each brother has 2 sisters.

Let's name the siblings:

  • Sally (girl)
  • Brother 1
  • Brother 2
  • Brother 3

Now, the key: Each brother has 2 sisters.

Since Sally is one of the sisters, let's see if there are more:

Suppose there is only one girl (Sally) in the family.

  • Then each brother would have only 1 sister: Sally.
  • But the problem says they have 2 sisters each.

Thus, Sally must have one sister (since the brothers must have 2 sisters). Let's call this other sister "Anna".

So the siblings are:

  • Sally (girl)
  • Anna (girl)
  • Brother 1
  • Brother 2
  • Brother 3

Now, each brother has 2 sisters: Sally and Anna.

So, how many sisters does Sally have?

  • Sally's sisters = 1 (Anna).

Final Answer:

Sally has 1 sister.
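
A tiny Python sketch (using the hypothetical sister name from the walkthrough) that checks the count:

```python
# Enumerate the family described above: Sally, one more girl ("Anna"), and 3 brothers.
girls = ["Sally", "Anna"]    # "Anna" is the placeholder name used above
boys = ["Brother 1", "Brother 2", "Brother 3"]

# Each brother's sisters are all the girls in the family.
sisters_per_brother = len(girls)                      # 2, matching the puzzle statement
# Sally's sisters are every girl except Sally herself.
sallys_sisters = [g for g in girls if g != "Sally"]   # ["Anna"]

print(sisters_per_brother, len(sallys_sisters))       # 2 1
```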

Kimi Linear 48B A3B Instruct

Sally has 2 sisters.

Here's the reasoning:
Sally is one of the sisters. Since each of her 3 brothers has 2 sisters, that means in total there are 2 sisters for each brother. Since Sally is one of them, the other sister must be another girl.
