
Kimi Linear 48B A3B Instruct vs Grok 3

17 challenges

Why Kimi Linear 48B A3B Instruct?

8.2x more context (1.0M)
9 months newer (Nov 2025)

Why Grok 3?

No standout advantages
3 / 17

Estimate Complexity

reasoning·hard

Tests an AI's ability to make educated estimates based on technical knowledge

Kimi Linear 48B A3B Instruct

Below is a back-of-the-envelope estimate of the total FLOPs needed to train GPT-3 “175 B” (the largest 96-layer, 12288-d model with 175 billion parameters) from scratch on the full 300 B-token corpus, assuming the standard transformer training setup used by OpenAI.

Step 1 – FLOPs per forward+backward pass
A single transformer block has

  • 2 matrix-multiply operations per attention layer (QKᵀ and OV)
  • 2 matrix-multiply operations per MLP (Wi and Wo)
  • 1 extra attention output projection (optional, but small)
  • 1 residual add per block

For 96 layers and 12288 hidden units, each matrix multiply is roughly
(12288²) ≈ 1.5 × 10⁸ FLOPs.
So per block: 2 attention + 2 MLP ≈ 8 × 1.5 × 10⁸ ≈ 1.2 × 10⁹ FLOPs.
Add residual adds and layer norms (~10 % overhead) ⇒ ≈ 1.3 × 10⁹ FLOPs per block.

With 96 blocks, forward+backward per token ≈ 1.25 × 10¹¹ FLOPs ≈ 125 GFLOPs.
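
As a quick cross-check, the Step 1 arithmetic can be reproduced with a minimal Python sketch; every number in it (layer count, hidden size, the "8 matmuls per block" count, and the 10 % overhead) is taken directly from the text above, nothing else is assumed:

```python
# Reproduce the per-block / per-token arithmetic from Step 1 as stated above.
d_model = 12288           # hidden size
n_layers = 96             # transformer blocks
matmuls_per_block = 8     # "2 attention + 2 MLP" counted as 8 d^2 multiplies above
overhead = 1.10           # ~10% for residual adds and layer norms

flops_per_matmul = d_model ** 2                                     # ≈ 1.5e8
flops_per_block = matmuls_per_block * flops_per_matmul * overhead   # ≈ 1.3e9
flops_per_token = n_layers * flops_per_block                        # ≈ 1.3e11, i.e. ~125 GFLOPs

print(f"per matmul: {flops_per_matmul:.2e} FLOPs")
print(f"per block : {flops_per_block:.2e} FLOPs")
print(f"per token : {flops_per_token:.2e} FLOPs")
```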

Step 2 – Tokens processed per forward+backward step
The batch size is 3,200 sequences × 2,048 tokens = 6.55 million tokens.
Thus each SGD step consumes ≈ 6.6 × 10⁶ tokens.

Step 3 – Total SGD steps
300 billion tokens / 6.6 million tokens per step ≈ 45,500 steps.
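
Steps 2 and 3 are simple arithmetic on the quoted batch shape and corpus size; a minimal sketch, using only figures from the text:

```python
# Step 2: tokens per optimizer step, from the quoted batch shape.
sequences_per_batch = 3200
tokens_per_sequence = 2048
tokens_per_step = sequences_per_batch * tokens_per_sequence   # 6,553,600 ≈ 6.55e6

# Step 3: optimizer steps needed to cover the 300 B-token corpus once.
total_tokens = 300e9
steps = total_tokens / tokens_per_step   # ≈ 45,800 (≈ 45,500 if tokens/step is rounded to 6.6e6)

print(f"tokens per step: {tokens_per_step:,}")
print(f"optimizer steps: {steps:,.0f}")
```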

Step 4 – Total FLOPs
45,500 steps × 125 GFLOPs/step ≈ 5.7 × 10¹² FLOPs ≈ 5.7 petaflop-days.

Step 5 – Overhead for data loading, optimizer states, and communication

  • Adam: 8 extra copies of the parameter matrix (2 × 4 bytes per float32) → +50 %
  • Gradient communication and padding → +20 %
  • Data loading and tokenization → +10 %

Overall multiply by 1.8 ⇒ ≈ 1.0 × 10¹³ FLOPs ≈ 10 petaflop-days.

Step 6 – Round to one significant figure
≈ 10¹³ FLOPs, i.e. ten petaflop-days.

So a rough estimate is on the order of 10¹³ total floating-point operations to train GPT-3 175 B from scratch on 300 B tokens with the standard transformer architecture and training setup.

Grok 3

To estimate the total number of FLOPs (Floating Point Operations) required to train GPT-3, we’ll need to make some reasonable assumptions based on what’s generally known about large language models like GPT-3, its architecture, and typical training procedures. GPT-3, developed by OpenAI, is a massive model with 175 billion parameters, and while exact training details aren’t fully public, we can piece together an estimate using standard practices in machine learning.

Step 1: Understand the Key Components

The number of FLOPs to train a model depends on:

  • Number of parameters (P): GPT-3 has 175 billion parameters.
  • Training dataset size (D): The number of tokens the model is trained on.
  • Number of training steps (S): How many times the model processes the data (related to epochs and batch size).
  • FLOPs per parameter update: How many operations are performed per parameter per token.

Step 2: Estimate FLOPs per Forward and Backward Pass

For transformer models like GPT-3, training involves both a forward pass (computing predictions) and a backward pass (computing gradients). A common heuristic in the field is that:

  • A single forward pass through a dense neural network layer requires about 2 FLOPs per parameter per token (multiply and add for each weight).
  • The backward pass typically takes about twice as many FLOPs as the forward pass due to gradient computations, so roughly 4 FLOPs per parameter per token.
  • Total: ~6 FLOPs per parameter per token for one full training step (forward + backward).

For GPT-3 with 175 billion parameters, processing one token requires: \[ 6 \times 175 \times 10^9 = 1.05 \times 10^{12} \text{ FLOPs per token} \]
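
The 6-FLOPs-per-parameter-per-token heuristic is easy to sanity-check in a couple of lines of Python; the only inputs are the 175 B parameter count and the factor of 6 quoted above:

```python
# ~6 FLOPs per parameter per token: 2 for the forward pass, ~4 for the backward pass.
flops_per_param_per_token = 6
n_params = 175e9

flops_per_token = flops_per_param_per_token * n_params   # 1.05e12 FLOPs per token
print(f"{flops_per_token:.2e} FLOPs per token")
```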

Step 3: Estimate the Training Dataset Size

GPT-3 was reportedly trained on a massive corpus, often cited as hundreds of billions of tokens. Let’s assume 300 billion tokens, a plausible figure based on discussions about large language model datasets (e.g., Common Crawl, books, and other sources). This is the total number of tokens processed during training, factoring in how data is sampled over epochs.

Step 4: Calculate Total FLOPs for One Pass

If the model processes 300 billion tokens once, the total FLOPs would be: \[ 1.05 \times 10^{12} \text{ FLOPs/token} \times 300 \times 10^9 \text{ tokens} = 3.15 \times 10^{23} \text{ FLOPs} \] This assumes one full pass through the dataset, but training typically involves multiple passes or iterations over the data, adjusted by batch size and optimization.

Step 5: Adjust for Training Dynamics

Training isn’t a single pass—it involves multiple updates to converge. The number of training steps depends on:

  • Batch size (B): GPT-3 likely used a large batch size (e.g., 3.2 million tokens, as seen in some large-scale models).
  • Total tokens processed: For optimization, models process tokens multiple times. A rule of thumb is that total tokens processed ≈ 2–3× the dataset size, accounting for multiple epochs or sampling.

Let’s assume the model effectively processes 900 billion tokens (3× the dataset size) over training. Then: \[ 1.05 \times 10^{12} \text{ FLOPs/token} \times 900 \times 10^9 \text{ tokens} = 9.45 \times 10^{23} \text{ FLOPs} \]
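
Both the single-pass figure from Step 4 and the 3× adjustment above reduce to one multiplication each; a short sketch that reproduces both numbers:

```python
flops_per_token = 6 * 175e9     # 1.05e12 FLOPs per token, from Step 2
dataset_tokens = 300e9          # assumed corpus size, from Step 3

single_pass = flops_per_token * dataset_tokens    # 3.15e23 FLOPs (Step 4)
adjusted = flops_per_token * 3 * dataset_tokens   # 9.45e23 FLOPs (Step 5, 3x effective tokens)

print(f"single pass: {single_pass:.2e} FLOPs")
print(f"adjusted   : {adjusted:.2e} FLOPs")
```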

Step 6: Account for Model Architecture Specifics

GPT-3 is a sparse transformer in some variants (e.g., earlier GPT models used sparsity), but the 175B version is dense. The 6 FLOPs/parameter/token heuristic holds for dense models. We’re not factoring in optimizations like mixed-precision training (FP16), which reduces compute slightly but doesn’t change the order of magnitude significantly.

Step 7: Final Estimate

Based on this reasoning, the total FLOPs to train GPT-3 is approximately: \[ 9.45 \times 10^{23} \text{ FLOPs} \] or roughly \( 10^{24} \) FLOPs when rounded for simplicity.

Reasoning Check

  • Comparison to known models: Smaller models like BERT (110M parameters, ~10B tokens) require ~10^19 FLOPs. Scaling to GPT-3’s size (175B parameters, ~900B tokens) suggests a 10,000× increase, aligning with ~10^23–10^24 FLOPs.
  • Hardware context: Training GPT-3 took months on thousands of GPUs. A V100 GPU performs ~10^14 FLOPs/second; 10^24 FLOPs over 3 months (~10^7 seconds) with 10,000 GPUs is plausible.
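
To make the hardware check concrete, here is a small sketch that converts the rounded 10^24 FLOPs figure into wall-clock time on the fleet described above; the per-GPU throughput is the ~10^14 FLOPs/s V100 figure from the text, while the sustained-utilization factor is an assumption added purely for illustration:

```python
total_flops = 1e24        # rounded estimate from above
n_gpus = 10_000           # fleet size quoted above
peak_per_gpu = 1e14       # ~FLOPs/s for a V100, as cited above
utilization = 0.2         # assumed sustained fraction of peak (illustrative only)

seconds = total_flops / (n_gpus * peak_per_gpu * utilization)
days = seconds / 86_400
print(f"~{days:.0f} days of wall-clock training")   # on the order of weeks to a few months
```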

Conclusion

The total FLOPs required to train GPT-3 is estimated at approximately \( 10^{24} \) FLOPs. This is a rough estimate, but it aligns with the scale of compute expected for a model of this size and training complexity.

2 / 17

AI Board Game Logic

reasoning·medium

Tests an AI's ability to understand game rules and strategy

Kimi Linear 48B A3B Instruct

Grok 3

3 / 17

Math Misconception Test

reasoning·medium

Tests an AI's understanding of number representation

Kimi Linear 48B A3B Instruct

No, 9.11 is not greater than 9.9.

To compare them:

  • 9.11 is less than 9.50
  • 9.9 is the same as 9.90

So, 9.11 < 9.90, which means 9.11 is not greater than 9.9.
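
A two-line Python check confirms the same ordering; ordinary float literals are sufficient here, no special decimal handling is needed:

```python
print(9.11 > 9.9)    # False
print(9.11 < 9.90)   # True, since 9.90 is the same value as 9.9
```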

Grok 3

No, 9.11 is not greater than 9.9. In decimal comparison, 9.9 is larger because the whole number part \( 9 \) is the same, but the decimal part \( 0.9 \) is greater than \( 0.11 \). Numerically, \( 9.9 > 9.11 \).
