Z.AI: GLM 4.6 vs GPT-4.1 Nano

36 challenges

Why Z.AI: GLM 4.6?

6 months newer (Sep 2025)

Why GPT-4.1 Nano?

4.3x cheaper overall ($0.10/M in · $0.40/M out)
5.2x more context (1.0M tokens)

Estimate Complexity

reasoning·hard

Tests an AI's ability to make educated estimates based on technical knowledge

Z.AI: GLM 4.6

Of course. Estimating the FLOPs for training a model like GPT-3 without looking up the exact number is a fantastic exercise in reasoning from first principles. Here is a step-by-step breakdown of how to arrive at a very close estimate.

Step 1: Understand the Core Components of the Calculation

The total computational cost for training a neural network is primarily determined by three factors:

  1. N (Number of Parameters): The total number of trainable weights and biases in the model. For GPT-3, this is a well-known figure.
  2. C (Number of Training Tokens): The total amount of data the model processes, measured in tokens (sub-word pieces). This is the most difficult variable to estimate.
  3. K (FLOPs per Parameter per Token): A constant that represents the number of floating-point operations required to process a single token for every parameter in the model for a full training step (forward pass, backward pass, and optimizer update).

The total FLOPs can be approximated by the formula:

Total FLOPs ≈ K * N * C

Let's estimate each of these components.
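As a quick sanity check of the formula, here is a minimal sketch; the function and variable names are illustrative placeholders, not anything from an existing library:

```python
def estimate_training_flops(k: float, n_params: float, n_tokens: float) -> float:
    """Approximate total training compute as K * N * C."""
    return k * n_params * n_tokens
```

The next three steps fill in plausible values for K, N, and C.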


Step 2: Estimate the Number of Parameters (N)

This is the easiest part. The size of GPT-3's largest model is one of its most famous specifications.

  • N = 175 billion parameters
  • N = 175 x 10⁹

Step 3: Estimate the FLOPs per Parameter per Token (K)

Training involves a forward pass, a backward pass (to calculate gradients), and a parameter update. A common and well-accepted approximation in deep learning is:

  • Forward Pass: For each parameter, you perform a multiplication and an addition. This is roughly 2 FLOPs.
  • Backward Pass: Calculating the gradients is computationally more intensive. It's generally estimated to be about 2x the cost of the forward pass. So, this is roughly 4 FLOPs.
  • Parameter Update: The optimizer (like Adam) updates the weights. This also involves a multiply and an add for each parameter, which is another 2 FLOPs.

Adding these up: 2 (forward) + 4 (backward) + 2 (update) = 8 FLOPs.

However, a more standard and slightly more conservative estimate used in many research papers (like the Chinchilla scaling laws paper) is K = 6. This typically accounts for the forward pass (2 FLOPs) and the two main parts of the backward pass (2 FLOPs + 2 FLOPs), often bundling the optimizer's smaller cost into this figure. Using K=6 is a robust choice for an estimation of this scale.

  • K = 6
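The per-token accounting above, spelled out; the 2/4/2 split is this step's rough assumption, not a measured figure:

```python
forward, backward, update = 2, 4, 2   # rough FLOPs per parameter per token
print(forward + backward + update)    # -> 8; the calculation below uses the standard K = 6
```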

Step 4: Estimate the Number of Training Tokens (C)

This is the most challenging and crucial estimation step. We need to reason about the composition of the training dataset.

GPT-3 was trained on a mixture of sources: Common Crawl, WebText2, Books1, Books2, and Wikipedia. Let's try to build a plausible number.

  1. Wikipedia: The entire English Wikipedia has about 4 billion tokens. This is a small part of the total dataset.
  2. Books: The dataset is said to contain a large number of books. Let's imagine it's the equivalent of 1 million books. An average book might be 100,000 words. There are about 1.3 tokens per word.
    • Tokens per book ≈ 100,000 words * 1.3 tokens/word = 130,000 tokens
    • Total book tokens ≈ 1,000,000 books * 130,000 tokens/book = 130 billion tokens.
  3. Web-based Data (Common Crawl & WebText2): This is the largest component. Common Crawl is massive (petabytes of raw data), but it's heavily filtered for quality. The goal is to get a high-quality subset.
    • Let's think about the scale. The number of tokens in the web-based data must be significantly larger than the book corpus. A number in the hundreds of billions seems reasonable.
    • Could it be 200 billion? 300 billion? 500 billion?
    • A helpful sanity check comes from the Chinchilla scaling laws, which suggest a compute-optimal training data size of about 20 tokens per parameter. For GPT-3, this would be 20 * 175 billion = 3.5 trillion tokens. However, GPT-3 predates Chinchilla and is known to have been "under-trained", meaning it was trained on significantly fewer tokens than this optimal ratio would suggest.
    • This tells us the number should be well below 3.5 trillion. A number in the low hundreds of billions is much more likely.

Let's combine our estimates:

  • Wikipedia: ~4 billion
  • Books: ~130 billion
  • Web Data: Let's estimate this to be in the same ballpark as the books, but larger. Let's say ~170 billion.

Total Estimated Tokens (C) = 4 + 130 + 170 = 304 billion tokens

This is a very strong, defensible estimate. Let's round it for easier calculation.

  • C = 300 x 10⁹ tokens
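Reproducing the token arithmetic in a few lines; every corpus size here is the rough assumption made above:

```python
wikipedia = 4e9                       # ~4 billion tokens (assumed)
books = 1_000_000 * 100_000 * 1.3     # 1M books * 100k words/book * 1.3 tokens/word ≈ 130B
web = 170e9                           # Common Crawl + WebText2 guess
total_tokens = wikipedia + books + web
print(f"{total_tokens:.2e}")          # -> 3.04e+11, rounded to 300 billion above
```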

Step 5: Perform the Final Calculation

Now we plug our estimates into the formula:

Total FLOPs ≈ K * N * C
Total FLOPs ≈ 6 * (175 x 10⁹) * (300 x 10⁹)

Let's do the math:

  1. 6 * 175 = 1050
  2. 1050 * 300 = 315,000
  3. 10⁹ * 10⁹ = 10¹⁸

Total FLOPs ≈ 315,000 x 10¹⁸

To write this in standard scientific notation:

Total FLOPs ≈ 3.15 x 10²³
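A one-line check of the arithmetic, using the estimates derived in the previous steps:

```python
K, N, C = 6, 175e9, 300e9       # FLOPs/param/token, parameters, training tokens
print(f"{K * N * C:.2e}")       # -> 3.15e+23
```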

Final Estimate and Conclusion

Estimated Total FLOPs to train GPT-3: 3.15 x 10²³

To put this enormous number in perspective:

  • This is 315 ZettaFLOPs.
  • If you had a petaflop-scale (10¹⁵ FLOP/s) AI supercomputer running at 100% utilization, it would take 3.15 x 10²³ / 10¹⁵ = 3.15 x 10⁸ seconds, which is about 10 years. In reality, they used thousands of GPUs for several weeks.
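A quick check of the wall-clock figure in the last bullet, assuming an idealized machine that sustains 10¹⁵ FLOP/s at full utilization:

```python
total_flops = 3.15e23
machine_flops_per_s = 1e15                       # hypothetical 1 PFLOP/s, 100% utilization
seconds = total_flops / machine_flops_per_s
years = seconds / (365 * 24 * 3600)
print(f"{seconds:.2e} s ≈ {years:.0f} years")    # -> 3.15e+08 s ≈ 10 years
```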

This estimation, built from first principles and reasonable assumptions about the model and its dataset, arrives at a number that is remarkably close to the official figure cited in the original GPT-3 paper, which is ~3.14 x 10²³ FLOPs. The key was correctly identifying the parameters, using the standard 6*N*C formula, and making a logical, grounded estimate of the training data size.

GPT-4.1 Nano

Let's estimate the total number of floating-point operations (FLOPs) required to train GPT-3, based on what is known about its size, training data, and typical training procedures. Here's a step-by-step reasoning process:

1. Understand the model size and parameters:

  • GPT-3 has approximately 175 billion parameters.
  • Parameters are stored in FP32 or FP16 precision; often training uses mixed precision to reduce FLOPs slightly.

2. Determine the training dataset size:

  • GPT-3 was trained on hundreds of billions of tokens. Reports suggest around 300 billion tokens.
  • For simplicity, assume 300 billion tokens.

3. Estimate the number of training steps:

  • To process 300 billion tokens, depending on batch size:

    Suppose:

    • Batch size = 1 million tokens (a plausible large batch size for such training, approximating high-performance hardware)

    Number of steps = Total tokens / Batch size:

    \( \text{Steps} = \frac{300 \times 10^9}{10^6} = 300{,}000 \)

    But in practice, the batch size per GPU (or per node) is smaller—say 1,000 tokens per batch, with multiple GPUs. The total number of steps is roughly on the order of 300,000 to 500,000.

    To be conservative, take approximately 300,000 steps for total training.
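A minimal sketch of that step count; the 1M-token global batch is the assumption stated above:

```python
total_tokens = 300e9
tokens_per_batch = 1e6                                    # assumed global batch size
print(f"{total_tokens / tokens_per_batch:,.0f} steps")    # -> 300,000 steps
```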

4. FLOPs per forward and backward pass:

  • For each token, the transformer computes several matrix multiplications.
  • Typically, the dominant cost per token in a transformer model scales with the model size.

A common approximation for FLOPs per token for training a transformer is:

\( \text{FLOPs per token} \approx 2 \times \text{number of parameters} \times \text{sequence length} \)

But this depends on the specifics of the architecture. Historical estimates suggest:

  • For large transformer models, about 6 FLOPs per parameter per token are required for training (this accounts for both forward and backward passes). This is a rough estimate from various literature.

Thus,

\( \text{FLOPs per token} \approx 6 \times \text{parameters} \)

Plugging in GPT-3's parameters:

\( 6 \times 175 \times 10^9 = 1.05 \times 10^{12} \text{ FLOPs per token} \)

Note: This seems high; more refined estimates suggest the overall cost per token is roughly on the order of a few trillion FLOPs, depending on the implementation.

Alternatively, a typical approximation used in literature (e.g., OpenAI's blog) is about 3 to 5 FLOPs per parameter per token.

Let's take 3 FLOPs per parameter per token for a more conservative estimate:

\( 3 \times 175 \times 10^9 = 525 \times 10^9 = 5.25 \times 10^{11} \text{ FLOPs per token} \)

Similarly, for total FLOPs:

\( \text{Total FLOPs} \approx \text{number of tokens} \times \text{FLOPs per token} \)

Using 300 billion tokens and 5.25 × 10^{11} FLOPs per token:

\( \text{Total FLOPs} \approx (300 \times 10^9) \times (5.25 \times 10^{11}) = (300 \times 5.25) \times 10^{20} = 1575 \times 10^{20} = 1.575 \times 10^{23} \)
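The same arithmetic reproduced directly, under the 3-FLOPs-per-parameter-per-token assumption used above:

```python
params = 175e9
flops_per_token = 3 * params                        # ≈ 5.25e11
total_tokens = 300e9
print(f"{total_tokens * flops_per_token:.3e}")      # -> 1.575e+23
```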

5. Adjust for training efficiency and hardware:

  • Not every FLOP is used efficiently.
  • Typically, the total number of FLOPs gives an upper bound.
  • FLOPs estimates often cover the entire training process, including multiple epochs; in this case, because the data volume is so large, training amounts to roughly a single pass (or a few passes) over the data.

6. Final estimate:

Given the approximations, a rough but reasonable total FLOPs estimate for training GPT-3:

\( \boxed{\text{Total FLOPs} \sim 10^{23} \text{ FLOPs}} \)

Summary:

  • Parameters: 175 billion
  • Tokens processed: ~300 billion
  • FLOPs per token: about 5 × 10^{11}
  • Total FLOPs: approximately 10^{23} FLOPs

Answer: The total number of FLOPs required to train GPT-3 is roughly on the order of 10^23 FLOPs. This estimate considers GPT-3's 175 billion parameters, training on approximately 300 billion tokens, and accounts for the computational cost per token during training.


AI Board Game Logic

reasoning·medium

Tests an AI's ability to understand game rules and strategy

Z.AI: GLM 4.6

GPT-4.1 Nano


Logic Puzzle

reasoning·medium

Tests an AI's ability to solve a simple but potentially confusing logic puzzle

Z.AI: GLM 4.6

Sally has 1 sister.

Here's why:

Each brother has 2 sisters, so the family has exactly 2 girls: it is made up of 3 brothers and 2 sisters. Sally is one of those sisters, and the other sister is also a sister to all three brothers.
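A minimal sketch of that reasoning, assuming the standard phrasing of the puzzle (Sally has 3 brothers, and each brother has 2 sisters):

```python
sisters_per_brother = 2                 # every brother shares the same set of sisters
girls_in_family = sisters_per_brother   # so the family has exactly 2 girls
sallys_sisters = girls_in_family - 1    # exclude Sally herself
print(sallys_sisters)                   # -> 1
```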

GPT-4.1 Nano

Sally has 2 sisters.

Here's the reasoning:

  • Sally has 3 brothers.
  • Each brother has 2 sisters.
  • Since Sally is one of those sisters, the other sister is the other girl the brothers share.

Therefore, Sally has 2 sisters in total.
