
Aurora Alpha vs GPT-4.1 Nano

39 challenges

Why Aurora Alpha?

10 months newer (Feb 2026)

Why GPT-4.1 Nano?

8.2x more context (1.0M tokens)

Estimate Complexity

reasoning·hard

Tests an AI's ability to make educated estimates based on technical knowledge

Aurora Alpha

Step‑by‑step back‑of‑the‑envelope estimate

Below is a simple calculation that uses only publicly‑known high‑level facts about GPT‑3 (size of the model, length of the training data, and the basic arithmetic cost of a Transformer layer).
All numbers are rounded to make the math easy to follow; the goal is to get an order‑of‑magnitude figure rather than an exact count.


1. What we know about GPT‑3

| Property | Approximate value |
| --- | --- |
| Number of parameters (the “175 B” model) | 1.75 × 10¹¹ |
| Hidden dimension d (the size of each token representation) | ≈ 12 288 |
| Number of Transformer layers L | 96 |
| Feed‑forward inner dimension d_ff (usually 4 × d) | ≈ 49 152 |
| Size of the training corpus (tokens) | ≈ 3 × 10¹¹ tokens (≈ 300 B) |
Training uses Adam‑style optimizer, mixed‑precision, etc. – we ignore those details for the FLOP count.

These numbers are taken from the model card and the paper; they are the only “hard” data we need.


2. FLOPs per token for a single Transformer layer

A standard Transformer layer consists of two main sub‑blocks:

| Sub‑block | Main operations (per token) | Approx. FLOPs |
| --- | --- | --- |
| Self‑attention (Q, K, V projections + attention scores + weighted sum) | 3 × d × d (projections) + d × d (QKᵀ) + d × d (AV) | ≈ 5 × d² |
| Feed‑forward (two linear layers, activation in between) | d × d_ff (first linear) + d_ff × d (second linear), with d_ff = 4 × d | ≈ 8 × d² |

Summing them gives roughly

\[ \text{FLOPs per token per layer} \;\approx\; (5+8)\,d^{2} = 13\,d^{2}. \]

Why the factor 13?

  • The three projection matrices (Q, K, V) each cost d × d → 3 d².
  • Computing the attention scores (QKᵀ) costs d × d.
  • Multiplying the scores by V costs another d × d.
  • The two feed‑forward linear layers each cost 4 d² (because the inner dimension is 4 × d), giving 8 d² total.

3. Plug in the hidden dimension

\[ d = 12{,}288 \quad\Rightarrow\quad d^{2} \approx 1.51\times10^{8}. \]

\[ 13\,d^{2} \approx 13 \times 1.51\times10^{8} \approx 1.96\times10^{9}\;\text{FLOPs per token per layer}. \]


4. Include all layers

\[ \text{FLOPs per token (forward pass)} = L \times 13\,d^{2} \approx 96 \times 1.96\times10^{9} \approx 1.88\times10^{11}. \]

So a single forward pass of one token through the full 96‑layer model costs ≈ 2 × 10¹¹ FLOPs.
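
A minimal Python sketch of the arithmetic in sections 2–4 (it uses the same 13 d² simplification, so the numbers are order‑of‑magnitude only):

```python
# Per-token forward-pass FLOPs for GPT-3, using the simplified 13*d^2 per-layer
# cost from above (real kernels add layer-norm, biases, softmax, etc.).
d = 12_288        # hidden dimension
num_layers = 96   # Transformer layers

attention_flops = 5 * d**2       # Q/K/V projections + QK^T scores + weighted sum of V
feed_forward_flops = 8 * d**2    # two linear layers with inner dimension 4*d
per_layer = attention_flops + feed_forward_flops   # 13*d^2 ≈ 1.96e9
per_token_forward = num_layers * per_layer         # ≈ 1.88e11

print(f"per layer:  {per_layer:.2e} FLOPs/token")
print(f"full model: {per_token_forward:.2e} FLOPs/token (forward pass)")
```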


5. Account for the whole training corpus

\[ \text{Tokens processed} = 3\times10^{11}. \]

\[ \text{Forward FLOPs total} = 3\times10^{11} \times 1.88\times10^{11} \approx 5.6\times10^{22}. \]


6. Add the backward pass

During training we must compute gradients. A common rule of thumb is that the backward pass costs about 2 × the forward pass, so the total compute (forward + backward) is roughly 3 × the forward cost.

\[ \text{Total training FLOPs} \approx 3 \times 5.6\times10^{22} \approx 1.7\times10^{23}. \]
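
Continuing the same sketch, the totals in sections 5 and 6 follow directly:

```python
# Total training FLOPs: forward cost over the corpus, times 3 for the backward pass.
per_token_forward = 1.88e11   # forward-pass FLOPs per token (from section 4)
tokens = 3e11                 # ≈ 300 B training tokens

forward_total = tokens * per_token_forward   # ≈ 5.6e22
training_total = 3 * forward_total           # backward ≈ 2x forward, so 3x in total

print(f"forward over corpus: {forward_total:.1e} FLOPs")
print(f"total training:      {training_total:.1e} FLOPs")   # ≈ 1.7e23
```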


7. Result (order‑of‑magnitude)

\[ \boxed{\text{Total training FLOPs} \approx \text{a few} \times 10^{23}} \]

In words: training the 175‑billion‑parameter GPT‑3 model required on the order of a hundred sextillion floating‑point operations.

This figure is consistent with the “exascale” compute budgets that have been reported for the largest language‑model training runs.


8. Sources of uncertainty

| Source of error | Effect on the estimate |
| --- | --- |
| Exact token count (the corpus size is reported as 300 B, but the actual number of training steps may be slightly higher) | ± 10 % |
| The factor 13 for FLOPs per layer is a simplification (real implementations have extra ops for layer‑norm, bias terms, dropout, etc.) | + ~20 % |
| Mixed‑precision training reduces FLOPs for some operations, but the count above assumes full‑precision arithmetic | − ~10 % |
| Overhead from optimizer state updates, gradient accumulation, checkpointing, etc. | + ~10 % |

Even allowing for these variations, the estimate stays within 10²³ – 10²⁴ FLOPs, i.e. on the order of 10⁵–10⁶ exa‑FLOPs.
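
Applying those rough correction factors to the 1.7 × 10²³ baseline illustrates the claim (the percentages are the table's rough figures, not measured values):

```python
# Crude sensitivity check: stack the correction factors from the table above
# onto the ~1.7e23 baseline. Percentages are the rough figures quoted, not measurements.
baseline = 1.7e23

low = baseline * 0.9 * 0.9           # token count -10%, mixed precision -10%
high = baseline * 1.1 * 1.2 * 1.1    # token count +10%, extra ops +20%, overhead +10%

print(f"low:  {low:.1e} FLOPs")    # ≈ 1.4e23
print(f"high: {high:.1e} FLOPs")   # ≈ 2.5e23 (still ~10^23)
```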


Bottom line: Using only the model’s size, the number of layers, and the token count, a straightforward arithmetic model yields a total training compute on the order of 10²³ floating‑point operations for GPT‑3. This is the magnitude of compute that modern “large‑scale” language models demand.

GPT-4.1 Nano

Let's estimate the total number of floating-point operations (FLOPs) required to train GPT-3, based on what is known about its size, training data, and typical training procedures. Here's a step-by-step reasoning process:

1. Understand the model size and parameters:

  • GPT-3 has approximately 175 billion parameters.
  • Parameters are stored in FP32 or FP16 precision; often training uses mixed precision to reduce FLOPs slightly.

2. Determine the training dataset size:

  • GPT-3 was trained on hundreds of billions of tokens. Reports suggest around 300 billion tokens.
  • For simplicity, assume 300 billion tokens.

3. Estimate the number of training steps:

  • To process 300 billion tokens, depending on batch size:

    Suppose:

    • Batch size = 1 million tokens (a plausible large batch size for such training, approximating high-performance hardware)

    Number of steps = Total tokens / Batch size:

    \( \text{Steps} = \frac{300 \times 10^{9}}{10^{6}} = 300{,}000 \)

    But in practice, the batch size per GPU (or per node) is smaller—say 1,000 tokens per batch, with multiple GPUs. The total number of steps is roughly on the order of 300,000 to 500,000.

    To be conservative, take approximately 300,000 steps for total training.
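
A small Python sketch of the step-count arithmetic under the batch-size assumptions above (both batch sizes are illustrative assumptions, not reported values):

```python
# Number of optimizer steps needed to see ~300 B tokens, for a few assumed
# global batch sizes (tokens per step). These batch sizes are illustrative only.
total_tokens = 300e9

for batch_tokens in (1_000_000, 600_000):   # 1 M tokens/step and a smaller global batch
    steps = total_tokens / batch_tokens
    print(f"{batch_tokens:>9,} tokens/step -> {steps:,.0f} steps")
# 1,000,000 tokens/step -> 300,000 steps; 600,000 -> 500,000 steps
```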

4. FLOPs per forward and backward pass:

  • For each token, the transformer computes several matrix multiplications.
  • Typically, the dominant cost per token in a transformer model scales with the model size.

A common approximation for FLOPs per token for training a transformer is:

\( \text{FLOPs per token} \approx 2 \times \text{number of parameters} \times \text{sequence length} \)

But this depends on the specifics of the architecture. Historical estimates suggest:

  • For large transformer models, about 6 FLOPs per parameter per token are required for training (this accounts for both forward and backward passes). This is a rough estimate from various literature.

Thus,

\( \text{FLOPs per token} \approx 6 \times \text{parameters} \)

Plugging in GPT-3's parameters:

\( 6 \times 175 \times 10^{9} = 1.05 \times 10^{12}\ \text{FLOPs per token} \)

Note: This seems high; more refined estimates put the overall per-token training cost at roughly a few trillion FLOPs, depending on the implementation.

Alternatively, a typical approximation used in literature (e.g., OpenAI's blog) is about 3 to 5 FLOPs per parameter per token.

Let's take 3 FLOPs per parameter per token for a more conservative estimate:

\( 3 \times 175 \times 10^{9} = 525 \times 10^{9} = 5.25 \times 10^{11}\ \text{FLOPs per token} \)

Similarly, for total FLOPs:

\( \text{Total FLOPs} \approx \text{number of tokens} \times \text{FLOPs per token} \)

Using 300 billion tokens and 5.25 × 10^{11} FLOPs per token:

\( \text{Total FLOPs} \approx 300 \times 10^{9} \times 5.25 \times 10^{11} = (300 \times 5.25) \times 10^{20} = 1575 \times 10^{20} = 1.575 \times 10^{23} \)
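
As a quick check of this arithmetic, both per-parameter rules of thumb quoted above (6 FLOPs per parameter per token for forward + backward, and the more conservative 3 used here) can be evaluated directly:

```python
# Check the per-parameter rule-of-thumb totals quoted above.
params = 175e9   # GPT-3 parameters
tokens = 300e9   # training tokens

for flops_per_param_token in (6, 3):
    total = flops_per_param_token * params * tokens
    print(f"{flops_per_param_token} FLOPs/param/token -> {total:.3e} total FLOPs")
# 6 -> 3.150e+23, 3 -> 1.575e+23: both on the order of 10^23
```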

5. Adjust for training efficiency and hardware:

  • Not every FLOP is used efficiently.
  • Typically, the total number of FLOPs gives an upper bound.
  • FLOPs estimates sometimes cover multiple epochs, but because the data volume here is so large, training amounts to roughly a single pass (or at most a few passes) over the data.

6. Final estimate:

Given the approximations, a rough but reasonable total FLOPs estimate for training GPT-3:

\( \boxed{\ \text{Total FLOPs} \sim 10^{23}\ \text{FLOPs}\ } \)

Summary:

  • Parameters: 175 billion
  • Tokens processed: ~300 billion
  • FLOPs per token: about 5 × 10^{11}
  • Total FLOPs: approximately 10^{23} FLOPs

Answer: The total number of FLOPs required to train GPT-3 is roughly on the order of 10^23 FLOPs. This estimate considers GPT-3's 175 billion parameters, training on approximately 300 billion tokens, and accounts for the computational cost per token during training.


AI Board Game Logic

reasoning·medium

Tests an AI's ability to understand game rules and strategy

Aurora Alpha

GPT-4.1 Nano


Logic Puzzle

reasoning·medium

Tests an AI's ability to solve a simple but potentially confusing logic puzzle

Aurora Alpha

Sally has one sister.

Reasoning

  • Sally herself is a girl.
  • She has 3 brothers.
  • Each brother is said to have 2 sisters.
    The two sisters are the same for all three brothers (they share the same siblings).

Thus the total number of sisters in the family is 2. One of those sisters is Sally, leaving one other sister for her. So Sally has one sister.
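
A minimal sketch of the same counting argument (the second girl's name is a placeholder, since the puzzle does not give one):

```python
# Family implied by the puzzle: 3 brothers, and exactly enough girls that each
# brother has 2 sisters. The sisters are shared, so there are 2 girls in total.
girls = ["Sally", "second_girl"]   # "second_girl" is a placeholder name
brothers = ["b1", "b2", "b3"]

sisters_per_brother = len(girls)   # 2, matching the puzzle statement
sallys_sisters = len(girls) - 1    # Sally does not count herself

print(sisters_per_brother, sallys_sisters)   # 2 1
```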

GPT-4.1 Nano

Sally has 2 sisters.

Here's the reasoning:

  • Sally has 3 brothers.
  • Each brother has 2 sisters.
  • Since Sally is one of those sisters, the other sister is the other girl the brothers share.

Therefore, Sally has 2 sisters in total.
