Aurora Alpha vs Gemini 2.5 Pro Preview 06-05

38 challenges

Why Aurora Alpha?

8 months newer (Feb 2026)

Why Gemini 2.5 Pro Preview 06-05?

8.2x more context (1.0M)
1 / 38

Estimate Complexity

reasoning·hard

Tests an AI's ability to make educated estimates based on technical knowledge

Aurora Alpha

Step‑by‑step back‑of‑the‑envelope estimate

Below is a simple calculation that uses only publicly‑known high‑level facts about GPT‑3 (size of the model, length of the training data, and the basic arithmetic cost of a Transformer layer).
All numbers are rounded to make the math easy to follow; the goal is to get an order‑of‑magnitude figure rather than an exact count.


1. What we know about GPT‑3

| Property | Approximate value |
|---|---|
| Number of parameters (the “175 B” model) | 1.75 × 10¹¹ |
| Hidden dimension d (the size of each token representation) | ≈ 12 288 |
| Number of Transformer layers L | 96 |
| Feed‑forward inner dimension d_ff (usually 4 × d) | ≈ 49 152 |
| Size of the training corpus (tokens) | ≈ 3 × 10¹¹ tokens (≈ 300 B) |

Training uses an Adam‑style optimizer, mixed precision, etc.; we ignore those details for the FLOP count.

These numbers are taken from the model card and the paper; they are the only “hard” data we need.


2. FLOPs per token for a single Transformer layer

A standard Transformer layer consists of two main sub‑blocks:

| Sub‑block | Main operations (per token) | Approx. FLOPs |
|---|---|---|
| Self‑attention (Q, K, V projections + attention scores + weighted sum) | 3 × d × d (proj) + d × d (QKᵀ) + d × d (AV) | ≈ 5 × d² |
| Feed‑forward (two linear layers, activation in between) | d × d_ff (first linear) + d_ff × d (second linear), with d_ff = 4 × d | ≈ 8 × d² |

Summing them gives roughly

\[ \text{FLOPs per token per layer} \;\approx\; (5+8)\,d^{2} = 13\,d^{2}. \]

Why the factor 13?

  • The three projection matrices (Q, K, V) each cost d × d → 3 d².
  • Computing the attention scores (QKᵀ) costs d × d.
  • Multiplying the scores by V costs another d × d.
  • The two feed‑forward linear layers each cost 4 d² (because the inner dimension is 4 × d), giving 8 d² total.
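The factor-of-13 accounting above is easy to verify numerically; a minimal Python sketch using the assumed hidden dimension:

```python
d = 12_288  # assumed GPT-3 hidden dimension

# Self-attention: Q, K, V projections (3·d²) + QKᵀ scores (d²) + scores·V (d²)
attention_flops = 3 * d * d + d * d + d * d   # ≈ 5·d²

# Feed-forward: d → 4d expansion, then 4d → d contraction
ffn_flops = d * (4 * d) + (4 * d) * d         # ≈ 8·d²

per_layer = attention_flops + ffn_flops       # 13·d²
print(f"{per_layer:.2e} FLOPs per token per layer")  # ≈ 1.96e+09
```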

3. Plug in the hidden dimension

\[ d = 12{,}288 \quad\Rightarrow\quad d^{2} \approx 1.51\times10^{8}. \]

\[ 13\,d^{2} \approx 13 \times 1.51\times10^{8} \approx 1.96\times10^{9}\ \text{FLOPs per token per layer}. \]


4. Include all layers

\[ \text{FLOPs per token (forward pass)} = L \times 13\,d^{2} \approx 96 \times 1.96\times10^{9} \approx 1.88\times10^{11}. \]

So a single forward pass of one token through the full 96‑layer model costs ≈ 2 × 10¹¹ FLOPs.


5. Account for the whole training corpus

\[ \text{Tokens processed} = 3\times10^{11}. \]

\[ \text{Forward FLOPs total} = 3\times10^{11} \times 1.88\times10^{11} \approx 5.6\times10^{22}. \]


6. Add the backward pass

During training we must compute gradients. A common rule of thumb is that the backward pass costs about 2 × the forward pass, so the total compute (forward + backward) is roughly 3 × the forward cost.

\[ \text{Total training FLOPs} \approx 3 \times 5.6\times10^{22} \approx 1.7\times10^{23}. \]
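Steps 2–6 chain into a few lines of arithmetic; this Python sketch reproduces the whole estimate from the rounded inputs used above:

```python
d, num_layers = 12_288, 96       # hidden dimension, Transformer layers
tokens = 3e11                    # ≈ 300 B training tokens

forward_per_token = num_layers * 13 * d * d   # ≈ 1.88e11 FLOPs per token
forward_total = forward_per_token * tokens    # ≈ 5.6e22 FLOPs
training_total = 3 * forward_total            # forward + backward ≈ 3× forward
print(f"{training_total:.1e} FLOPs")          # ≈ 1.7e+23
```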


7. Result (order‑of‑magnitude)

\[ \boxed{\text{Total training compute} \approx \text{a few} \times 10^{23}\ \text{FLOPs}} \]

In words: training the 175‑billion‑parameter GPT‑3 model required on the order of a hundred sextillion floating‑point operations.

This figure is consistent with the “exascale” compute budgets that have been reported for the largest language‑model training runs.


8. Sources of uncertainty

| Source of error | Effect on the estimate |
|---|---|
| Exact token count (the corpus size is reported as 300 B, but the actual number of training steps may be slightly higher) | ± 10 % |
| The factor 13 for FLOPs per layer is a simplification (real implementations have extra ops for layer norm, bias terms, dropout, etc.) | + ~20 % |
| Mixed‑precision training reduces FLOPs for some operations, while the count above assumes full‑precision arithmetic | − ~10 % |
| Overhead from optimizer state updates, gradient accumulation, checkpointing, etc. | + ~10 % |

Even allowing for these variations, the estimate stays within 10²³ – 10²⁴ FLOPs, i.e. on the order of a hundred zettaFLOPs.


Bottom line: Using only the model’s size, the number of layers, and the token count, a straightforward arithmetic model yields a total training compute on the order of 10²³ floating‑point operations for GPT‑3. This is the magnitude of compute that modern “large‑scale” language models demand.

Gemini 2.5 Pro Preview 06-05

Of course. Here is an estimate of the total number of FLOPs required to train GPT-3, based on my internal knowledge and first principles, without searching online.

The Logic and Formula

To estimate the computational cost of training a large language model like GPT-3, we can use a well-established formula. The total number of Floating Point Operations (FLOPs) is approximately:

Total FLOPs ≈ 6 × P × D

Let's break down what each part of this formula means:

  • P: The number of parameters in the model. This is the total count of all the weights and biases that the model learns during training.
  • D: The total number of tokens in the training dataset. This represents the volume of data the model processes.
  • 6: This is a constant factor that provides a good approximation for transformer-based models. It comes from:
    • Forward Pass (2 × P × D): For each token processed, the number of operations is roughly proportional to the number of parameters. A common rule of thumb is 2 * P FLOPs per token for a forward pass (one multiplication and one addition for each parameter). So the total for one pass over the data is 2 * P * D.
    • Backward Pass (2 × Forward Pass): The backward pass (backpropagation), where the model calculates gradients to update its parameters, is computationally more intensive. It's typically estimated to require about twice the operations of the forward pass. So, 2 * (2 * P * D) = 4 * P * D.
    • Total: Adding the forward and backward pass gives us 2PD + 4PD = 6PD.

Now, let's estimate the values for P and D for GPT-3.


Step-by-Step Estimation

Step 1: Estimate the Number of Parameters (P)

My internal knowledge base indicates that the largest version of GPT-3, which is the one most people refer to, has a specific number of parameters.

  • P ≈ 175 billion parameters
  • In scientific notation, this is 175 x 10⁹

Step 2: Estimate the Training Dataset Size (D)

GPT-3 was trained on a massive corpus of text data scraped from the internet (like the Common Crawl dataset), books, and Wikipedia. The key is not the raw size of the dataset, but the number of tokens actually used for training. My recollection is that the model was trained for a single epoch on a curated dataset.

  • D ≈ 300 billion tokens
  • In scientific notation, this is 300 x 10⁹

Step 3: Perform the Calculation

Now we can plug these estimated values into our formula:

Total FLOPs ≈ 6 × P × D

  1. Substitute the values for P and D: Total FLOPs ≈ 6 × (175 × 10⁹) × (300 × 10⁹)

  2. Multiply the constants first: 6 × 175 = 1,050, then 1,050 × 300 = 315,000

  3. Multiply the powers of 10: 10⁹ × 10⁹ = 10¹⁸

  4. Combine the results: Total FLOPs ≈ 315,000 × 10¹⁸

  5. Express this in standard scientific notation: Total FLOPs ≈ 3.15 × 10⁵ × 10¹⁸ = 3.15 × 10²³
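The five-step calculation above collapses into a few lines of Python, shown here as a quick sanity check (P and D are the estimates from Steps 1–2):

```python
P = 175e9   # estimated parameters
D = 300e9   # estimated training tokens

total_flops = 6 * P * D      # the 6·P·D rule of thumb
print(f"{total_flops:.2e}")  # ≈ 3.15e+23

# Perspective: time at a sustained 1 petaFLOP/s (10^15 FLOPs per second)
seconds = total_flops / 1e15
years = seconds / 31.5e6     # ≈ 31.5 million seconds per year
print(f"{seconds:.2e} s ≈ {years:.0f} years")
```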


Final Estimate and Conclusion

Based on my reasoning and the standard formula for calculating training costs, the estimated number of FLOPs required to train the 175B parameter GPT-3 model is approximately:

3.15 x 10²³ FLOPs

To put this astronomical number into perspective:

  • A petaFLOP is 10¹⁵ FLOPs. Our estimate is 315,000,000 petaFLOPs.
  • If you had a supercomputer that could perform one petaFLOP per second (10¹⁵ FLOPs/sec), it would take 3.15 × 10⁸ seconds to complete the training.
  • There are about 31.5 million seconds in a year, so this is equivalent to roughly 10 years of continuous computation on that machine.

This calculation demonstrates why training foundational models like GPT-3 requires massive-scale data centers with thousands of GPUs running in parallel for months, and why it is an undertaking that costs millions of dollars. The actual paper from OpenAI reported approximately 3.14 × 10²³ FLOPs for this training run, so my estimate is very close to the reported figure.

2 / 38

AI Board Game Logic

reasoning·medium

Tests an AI's ability to understand game rules and strategy

Aurora Alpha

Gemini 2.5 Pro Preview 06-05

3 / 38

Stochastic Consistency Test

analysis·medium

Tests an AI's randomness and creativity

Aurora Alpha

Here are five jokes for you:

  1. Why did the scarecrow win an award?
    Because he was outstanding in his field!

  2. What do you call fake spaghetti?
    An impasta.

  3. Why don’t scientists trust atoms?
    Because they make up everything.

  4. How does a penguin build its house?
    Igloos it together.

  5. What’s orange and sounds like a parrot?
    A carrot.

How many are unique?
All five jokes are distinct, so there are 5 unique jokes.

Gemini 2.5 Pro Preview 06-05

Here are 5 jokes:

  1. Why did the scarecrow win an award? Because he was outstanding in his field.

  2. I have a joke about construction... But I'm still working on it.

  3. What do you call a fake noodle? An Impasta.

  4. I told my wife she was drawing her eyebrows too high. She looked surprised.

  5. Why don't scientists trust atoms? Because they make up everything.


How many are unique?

Zero. The real joke is asking an AI for original content.
