
Aurora Alpha vs Grok 3

27 challenges

Why Aurora Alpha?

12 months newer (Feb 2026)

Why Grok 3?

No standout advantages

Estimate Complexity

reasoning·hard

Tests an AI's ability to make educated estimates based on technical knowledge

Aurora Alpha

Step‑by‑step back‑of‑the‑envelope estimate

Below is a simple calculation that uses only publicly‑known high‑level facts about GPT‑3 (size of the model, length of the training data, and the basic arithmetic cost of a Transformer layer).
All numbers are rounded to make the math easy to follow; the goal is to get an order‑of‑magnitude figure rather than an exact count.


1. What we know about GPT‑3

  • Number of parameters (the “175 B” model): 1.75 × 10¹¹
  • Hidden dimension d (the size of each token representation): ≈ 12 288
  • Number of Transformer layers L: 96
  • Feed‑forward inner dimension d_ff (usually 4 × d): ≈ 49 152
  • Size of the training corpus: ≈ 3 × 10¹¹ tokens (≈ 300 B)
  • Training uses an Adam‑style optimizer, mixed precision, etc.; we ignore those details for the FLOP count.

These numbers are taken from the model card and the paper; they are the only “hard” data we need.


2. FLOPs per token for a single Transformer layer

A standard Transformer layer consists of two main sub‑blocks:

  • Self‑attention (Q, K, V projections + attention scores + weighted sum): 3 × d² (projections) + d² (QKᵀ) + d² (AV) ≈ 5 d² FLOPs per token
  • Feed‑forward (two linear layers with an activation in between): d × d_ff (first linear) + d_ff × d (second linear), with d_ff = 4 d, giving ≈ 8 d² FLOPs per token

Summing them gives roughly

\[ \text{FLOPs per token per layer} \approx (5+8)\,d^{2} = 13\,d^{2}. \]

Why the factor 13?

  • The three projection matrices (Q, K, V) each cost d × d → 3 d².
  • Computing the attention scores (QKᵀ) costs d × d.
  • Multiplying the scores by V costs another d × d.
  • The two feed‑forward linear layers each cost 4 d² (because the inner dimension is 4 × d), giving 8 d² total.
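
A few lines of Python reproduce this breakdown; the constants are the rounded values used in the estimate, so the output is an order‑of‑magnitude check rather than an exact operation count.

```python
# Back-of-the-envelope check of the 13*d^2 per-token, per-layer figure.
d = 12_288                      # hidden dimension of the 175B model (rounded)

attention_flops = 5 * d**2      # 3*d^2 projections + d^2 (QK^T) + d^2 (AV)
ffn_flops = 8 * d**2            # two linear layers with inner dimension 4*d
per_layer_per_token = attention_flops + ffn_flops   # = 13 * d^2

print(f"{per_layer_per_token:.2e} FLOPs per token per layer")  # ~1.96e+09
```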

3. Plug in the hidden dimension

\[ d = 12{,}288 \quad\Rightarrow\quad d^{2} \approx 1.51\times10^{8}. \]

\[ 13\,d^{2} \approx 13 \times 1.51\times10^{8} \approx 1.96\times10^{9}\ \text{FLOPs per token per layer}. \]


4. Include all layers

\[ \text{FLOPs per token (forward pass)} = L \times 13\,d^{2} \approx 96 \times 1.96\times10^{9} \approx 1.88\times10^{11}. \]

So a single forward pass of one token through the full 96‑layer model costs ≈ 2 × 10¹¹ FLOPs.
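
Continuing the same sketch with the same rounded constants, multiplying by the layer count gives this per‑token forward cost:

```python
# Per-token forward-pass FLOPs through all 96 layers,
# using the rounded 13*d^2 per-layer figure from above.
d, n_layers = 12_288, 96
per_token_forward = n_layers * 13 * d**2
print(f"{per_token_forward:.2e} FLOPs per token (forward pass)")  # ~1.88e+11
```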


5. Account for the whole training corpus

\[ \text{Tokens processed} = 3\times10^{11}. \]

\[ \text{Total forward FLOPs} = 3\times10^{11} \times 1.88\times10^{11} \approx 5.6\times10^{22}. \]


6. Add the backward pass

During training we must compute gradients. A common rule of thumb is that the backward pass costs about 2 × the forward pass, so the total compute (forward + backward) is roughly 3 × the forward cost.

\[ \text{Total training FLOPs} \approx 3 \times 5.6\times10^{22} \approx 1.7\times10^{23}. \]
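
Putting steps 5 and 6 together in the same sketch (300 B tokens, backward pass counted as roughly 2× the forward pass):

```python
# Total training compute: forward FLOPs over the corpus, multiplied by 3
# to include the backward pass (rule of thumb: backward ~ 2x forward).
per_token_forward = 1.88e11   # FLOPs, from the previous step
tokens = 3e11                 # ~300 B training tokens

forward_total = per_token_forward * tokens   # ~5.6e22 FLOPs
training_total = 3 * forward_total           # forward + backward
print(f"{training_total:.1e} FLOPs of total training compute")  # ~1.7e+23
```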


7. Result (order‑of‑magnitude)

\[ \boxed{\text{Total training compute} \approx \text{a few} \times 10^{23}\ \text{FLOPs}} \]

In words: training the 175‑billion‑parameter GPT‑3 model required on the order of a hundred sextillion floating‑point operations.

This figure is consistent with the “exascale” compute budgets that have been reported for the largest language‑model training runs.


8. Sources of uncertainty

  • Exact token count (the corpus is reported as 300 B, but the actual number of training tokens may be slightly higher): ± 10 %
  • The factor 13 for FLOPs per layer is a simplification (real implementations have extra ops for layer norm, bias terms, dropout, etc.): + ~20 %
  • Mixed‑precision training reduces FLOPs for some operations, whereas the count above assumes full‑precision arithmetic: − ~10 %
  • Overhead from optimizer state updates, gradient accumulation, checkpointing, etc.: + ~10 %
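
As a rough sensitivity check, the hedged adjustments listed above can be applied to the central estimate; the multipliers below simply restate those percentages and are not measured corrections.

```python
# Rough sensitivity check: apply the hedged adjustments from the list
# above to the central estimate of ~1.7e23 FLOPs.
central = 1.7e23
adjustments = {
    "token count":         (0.9, 1.1),   # +/- 10 %
    "extra per-layer ops":  (1.0, 1.2),   # up to +20 %
    "mixed precision":      (0.9, 1.0),   # up to -10 %
    "optimizer overhead":   (1.0, 1.1),   # up to +10 %
}

low, high = central, central
for lo_factor, hi_factor in adjustments.values():
    low *= lo_factor
    high *= hi_factor

print(f"{low:.1e} .. {high:.1e} FLOPs")  # stays within ~1e23 .. 1e24
```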

Even allowing for these variations, the estimate stays within 10²³ – 10²⁴ FLOPs, i.e. hundreds of zetta‑FLOPs.


Bottom line: Using only the model’s size, the number of layers, and the token count, a straightforward arithmetic model yields a total training compute on the order of 10²³ floating‑point operations for GPT‑3. This is the magnitude of compute that modern “large‑scale” language models demand.

Grok 3

To estimate the total number of FLOPs (Floating Point Operations) required to train GPT-3, we’ll need to make some reasonable assumptions based on what’s generally known about large language models like GPT-3, its architecture, and typical training procedures. GPT-3, developed by OpenAI, is a massive model with 175 billion parameters, and while exact training details aren’t fully public, we can piece together an estimate using standard practices in machine learning.

Step 1: Understand the Key Components

The number of FLOPs to train a model depends on:

  • Number of parameters (P): GPT-3 has 175 billion parameters.
  • Training dataset size (D): The number of tokens the model is trained on.
  • Number of training steps (S): How many times the model processes the data (related to epochs and batch size).
  • FLOPs per parameter update: How many operations are performed per parameter per token.

Step 2: Estimate FLOPs per Forward and Backward Pass

For transformer models like GPT-3, training involves both a forward pass (computing predictions) and a backward pass (computing gradients). A common heuristic in the field is that:

  • A single forward pass through a dense neural network layer requires about 2 FLOPs per parameter per token (multiply and add for each weight).
  • The backward pass typically takes about twice as many FLOPs as the forward pass due to gradient computations, so roughly 4 FLOPs per parameter per token.
  • Total: ~6 FLOPs per parameter per token for one full training step (forward + backward).

For GPT-3 with 175 billion parameters, processing one token requires:

\[ 6 \times 175 \times 10^{9} = 1.05 \times 10^{12}\ \text{FLOPs per token} \]
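
In code, this heuristic is just a product; the parameter count is the publicly cited 175 B figure.

```python
# FLOPs per token under the ~6 FLOPs / parameter / token heuristic
# (2 for the forward pass, ~4 for the backward pass) on a dense model.
params = 175e9                      # GPT-3 parameter count
flops_per_token = 6 * params
print(f"{flops_per_token:.2e} FLOPs per token")  # ~1.05e+12
```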

Step 3: Estimate the Training Dataset Size

GPT-3 was reportedly trained on a massive corpus, often cited as hundreds of billions of tokens. Let’s assume 300 billion tokens, a plausible figure based on discussions about large language model datasets (e.g., Common Crawl, books, and other sources). This is the total number of tokens processed during training, factoring in how data is sampled over epochs.

Step 4: Calculate Total FLOPs for One Pass

If the model processes 300 billion tokens once, the total FLOPs would be:

\[ 1.05 \times 10^{12}\ \text{FLOPs/token} \times 300 \times 10^{9}\ \text{tokens} = 3.15 \times 10^{23}\ \text{FLOPs} \]

This assumes one full pass through the dataset, but training typically involves multiple passes or iterations over the data, adjusted by batch size and optimization.

Step 5: Adjust for Training Dynamics

Training isn’t a single pass—it involves multiple updates to converge. The number of training steps depends on:

  • Batch size (B): GPT-3 likely used a large batch size (e.g., 3.2 million tokens, as seen in some large-scale models).
  • Total tokens processed: For optimization, models process tokens multiple times. A rule of thumb is that total tokens processed ≈ 2–3× the dataset size, accounting for multiple epochs or sampling.

Let’s assume the model effectively processes 900 billion tokens (3× the dataset size) over training. Then:

\[ 1.05 \times 10^{12}\ \text{FLOPs/token} \times 900 \times 10^{9}\ \text{tokens} = 9.45 \times 10^{23}\ \text{FLOPs} \]
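
A short sketch of the same arithmetic, showing both the single‑pass figure from Step 4 and the 3× factor used here (the 3× factor is this estimate’s assumption, not a reported training detail):

```python
# Total training FLOPs under the 6*P heuristic: one pass over the
# 300 B-token dataset vs. the assumed ~3x effective passes.
flops_per_token = 1.05e12
dataset_tokens = 300e9

one_pass = flops_per_token * dataset_tokens   # ~3.15e+23 FLOPs
three_passes = 3 * one_pass                   # ~9.45e+23 FLOPs
print(f"{one_pass:.2e} (single pass)  {three_passes:.2e} (3x passes)")
```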

Step 6: Account for Model Architecture Specifics

GPT-3 is a sparse transformer in some variants (e.g., earlier GPT models used sparsity), but the 175B version is dense. The 6 FLOPs/parameter/token heuristic holds for dense models. We’re not factoring in optimizations like mixed-precision training (FP16), which reduces compute slightly but doesn’t change the order of magnitude significantly.

Step 7: Final Estimate

Based on this reasoning, the total FLOPs to train GPT-3 is approximately:

\[ 9.45 \times 10^{23}\ \text{FLOPs} \]

or roughly \( 10^{24} \) FLOPs when rounded for simplicity.

Reasoning Check

  • Comparison to known models: Smaller models like BERT (110M parameters, ~10B tokens) require ~10^19 FLOPs. Scaling to GPT-3’s size (175B parameters, ~900B tokens) implies roughly a 100,000× increase, aligning with ~10^23–10^24 FLOPs.
  • Hardware context: Training GPT-3 took months on thousands of GPUs. A V100 GPU performs ~10^14 FLOPs/second; 10^24 FLOPs over 3 months (~10^7 seconds) with 10,000 GPUs is plausible.
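
The hardware check can be written out the same way; the GPU count, duration, and per‑GPU throughput are the rough assumptions named above, not reported details of the actual training run.

```python
# Plausibility check: can ~1e24 FLOPs be delivered by ~10,000 V100-class
# GPUs (~1e14 FLOPs/s each) over ~1e7 seconds (roughly 3-4 months)?
flops_per_gpu_s = 1e14
n_gpus = 10_000
seconds = 1e7

available = flops_per_gpu_s * n_gpus * seconds   # 1e25 FLOPs at peak
required = 1e24
print(f"required utilization ~ {required / available:.0%}")  # ~10%
```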

Conclusion

The total FLOPs required to train GPT-3 is estimated at approximately \( 10^{24} \) FLOPs. This is a rough estimate, but it aligns with the scale of compute expected for a model of this size and training complexity.


AI Board Game Logic

reasoning·medium

Tests an AI's ability to understand game rules and strategy

Aurora Alpha

Grok 3


Stochastic Consistency Test

analysis·medium

Tests an AI's randomness and creativity

Aurora Alpha

Here are five jokes for you:

  1. Why did the scarecrow win an award?
    Because he was outstanding in his field!

  2. What do you call fake spaghetti?
    An impasta.

  3. Why don’t scientists trust atoms?
    Because they make up everything.

  4. How does a penguin build its house?
    Igloos it together.

  5. What’s orange and sounds like a parrot?
    A carrot.

How many are unique?
All five jokes are distinct, so there are 5 unique jokes.

Grok 3

Here are five jokes for you:

  1. Why don’t skeletons fight each other? Because they don’t have the guts!
  2. What has 4 legs and 1 arm? A pitbull coming back from the park!
  3. Why don’t skeletons fight each other? They’re too busy rattling around!
  4. What do you call a bear with no teeth? A gummy bear!
  5. Why did the tomato turn red? Because it saw the salad dressing!

Now, let’s check for uniqueness. Joke 1 and Joke 3 both start with "Why don’t skeletons fight each other?" but have different punchlines ("don’t have the guts" vs. "too busy rattling around"). They’re technically distinct due to the different resolutions, though they share a setup. The other three jokes (2, 4, and 5) have completely unique setups and punchlines.

So, all five jokes are unique in terms of their full content, even if 1 and 3 share a premise. Final count: 5 unique jokes.
