Google: Gemini 3.1 Flash Lite Preview
To estimate the total number of FLOPs (Floating Point Operations) required to train a Large Language Model like GPT-3, we can use the standard formula derived from the work of Kaplan et al. (OpenAI, 2020).
1. The Formula
The training compute $C$ (in FLOPs) can be estimated as: $$C \approx 6 \times N \times D$$ Where:
- $N$ = Number of parameters in the model.
- $D$ = Number of tokens in the training dataset.
- 6 = A constant factor derived from the transformer architecture (roughly 2 FLOPs per parameter per token for the forward pass, and 4 for the backward pass).
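As a quick sketch, the formula is a one-liner; the model below (1B parameters, 20B tokens) is purely illustrative:

```python
def training_flops(n_params: float, n_tokens: float) -> float:
    """Estimate training compute in FLOPs using the C ≈ 6*N*D rule of thumb."""
    return 6 * n_params * n_tokens

# Hypothetical example: a 1B-parameter model trained on 20B tokens.
print(f"{training_flops(1e9, 2e10):.2e}")  # → 1.20e+20
```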
2. Identifying the Variables for GPT-3
- $N$ (Parameters): GPT-3 has 175 billion parameters ($1.75 \times 10^{11}$).
- $D$ (Tokens): GPT-3 was trained on 300 billion tokens ($3 \times 10^{11}$).
3. Step-by-Step Calculation
Step 1: Calculate the product of $N$ and $D$ $$N \times D = (1.75 \times 10^{11}) \times (3 \times 10^{11})$$ $$N \times D = 5.25 \times 10^{22}$$
Step 2: Apply the constant factor (6) $$C = 6 \times (5.25 \times 10^{22})$$ $$C = 31.5 \times 10^{22}$$
Step 3: Simplify the notation $$C = 3.15 \times 10^{23} \text{ FLOPs}$$
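The three steps above can be reproduced in a few lines:

```python
# GPT-3 estimate: C = 6 * N * D
N = 1.75e11   # parameters
D = 3e11      # training tokens

nd = N * D    # Step 1
C = 6 * nd    # Step 2
print(f"N*D = {nd:.3e}")  # → N*D = 5.250e+22
print(f"C   = {C:.3e}")   # → C   = 3.150e+23
```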
4. Interpretation and Caveats
- The Result: The estimated training compute is approximately $3.15 \times 10^{23}$ FLOPs.
- Why the "6" multiplier? In a standard transformer, for every parameter, you perform one multiply-accumulate operation per token during the forward pass (2 FLOPs). During the backward pass, you perform roughly double that amount of work, computing gradients with respect to both the weights and the activations (4 FLOPs). Thus, $2 + 4 = 6$.
- Efficiency Losses: This calculation assumes 100% hardware utilization. In reality, training is never 100% efficient due to communication overhead, memory bottlenecks, and non-compute operations (like normalization or activation functions). If we account for real-world utilization (model FLOPs utilization, or MFU, is often estimated between 30% and 50%), the wall-clock time and GPU-hours required are correspondingly higher, but the theoretical compute requirement remains the standard metric for comparing models.
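To make the utilization point concrete, here is a rough wall-clock sketch. The cluster size, per-GPU peak, and MFU figures are illustrative assumptions, not from the text:

```python
# How long would 3.15e23 FLOPs take on a hypothetical cluster?
C = 3.15e23      # total training FLOPs from the calculation above
n_gpus = 1000    # assumed cluster size
peak = 312e12    # assumed peak FLOP/s per GPU (A100-class BF16)
mfu = 0.40       # assumed model FLOPs utilization (fraction of peak)

seconds = C / (n_gpus * peak * mfu)
print(f"~{seconds / 86400:.0f} days")  # → ~29 days
```

Halving the MFU to 20% doubles the wall-clock time, which is why utilization matters so much in practice.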
- Validation: This calculation aligns closely with the original GPT-3 paper (Brown et al., 2020), which cites the training compute as approximately $3.14 \times 10^{23}$ FLOPs.