See how Qwen: Qwen3.5 397B A17B compares on this challenge:

  • Qwen: Qwen3.5 397B A17B vs Claude Sonnet 4.6
  • Qwen: Qwen3.5 397B A17B vs Gemini 3.1 Pro Preview
  • Qwen: Qwen3.5 397B A17B vs Z.ai: GLM 5

4 models have responded to the "Estimate Complexity" challenge. Compare their approaches side-by-side on RIVAL. This response is part of RIVAL's open dataset of 5,600+ AI model responses.


Estimate Complexity

Qwen3.5 397B A17B's response to the 'Estimate Complexity' challenge.

Qwen: Qwen3.5 397B A17B
February 16, 2026
Text

Prompt

Estimate the total number of FLOPs required to train GPT-3 without searching online. Explain your reasoning step by step.

Qwen: Qwen3.5 397B A17B
Response


To estimate the total number of floating-point operations (FLOPs) required to train GPT-3, we can use the standard compute approximation for Transformer models from the scaling-laws literature. The calculation relies on three quantities: the number of parameters in the model, the number of tokens in the training dataset, and the computational cost per parameter per token.

Here is the step-by-step reasoning and calculation.

Step 1: Identify the Number of Parameters ($N$)

The defining characteristic of GPT-3 is its size. Based on the seminal paper "Language Models are Few-Shot Learners" (Brown et al., 2020), the largest version of GPT-3 has 175 billion parameters. $$N = 175 \times 10^9 = 1.75 \times 10^{11}$$

Step 2: Identify the Training Dataset Size ($D$)

The computational cost scales linearly with the amount of data the model processes. GPT-3 was trained on a diverse corpus of internet text (Common Crawl, WebText, books, Wikipedia, etc.). The authors stated the model was trained on approximately 300 billion tokens. $$D = 300 \times 10^9 = 3.0 \times 10^{11}$$

Step 3: Determine FLOPs per Parameter per Token

To calculate the total operations, we need to know how many FLOPs are performed for a single parameter when processing a single token during training. Training involves both a forward pass (to make a prediction) and a backward pass (to calculate gradients and update weights).

  • Forward Pass: For each parameter, the GPU performs a multiplication and an addition. This counts as 2 FLOPs per parameter.
  • Backward Pass: Calculating gradients is more computationally intensive. It generally requires roughly twice the operations of the forward pass to compute the gradients with respect to the activations and the weights. This counts as approximately 4 FLOPs per parameter.

Total FLOPs per parameter per token: $2 + 4 = 6$.

Note: This $6ND$ heuristic is widely used in the deep learning literature; it appears, for example, in "Scaling Laws for Neural Language Models" (Kaplan et al., 2020).

Step 4: Calculate Total FLOPs

Now we combine the variables into the formula: $$\text{Total FLOPs} \approx 6 \times N \times D$$

Substitute the values: $$\text{Total FLOPs} \approx 6 \times (1.75 \times 10^{11}) \times (3.0 \times 10^{11})$$

First, multiply the coefficients: $$6 \times 1.75 \times 3.0$$ $$6 \times 3.0 = 18$$ $$18 \times 1.75 = 31.5$$

Next, combine the powers of ten (adding the exponents): $$10^{11} \times 10^{11} = 10^{11+11} = 10^{22}$$

Combine them: $$\text{Total FLOPs} \approx 31.5 \times 10^{22}$$

Convert to standard scientific notation: $$\text{Total FLOPs} \approx 3.15 \times 10^{23}$$
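
As a sanity check, here is the same arithmetic as a minimal Python sketch. The constants come directly from the steps above; the 2-FLOP forward / 4-FLOP backward split is the standard heuristic rather than a measured count:

```python
# Estimate GPT-3 training compute with the standard 6*N*D heuristic.
# Constants are taken from the steps above (Brown et al., 2020).

N = 175e9   # parameters (175 billion)
D = 300e9   # training tokens (300 billion)

FORWARD_FLOPS_PER_PARAM = 2   # one multiply + one add per parameter per token
BACKWARD_FLOPS_PER_PARAM = 4  # backward pass costs roughly 2x the forward pass

flops_per_param_per_token = FORWARD_FLOPS_PER_PARAM + BACKWARD_FLOPS_PER_PARAM  # = 6

total_flops = flops_per_param_per_token * N * D
print(f"Total training FLOPs: {total_flops:.2e}")  # -> 3.15e+23
```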

Step 5: Review and Refine

This estimate assumes a theoretically efficient implementation. In practice, there are minor overheads (embedding lookups, attention mask operations, optimizer states) that might slightly increase this number, but the $6ND$ rule captures the dominant matrix multiplications that consume the vast majority of compute.
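
To put a rough number on those overheads, here is a hedged sketch of the attention correction, using GPT-3 175B's published architecture (96 layers, model dimension 12,288, training context 2,048, per Brown et al., 2020). The per-token attention cost follows the approximate accounting in Kaplan et al. (2020), so treat the result as an order-of-magnitude check rather than an exact operation count:

```python
# Rough size of the attention-score overhead that the 6*N*D rule ignores.
# Architecture figures are GPT-3 175B's published values (Brown et al., 2020);
# the attention cost formula follows Kaplan et al. (2020) and is approximate.

n_layer = 96      # transformer layers
d_model = 12288   # model (attention) dimension
n_ctx = 2048      # training context length

N = 175e9         # parameters
D = 300e9         # training tokens

# Forward pass: ~2*N FLOPs/token for the matmuls that 6*N*D counts,
# plus ~2*n_layer*n_ctx*d_model FLOPs/token for attention scores and values.
dense_fwd = 2 * N
attn_fwd = 2 * n_layer * n_ctx * d_model

overhead = attn_fwd / dense_fwd
print(f"Attention overhead: {overhead:.1%}")                      # ~1.4%
print(f"Corrected total: {6 * N * D * (1 + overhead):.2e} FLOPs")  # ~3.19e+23
```

The correction is on the order of one percent, which is why the $6ND$ rule is considered safe for back-of-the-envelope estimates at GPT-3 scale.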

Some analyses round the token count or parameter count slightly differently, leading to estimates ranging between $3.0 \times 10^{23}$ and $3.6 \times 10^{23}$. However, $3.15 \times 10^{23}$ is the direct result of plugging the model's standard public specifications into the $6ND$ formula.

Final Estimate

The total number of FLOPs required to train GPT-3 is approximately:

$$3.15 \times 10^{23} \text{ FLOPs}$$

(Equivalently, about 315 zettaFLOPs; see the unit conversions below.)
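
For readers who prefer other units, a small conversion sketch (petaflop/s-days is the unit OpenAI used to report training compute; zetta is the SI prefix for $10^{21}$):

```python
# Express 3.15e23 FLOPs in other common units.
total_flops = 3.15e23

zetta = 1e21            # SI zetta prefix
pf_day = 1e15 * 86400   # one petaflop/s sustained for a full day

print(f"{total_flops / zetta:.0f} zettaFLOPs")         # -> 315
print(f"{total_flops / pf_day:,.0f} petaflop/s-days")  # -> ~3,646
```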


About Qwen: Qwen3.5 397B A17B

Capabilities

Conversation, Reasoning, Code Generation, Analysis, Tool Use, Agentic Tool Use, Translation

Categories

Text, Code, Vision, Multimodal

Specifications

  • Provider: Qwen
  • Released: 2026-02-16
  • Size: XLARGE
  • Parameters: 397B (17B active)
  • Context: 262,144 tokens

