Qwen: Qwen3 235B A22B 2507 vs Gemini 2.5 Pro (I/O Edition)
Compare Qwen: Qwen3 235B A22B 2507 by Qwen against Gemini 2.5 Pro (I/O Edition) by Google AI, tested across 26 shared challenges. Updated February 2026.
26 challenges
Tests an AI's ability to make educated estimates based on technical knowledge
Okay, let's estimate the FLOPs for training GPT-3. I'll break this down.
Key Formula:
The number of FLOPs for training a transformer-based model can be roughly estimated as:
FLOPs ≈ 6 * N * D
Where:
- N is the number of parameters in the model.
- D is the total number of tokens processed during training (dataset size in tokens * number of epochs).
- 6 comes from:
  - 2 * N FLOPs per token for the forward pass (each parameter is involved in roughly one multiply-accumulate, and a MAC is 2 FLOPs).
  - 4 * N FLOPs per token for the backward pass (gradient computation is roughly twice as expensive as the forward pass).
Assumptions for GPT-3 (Davinci, the largest version):
Number of Parameters (N):
N = 175 * 10^9 (175 billion parameters).
Number of Tokens Processed (D):
D = 300 * 10^9 (300 billion tokens, as reported in the GPT-3 paper).
Calculation:
Now, let's plug these into the formula:
FLOPs ≈ 6 * N * D
FLOPs ≈ 6 * (175 * 10^9) * (300 * 10^9)
Let's do the numerical part:
6 * 175 = 1050
1050 * 300 = 315,000
Now, the powers of 10:
10^9 * 10^9 = 10^18
So, combining these:
FLOPs ≈ 315,000 * 10^18
FLOPs ≈ 3.15 * 10^5 * 10^18
FLOPs ≈ 3.15 * 10^(5+18)
FLOPs ≈ 3.15 * 10^23
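As a quick sanity check, the arithmetic above can be reproduced in a few lines of Python (the variable names here are illustrative, not from any library):

```python
# 6*N*D training-FLOPs estimate for GPT-3, using the values assumed above.
N = 175e9   # parameters (175 billion)
D = 300e9   # training tokens (300 billion)

flops = 6 * N * D
print(f"{flops:.2e}")  # → 3.15e+23
```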
Expressing the Result:
So, 3.15 * 10^23 FLOPs can be written as:
- 315 * 10^21 FLOPs = 315 ZettaFLOPs
- 0.315 * 10^24 FLOPs = 0.315 YottaFLOPs
The number is often quoted in PetaFLOP/s-days or similar.
Let's convert 3.15 * 10^23 FLOPs to PetaFLOP-seconds:
3.15 * 10^23 / 10^15 = 3.15 * 10^8 PetaFLOP-seconds.
Number of seconds in a day = 24 hours/day * 60 minutes/hour * 60 seconds/minute = 86,400 seconds/day.
So, 3.15 * 10^8 PetaFLOP-seconds / 86,400 seconds/day ≈ 3.15 * 10^8 / (8.64 * 10^4) PetaFLOP-days
≈ (3.15 / 8.64) * 10^(8-4) PetaFLOP-days
≈ 0.3646 * 10^4 PetaFLOP-days
≈ 3646 PetaFLOP/s-days (if executed on a machine capable of 1 PFLOP/s constantly for that many days, or scaled accordingly).
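The unit conversion above can be sketched in Python as well, under the same assumption of a machine sustaining exactly 1 PFLOP/s:

```python
# Convert the 3.15e23 total-FLOPs estimate into PetaFLOP/s-days.
total_flops = 3.15e23
pflop_per_s = 1e15                  # 1 PetaFLOP per second
seconds_per_day = 24 * 60 * 60      # 86,400

pf_days = total_flops / (pflop_per_s * seconds_per_day)
print(round(pf_days))  # → 3646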
Reasoning Summary:
- FLOPs ≈ 6 * N * D for transformer training.
- 6 * (175 * 10^9) * (300 * 10^9) = 3.15 * 10^23 FLOPs.
Caveats:
- 6ND is an approximation. The exact number can vary slightly based on the specific transformer architecture details (e.g., use of sparse attention, specific normalization layers), optimizer overhead, and whether techniques like activation checkpointing are used (which can increase FLOPs for recomputation but save memory).
- D is an estimate. The GPT-3 paper describes a complex data mixture and weighting schedule, so "300 billion tokens" is an effective number of tokens processed.
Therefore, my estimate is approximately 3.15 x 10^23 FLOPs, or 315 ZettaFLOPs.
Tests an AI's ability to solve a simple but potentially confusing logic puzzle
Let's break this down: each of Sally's brothers has 2 sisters. Sally is one of those sisters, so there is exactly one other girl.
So, Sally has 1 sister.
Tests an AI's randomness and creativity
Here are 5 jokes:
Why don't scientists trust atoms? Because they make up everything!
What do you call a fish with no eyes? Fsh!
I told my wife she was drawing her eyebrows too high. She seemed surprised.
Why did the scarecrow win an award? Because he was outstanding in his field!
What do you call a lazy kangaroo? Pouch potato!
All 5 of these jokes are unique.
Tests an AI's ability to create detailed SVG illustrations of gaming hardware
Tests an AI's ability to simulate personalities and predict future trends
Tests an AI's humor and understanding of current events
Tests an AI's ability to generate a complete, working landing page
Recreate an interactive, classic Mario level in a single HTML file.
Tests an AI's ability to create smooth web animations
Tests an AI's UI design skills with theming support
Tests an AI's ability to engage in philosophical discourse about AI rights
Generate a single-page, self-contained HTML webapp using Tailwind CSS for a randomly chosen category/industry/niche.