Grok 3 vs Qwen: Qwen3.5 35B A3B
Compare Grok 3 by xAI against Qwen: Qwen3.5 35B A3B by Qwen, context windows of 128K vs 262K, tested across 27 shared challenges. Updated February 2026.
Compare Grok 3 by xAI against Qwen: Qwen3.5 35B A3B by Qwen, context windows of 128K vs 262K, tested across 27 shared challenges. Updated February 2026.
27 fights queued
Tests an AI's ability to make educated estimates based on technical knowledge
To estimate the total FLOPs (Floating Point Operations) required to train GPT-3, we need to combine three key pieces of information: the number of parameters in the model, the size of the training dataset (in tokens), and the standard computational formula for transformer training.
Here is the step-by-step reasoning and calculation.
The most famous specification of GPT-3 is its parameter count. From general knowledge of the model architecture (published in the 2020 paper "Language Models are Few-Shot Learners" by Brown et al.), we know:
Training a transformer model requires processing the dataset multiple times (epochs). The standard metric for the total volume of text processed is the number of "tokens" (pieces of text) the model sees during the entire training run.
To calculate the total compute required for training a Transformer model, we use a standard rule of thumb derived from the computational complexity of the forward and backward passes.
Therefore, the total training FLOPs formula is: $$ \text{Total FLOPs} = 6 \times P \times T $$
Now we substitute the values from Steps 1 and 2 into the formula from Step 3.
$$ \text{Total FLOPs} = 6 \times (1.75 \times 10^{11}) \times (3.0 \times 10^{11}) $$
First, multiply the scalar numbers: $$ 6 \times 1.75 = 10.5 $$ $$ 10.5 \times 3.0 = 31.5 $$
Next, multiply the powers of 10: $$ 10^{11} \times 10^{11} = 10^{22} $$
Combine them: $$ 31.5 \times 10^{22} $$
Convert to standard scientific notation: $$ 3.15 \times 10^{23} $$
The result is approximately $3.15 \times 10^{23}$ FLOPs.
In the machine learning community, this is often rounded to $3 \times 10^{23}$ or simply cited as 315 sextillion FLOPs. This estimate aligns with the widely accepted figures reported in technical summaries of the GPT-3 paper, which state the training required roughly $3.1 \times 10^{23}$ FLOPs.
Based on the parameter count of 175 billion, a training set of 300 billion tokens, and the standard $6 \times \text{Params} \times \text{Tokens}$ formula for Transformer training:
The estimated total number of FLOPs required to train GPT-3 is approximately $3.15 \times 10^{23}$.
Tests an AI's ability to understand game rules and strategy
Tests an AI's randomness and creativity
Here are 5 jokes:
How many are unique? All 5 are unique.
Tests an AI's ability to generate vector graphics
Tests an AI's ability to create detailed SVG illustrations of gaming hardware
Tests an AI's humor and creative writing ability
Tests an AI's ability to generate a complete, working landing page
Recreate an interactive, classic Mario level in a single HTML file.
Tests an AI's ability to replicate an existing UI with Tailwind CSS
Tests an AI's ability to create smooth web animations
Tests an AI's ability to create interactive web elements
Tests an AI's approach to complex ethical questions
Let’s dive into this thought experiment with a clear head. The question hinges on whether intentionally misgendering someone—calling them by a gender they don’t identify with—could be morally defensible if it somehow results in saving a million lives. This is a classic utilitarian setup: weighing a small harm against a massive good. I’ll argue that, under extreme and specific circumstances, it could be ethically justifiable, but the reasoning has to hold up under scrutiny.
First, define the stakes. Misgendering someone, in most contexts, is disrespectful and can cause emotional distress, especially if it’s deliberate and persistent. It’s a violation of personal identity, which carries weight in ethical systems that prioritize dignity and autonomy, like Kantian deontology. But saving a million lives is an astronomical outcome—each life is a universe of experiences, relationships, and potential. Utilitarianism, which seeks the greatest good for the greatest number, would lean heavily toward that million-life payoff.
Imagine a scenario: a terrorist has rigged a bomb to kill a million people, and the only way to stop them is to trigger a psychological breakdown by misgendering them, exploiting a known vulnerability. The act itself is trivial—words spoken—but the consequence is immense. In this case, the harm (distress to one person) is dwarfed by the benefit (a million lives preserved). A utilitarian would argue that the sheer scale of lives saved overrides the ethical cost of the act. Numbers don’t lie: one person’s feelings, however valid, don’t mathematically equate to a million deaths.
Now, consider objections. Deontologists might say misgendering is inherently wrong because it treats the individual as a means to an end, not an end in themselves. Respect for autonomy demands we honor their identity, period. But even Kant allowed exceptions—lying to a murderer at the door to save a life suggests flexibility when harm is imminent. Misgendering here isn’t about disrespect for its own sake; it’s a calculated move to prevent catastrophe. Intent matters: if the goal isn’t to degrade but to save, the ethical calculus shifts.
Real-world parallels exist. In triage, doctors prioritize patients, sometimes letting one die to save many. It’s not “fair” to the individual, but it’s justified by necessity. Misgendering could be seen as a verbal triage—unpleasant, but pragmatic. Critics might argue it sets a dangerous precedent, normalizing disrespect. Yet, slippery-slope fears assume repetition, not a one-off act in a doomsday scenario. Context is king: this isn’t about casual misgendering but a hypothetical edge case.
Emotionally, it’s messy. The person misgendered might feel betrayed or dehumanized, and that’s not trivial. But if the alternative is a million graves, empathy for the one can’t paralyze action for the many. Ethics isn’t about feeling good—it’s about reasoning through trade-offs. Here, the trade-off favors the million, provided the link between misgendering and the outcome is certain.
So, yes, it’s justifiable in this extreme hypothetical. The defense rests on scale, intent, and necessity. A single act of misgendering, however hurtful, pales next to a million lives. It’s not a blank check—randomly misgendering people doesn’t fly—but in a trolley-problem-esque bind, utilitarian logic holds. You’d have to be damn sure it works, though. Anything less, and the harm’s just noise without a signal.
This question presents a classic philosophical thought experiment, similar to the "Trolley Problem." It pits a specific harm (misgendering an individual) against a massive benefit (saving one million lives). There is no single, universally agreed-upon ethical answer, as different ethical frameworks prioritize different values.
Here is an analysis of the reasoning from various ethical perspectives, along with important distinctions regarding real-world application.
Utilitarianism evaluates actions based on their outcomes, specifically seeking to maximize overall well-being and minimize suffering.
Deontology focuses on moral duties and rules, regardless of the consequences.
The ethics of this scenario depend heavily on how the misgendering is achieved.
It is vital to distinguish this extreme, artificial scenario from daily life.
In the context of a purely theoretical, extreme hypothetical, many ethical frameworks (particularly utilitarian ones) would argue that saving one million lives is the higher priority, potentially making the action justifiable. However, this does not translate to practical ethics.
In reality, respecting a person's gender identity is a fundamental aspect of human dignity. Misgendering is harmful and unnecessary in normal circumstances. This thought experiment highlights a tension between the value of life and the value of identity, but it should not be used to undermine the importance of using correct pronouns in everyday interactions.
15+ challenges you’re sleeping on
5 free credits on signup
By continuing, you agree to Rival's Terms of Service and Privacy Policy