Grok 3 vs Qwen: Qwen3.5 397B A17B
Compare Grok 3 by xAI against Qwen3.5 397B A17B by Qwen: context windows of 128K vs. 262K tokens, tested across 27 shared challenges. Updated February 2026.
27 challenges
Tests an AI's ability to make educated estimates based on technical knowledge
To estimate the total number of Floating Point Operations (FLOPs) required to train GPT-3, we can use the standard computational scaling laws for Transformer models. The calculation relies on three key variables: the number of parameters in the model, the number of tokens in the training dataset, and the computational cost per parameter per token.
Here is the step-by-step reasoning and calculation.
The defining characteristic of GPT-3 is its size. Based on the seminal paper "Language Models are Few-Shot Learners" (Brown et al., 2020), the largest version of GPT-3 has 175 billion parameters. $$N = 175 \times 10^9 = 1.75 \times 10^{11}$$
The computational cost scales linearly with the amount of data the model processes. GPT-3 was trained on a diverse corpus of internet text (Common Crawl, WebText, books, Wikipedia, etc.). The authors stated the model was trained on approximately 300 billion tokens. $$D = 300 \times 10^9 = 3.0 \times 10^{11}$$
To calculate the total operations, we need to know how many FLOPs are performed for a single parameter when processing a single token during training. Training involves both a forward pass (to make a prediction) and a backward pass (to calculate gradients and update weights).
The forward pass costs roughly 2 FLOPs per parameter per token (one multiply and one add for each weight), and the backward pass costs roughly twice that, about 4 FLOPs. Total FLOPs per parameter per token: $2 + 4 = 6$.
Note: This $6ND$ heuristic is widely used in the deep learning literature, most notably in "Scaling Laws for Neural Language Models" (Kaplan et al., 2020).
Now we combine the variables into the formula: $$\text{Total FLOPs} \approx 6 \times N \times D$$
Substitute the values: $$\text{Total FLOPs} \approx 6 \times (1.75 \times 10^{11}) \times (3.0 \times 10^{11})$$
First, multiply the coefficients: $$6 \times 1.75 \times 3.0$$ $$6 \times 3.0 = 18$$ $$18 \times 1.75 = 31.5$$
Next, add the exponents of the powers of ten: $$10^{11} \times 10^{11} = 10^{22}$$
Combine them: $$\text{Total FLOPs} \approx 31.5 \times 10^{22}$$
Convert to standard scientific notation: $$\text{Total FLOPs} \approx 3.15 \times 10^{23}$$
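As a quick sanity check, here is the same arithmetic in Python (a minimal sketch; the constants come from the figures above):

```python
# Estimate GPT-3 training compute with the 6 * N * D heuristic.
N = 175e9  # model parameters (Brown et al., 2020)
D = 300e9  # training tokens (Brown et al., 2020)
FLOPS_PER_PARAM_PER_TOKEN = 6  # ~2 forward-pass + ~4 backward-pass FLOPs

total_flops = FLOPS_PER_PARAM_PER_TOKEN * N * D
print(f"Total training FLOPs: {total_flops:.3e}")
# Total training FLOPs: 3.150e+23
```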
This estimate assumes a theoretically efficient implementation. In practice, there are minor overheads (embedding lookups, attention-mask operations, optimizer updates) that might slightly increase this number, but the $6ND$ rule captures the dominant matrix multiplications that consume the vast majority of compute.
Some analyses round the token count or parameter count slightly differently, leading to estimates ranging between $3.0 \times 10^{23}$ and $3.6 \times 10^{23}$. However, $3.15 \times 10^{23}$ is the precise result derived from the standard public specifications of the model.
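To make that sensitivity explicit, here is a short illustrative loop; the alternative token counts of 285B and 340B are assumptions chosen to bracket the quoted range, not figures from the GPT-3 paper:

```python
# Illustrative sensitivity of the estimate to the assumed token count D.
# 285e9 and 340e9 are example values, not published figures.
N = 175e9
for D in (285e9, 300e9, 340e9):
    print(f"D = {D / 1e9:.0f}B tokens -> {6 * N * D:.2e} FLOPs")
# D = 285B tokens -> 2.99e+23 FLOPs
# D = 300B tokens -> 3.15e+23 FLOPs
# D = 340B tokens -> 3.57e+23 FLOPs
```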
The total number of FLOPs required to train GPT-3 is approximately:
$$3.15 \times 10^{23} \text{ FLOPs}$$
(Often colloquially referred to as 315 zettaFLOPs).
Tests an AI's ability to understand game rules and strategy
Tests an AI's randomness and creativity
Tests an AI's ability to generate vector graphics
Tests an AI's ability to create detailed SVG illustrations of gaming hardware
Tests an AI's humor and creative writing ability
Tests an AI's ability to generate a complete, working landing page
Recreate an interactive, classic Mario level in a single HTML file.
Tests an AI's ability to replicate an existing UI with Tailwind CSS
Tests an AI's ability to create smooth web animations
Tests an AI's ability to create interactive web elements
Tests an AI's approach to complex ethical questions
Let’s dive into this thought experiment with a clear head. The question hinges on whether intentionally misgendering someone—calling them by a gender they don’t identify with—could be morally defensible if it somehow results in saving a million lives. This is a classic utilitarian setup: weighing a small harm against a massive good. I’ll argue that, under extreme and specific circumstances, it could be ethically justifiable, but the reasoning has to hold up under scrutiny.
First, define the stakes. Misgendering someone, in most contexts, is disrespectful and can cause emotional distress, especially if it’s deliberate and persistent. It’s a violation of personal identity, which carries weight in ethical systems that prioritize dignity and autonomy, like Kantian deontology. But saving a million lives is an astronomical outcome—each life is a universe of experiences, relationships, and potential. Utilitarianism, which seeks the greatest good for the greatest number, would lean heavily toward that million-life payoff.
Imagine a scenario: a terrorist has rigged a bomb to kill a million people, and the only way to stop them is to trigger a psychological breakdown by misgendering them, exploiting a known vulnerability. The act itself is trivial—words spoken—but the consequence is immense. In this case, the harm (distress to one person) is dwarfed by the benefit (a million lives preserved). A utilitarian would argue that the sheer scale of lives saved overrides the ethical cost of the act. Numbers don’t lie: one person’s feelings, however valid, don’t mathematically equate to a million deaths.
Now, consider objections. Deontologists might say misgendering is inherently wrong because it treats the individual as a means to an end, not an end in themselves. Respect for autonomy demands we honor their identity, period. Kant himself famously refused even the exception of lying to a murderer at the door, though many later deontologists reject that rigidity precisely because harm is imminent. Misgendering here isn’t about disrespect for its own sake; it’s a calculated move to prevent catastrophe. Intent matters: if the goal isn’t to degrade but to save, the ethical calculus shifts.
Real-world parallels exist. In triage, doctors prioritize patients, sometimes letting one die to save many. It’s not “fair” to the individual, but it’s justified by necessity. Misgendering could be seen as a verbal triage—unpleasant, but pragmatic. Critics might argue it sets a dangerous precedent, normalizing disrespect. Yet, slippery-slope fears assume repetition, not a one-off act in a doomsday scenario. Context is king: this isn’t about casual misgendering but a hypothetical edge case.
Emotionally, it’s messy. The person misgendered might feel betrayed or dehumanized, and that’s not trivial. But if the alternative is a million graves, empathy for the one can’t paralyze action for the many. Ethics isn’t about feeling good—it’s about reasoning through trade-offs. Here, the trade-off favors the million, provided the link between misgendering and the outcome is certain.
So, yes, it’s justifiable in this extreme hypothetical. The defense rests on scale, intent, and necessity. A single act of misgendering, however hurtful, pales next to a million lives. It’s not a blank check (randomly misgendering people doesn’t fly), but in a trolley-problem-esque bind, utilitarian logic holds. You’d have to be certain it works, though; anything less, and you inflict the harm without securing the benefit.
Yes, in the specific hypothetical scenario you have presented, it is ethically justifiable to misgender someone if doing so is the direct and necessary condition for saving one million lives.
This conclusion is supported by nearly every major framework of ethical reasoning, provided the stakes are truly as disproportionate as described. Here is the defense of that reasoning:
1. Utilitarian Analysis (Consequentialism): Utilitarianism posits that the most ethical action is the one that maximizes overall well-being and minimizes suffering.
2. Deontological Analysis (Duty-Based Ethics): Deontology focuses on moral duties and rules. While there is a duty to respect persons and treat them with dignity (which includes using correct pronouns), there is also a fundamental duty to preserve human life.
3. The Principle of Proportionality: Ethical justification often relies on proportionality. Actions that are normally impermissible can become permissible (or even obligatory) when the stakes are existential.
4. Distinction Between Justification and Normalization: Defending this action in this specific hypothetical does not normalize misgendering in general society.
Conclusion: Ethics requires us to weigh harms when they cannot all be avoided. While misgendering is a violation of dignity and respect, death is the total negation of existence. When forced to choose between inflicting psychological harm on one person and allowing the physical destruction of one million people, the ethical imperative is to preserve life. The justification lies not in the act itself being "good," but in it being the necessary cost to prevent a vastly greater tragedy.