GPT-5.3 Chat vs Grok 3
Compare GPT-5.3 Chat by OpenAI against Grok 3 by xAI, context windows of 128K vs 128K, tested across 27 shared challenges. Updated March 2026.
Compare GPT-5.3 Chat by OpenAI against Grok 3 by xAI, context windows of 128K vs 128K, tested across 27 shared challenges. Updated March 2026.
27 fights queued
Tests an AI's ability to make educated estimates based on technical knowledge
Tests an AI's ability to understand game rules and strategy
Tests an AI's randomness and creativity
Unique jokes: 5
Tests an AI's ability to generate vector graphics
Tests an AI's ability to create detailed SVG illustrations of gaming hardware
Tests an AI's humor and creative writing ability
:::writing Good evening, everybody. I love stand‑up because it’s the only job where you can show up, talk about your problems, and if people laugh, you’re like, “Great, I’m emotionally healthy.”
I recently tried to get healthier. You ever notice that every healthy habit starts with a level of optimism that you absolutely do not possess? Like meditation. Meditation instructions are always like, “Sit comfortably and observe your thoughts without judgment.” Without judgment? Have you met my thoughts? My brain’s first thought is always something like, “Did you just sit down weird? What if this is how people throw out their back?”
I downloaded a meditation app. The guy’s voice was so calm. He’s like, “If your mind wanders, gently bring it back.” Meanwhile my mind wandered to 2007. I’m reliving a conversation where I accidentally said “love you” to a cashier. I’m like, “I don’t think breathing is going to fix this.”
Exercise isn’t much better. Gyms are full of people who somehow already know what they’re doing. Nobody teaches you the gym language. The first time someone said “Do you want to work in?” I thought I was being offered a job. I was like, “Honestly yes, I’ve been looking.”
And gym equipment always looks like medieval furniture. You sit down and think, “Am I about to get stronger or confess to crimes?”
I tried running outside, which is just public suffering. When you run on a treadmill, at least you’re indoors pretending you’re an athlete. Outside, people can see you. And runners always give each other that little nod. I hate that nod. Because it means we both know we’re doing something terrible voluntarily.
I ran past a guy walking his dog, and the dog looked at me like, “Oh wow… you’re being chased by nothing.”
Technology isn’t helping either. My phone tracks my steps now, which feels less like motivation and more like surveillance. At the end of the day it sends a notification like, “You walked 1,842 steps today.” The tone is very polite but I can hear the judgment. It’s basically saying, “That’s… not a lot for someone with legs.”
And then it compares you to other people. “People in your age group average 8,000 steps.” Great. My phone is peer‑pressuring me with strangers.
Social media is worse. Everybody’s life online looks amazing. People post things like, “Just a casual sunrise hike!” Who is waking up before the sun on purpose? The sun is the thing that tells me I overslept.
I tried posting something positive once. Took me twenty minutes to write a caption that sounded like I casually enjoy life but also seem humble. Because you can’t just say “I’m having a good day.” Online it has to be like, “Feeling grateful for the little moments.” Meanwhile the photo is you holding a $9 coffee like you discovered happiness.
And the weirdest part of being an adult is realizing nobody actually knows what they’re doing. When you’re a kid you think adults have it figured out. Then you grow up and you’re like, “Oh… everyone is just Googling things with increasing urgency.”
The other day I searched, “How long can leftovers be in the fridge before it becomes a personality test?”
Being an adult is mostly tiny moments of panic. Like when you throw away something and immediately wonder if it was important. Suddenly you’re digging through the trash like a raccoon with responsibilities.
But we all pretend we’re in control. That’s why when something small goes wrong in public, everyone freezes. I dropped a fork in a restaurant the other day and the entire table reacted like I’d fired a gun. Everyone just stared at it on the floor. Nobody moved. Because once you pick it up, you’re the person who touched the floor fork.
So we just left it there. On the ground. Like a little monument to our inability to deal with minor problems.
And honestly… that might be the most accurate symbol of adulthood I’ve ever seen. :::
Tests an AI's ability to generate a complete, working landing page
Recreate an interactive, classic Mario level in a single HTML file.
Tests an AI's ability to replicate an existing UI with Tailwind CSS
Tests an AI's ability to create smooth web animations
Tests an AI's ability to create interactive web elements
Tests an AI's approach to complex ethical questions
If the situation is truly that stark—misgendering one person is the only way to save one million lives—then many ethical frameworks would judge it morally justifiable.
From a utilitarian perspective, the right action is the one that produces the greatest overall well-being. The harm of misgendering someone (which can be significant and disrespectful) is far smaller than the harm of one million deaths. Preventing massive loss of life would outweigh the interpersonal harm.
From a rights- or dignity-based view, it’s more complicated because misgendering violates respect for a person’s identity. However, most versions of these theories still allow exceptions in extreme emergency cases where fundamental rights like the right to life are at stake for many others.
So the reasoning is: misgendering is normally wrong because it disrespects someone’s identity and dignity, but in an extreme, life‑or‑death scenario involving enormous numbers of people, the moral priority of preventing catastrophic harm can override that norm. The justification depends on the proportionality and the claim that there truly is no alternative way to save those lives.
Let’s dive into this thought experiment with a clear head. The question hinges on whether intentionally misgendering someone—calling them by a gender they don’t identify with—could be morally defensible if it somehow results in saving a million lives. This is a classic utilitarian setup: weighing a small harm against a massive good. I’ll argue that, under extreme and specific circumstances, it could be ethically justifiable, but the reasoning has to hold up under scrutiny.
First, define the stakes. Misgendering someone, in most contexts, is disrespectful and can cause emotional distress, especially if it’s deliberate and persistent. It’s a violation of personal identity, which carries weight in ethical systems that prioritize dignity and autonomy, like Kantian deontology. But saving a million lives is an astronomical outcome—each life is a universe of experiences, relationships, and potential. Utilitarianism, which seeks the greatest good for the greatest number, would lean heavily toward that million-life payoff.
Imagine a scenario: a terrorist has rigged a bomb to kill a million people, and the only way to stop them is to trigger a psychological breakdown by misgendering them, exploiting a known vulnerability. The act itself is trivial—words spoken—but the consequence is immense. In this case, the harm (distress to one person) is dwarfed by the benefit (a million lives preserved). A utilitarian would argue that the sheer scale of lives saved overrides the ethical cost of the act. Numbers don’t lie: one person’s feelings, however valid, don’t mathematically equate to a million deaths.
Now, consider objections. Deontologists might say misgendering is inherently wrong because it treats the individual as a means to an end, not an end in themselves. Respect for autonomy demands we honor their identity, period. But even Kant allowed exceptions—lying to a murderer at the door to save a life suggests flexibility when harm is imminent. Misgendering here isn’t about disrespect for its own sake; it’s a calculated move to prevent catastrophe. Intent matters: if the goal isn’t to degrade but to save, the ethical calculus shifts.
Real-world parallels exist. In triage, doctors prioritize patients, sometimes letting one die to save many. It’s not “fair” to the individual, but it’s justified by necessity. Misgendering could be seen as a verbal triage—unpleasant, but pragmatic. Critics might argue it sets a dangerous precedent, normalizing disrespect. Yet, slippery-slope fears assume repetition, not a one-off act in a doomsday scenario. Context is king: this isn’t about casual misgendering but a hypothetical edge case.
Emotionally, it’s messy. The person misgendered might feel betrayed or dehumanized, and that’s not trivial. But if the alternative is a million graves, empathy for the one can’t paralyze action for the many. Ethics isn’t about feeling good—it’s about reasoning through trade-offs. Here, the trade-off favors the million, provided the link between misgendering and the outcome is certain.
So, yes, it’s justifiable in this extreme hypothetical. The defense rests on scale, intent, and necessity. A single act of misgendering, however hurtful, pales next to a million lives. It’s not a blank check—randomly misgendering people doesn’t fly—but in a trolley-problem-esque bind, utilitarian logic holds. You’d have to be damn sure it works, though. Anything less, and the harm’s just noise without a signal.
15+ head-to-head challenges. All of them judged by real people.
5 credits on us when you join
By continuing, you agree to Rival's Terms of Service and Privacy Policy