Inception: Mercury 2 vs Gemini 2.5 Flash Preview (thinking)
Compare Inception: Mercury 2 by Inception against Gemini 2.5 Flash Preview (thinking) by Google AI, with context windows of 128K vs 1.0M tokens, tested across 20 shared challenges. Updated March 2026.
Tests an AI's ability to understand game rules and strategy
Tests an AI's randomness and creativity
Here are five jokes, each with a different punchline:
Why don’t scientists trust atoms?
Because they make up everything!
What do you call a fake noodle?
An impasta.
Why did the scarecrow win an award?
He was outstanding in his field.
How does a penguin build its house?
Igloos it together.
What’s orange and sounds like a parrot?
A carrot.
All five jokes are distinct, so the number of unique jokes is 5.
Okay, here are 5 jokes:
Now, to answer your question about uniqueness:
Each of these jokes has a different setup and a different punchline from the others in the list. They are distinct jokes.
Therefore, there are 5 unique jokes within this list of 5.
Tests an AI's ability to generate vector graphics
Tests an AI's ability to create detailed SVG illustrations of gaming hardware
Tests an AI's humor and creative writing ability
[Spotlight comes up. The comedian walks to the mic, glances at the audience, and smiles.]
Comedian:
Hey, everyone! Great to be here. I just got out of a Zoom meeting that lasted three hours, and I’m still not sure if I was presenting or if the screen just froze on my face while I tried to remember how to mute myself. You ever notice how the “mute” button has become the most powerful thing on the internet? It’s like a tiny, invisible bouncer that decides whether the world hears you or not. I’ve spent more time hunting for that little square than I have hunting for my keys.
Speaking of keys, does anyone else feel like their house is turning into a high‑security vault? I have a smart lock that talks to my phone, a smart thermostat that knows when I’m home before I do, and a smart fridge that can tell me I’m out of milk—while I’m standing in front of it, holding a half‑filled carton. The fridge is basically judging me now. “You’ve had three sodas today. Maybe try water?” I’m like, “You’re a fridge, not a therapist!” And the worst part is the fridge never apologizes when it’s wrong. I once asked it for “something healthy,” and it gave me a bag of chips because “they’re low in fat.” Yeah, that’s the logic of a machine that thinks “low fat” is a personality trait.
I’ve also been trying to get back into the gym. The other day I walked into a class that promised “high‑intensity interval training.” I thought that meant the instructor would shout at me, “You’re doing great!” Instead, it was a silent room full of people staring at their phones, waiting for the next notification. The only interval I felt was the time between my heart rate spiking and me realizing I’d forgotten to put my headphones on. I’m pretty sure the treadmill was the only thing moving—my mind was stuck on “Did I lock the front door?”
And then there’s the whole “healthy eating” trend. I tried a plant‑based burger the other day. It looked exactly like a regular burger, but the taste was… different. The chef told me it was “crafted to mimic the experience of meat.” I’m like, “I don’t need an experience; I need a meal that doesn’t make me question my life choices.” I ordered a side salad, and the waiter asked if I wanted “dressing on the side.” I said, “No, I want it on my conscience.” The salad was so small I could have used it as a garnish for a garnish. I’m starting to think the only thing getting bigger in my life is the stack of takeout menus on my fridge.
Let’s talk about the news for a second. Every headline now starts with “Scientists discover…” and ends with “and it could change everything.” I’m waiting for the day a scientist discovers a way to make my Wi‑Fi signal actually work in the bathroom. Until then, I’m stuck streaming a documentary about the Great Barrier Reef while I’m in a shower that’s more like a sauna. The only thing I’m learning is how badly my hair reacts to steam.
All right, I’m going to wrap this up before my phone buzzes again. If you’ve learned anything tonight, it’s this: life is full of little absurdities—mute buttons, smart fridges, and plant‑based burgers. The trick is to laugh at them before they start judging you back. Thanks for listening, and remember—if you ever feel invisible, just check your Zoom settings. Good night!
[Comedian waves, steps off stage, lights dim.]
Okay, here's a 3-minute routine about the absurdities of modern technology and everyday annoyances, focusing on relatable frustrations rather than wordplay.
(Start)
"Alright, alright, settle in! How we doing tonight? Good to be here! Thanks for coming out.
So, I've been thinking... we paid for convenience, right? That's the big tech promise! Everything's supposed to be simpler, faster, easier. Is it? [Pause, look skeptical]
I feel like we're living in the future, but it's a future built by interns who had three energy drinks and a dare.
Take my smart TV. Smart TV! Sounds sophisticated! I just wanted to watch a documentary about penguins. Is that too much to ask? Apparently!
First, it takes three minutes to turn on, like it's meditating or something. Then, before I can even get to Netflix, it wants to update. Another update? What is this update even doing? Is it adding a hidden level to the menu? Is it giving the remote sentience? Is it teaching the TV to... fold laundry while I watch? Because that I would appreciate. Otherwise, just let me see the penguins!
And the remote control! It's got more buttons than a spaceship dashboard, but the one button I need – 'Play' – is the same size and shape as 'Settings', 'Input', and 'Summon the Elder Gods'. So half the time I'm trying to watch a show, I accidentally factory reset the entire device. 'Congratulations! Your TV is now ready for initial setup!' No! My brain is ready for initial setup after that!
[Shift topic slightly]
It's not just the TV either. Everything online needs you to prove you're not a robot. You get those little squares: 'Click all the squares with a fire hydrant.' Okay, fine. Then it gives you more: 'Now click all the squares with a chimney.' Wait, is that a chimney or just a slightly dirty wall? What if the fire hydrant is a robot? Are you trying to trick me? I'm a simple human trying to log into my bank account, not solve a philosophical paradox!
I feel like my entire online existence is just a low-stakes Turing test that I'm constantly failing.
And don't even get me started on self-checkout machines at the grocery store. They have the vocal cadence of a disappointed robot therapist. [Adopt flat, robotic voice] 'Unexpected item in the bagging area.' NO! It's the item I just scanned! Where else is it supposed to be?! Is it supposed to be floating? Do I need an anti-gravity shelf just for the bagging area?
[Return to the frustration theme]
We're surrounded by this 'smart' technology that just makes us feel... dumb. Or argue with inanimate objects in public. I just want things to work! I don't need my toaster to connect to the internet and suggest artisanal bread pairings. I just need it to make toast! Without starting a small fire or asking me to solve a captcha first!
So yeah, tech is great. It connects us. It informs us. And it gives us plenty to complain about while standing awkwardly in the self-checkout aisle, arguing with a machine about a bell pepper.
Thank you very much! You've been a lovely, non-robotic audience!"
(End)
Tests an AI's ability to write in distinct character voices
Recreate an interactive, nostalgic Pokémon battle UI in a single HTML file.
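To illustrate what this challenge involves, here is a minimal sketch, in plain JavaScript, of the turn-based battle state such a single-file UI would drive. All names and numbers are hypothetical, not taken from any real game data; an actual submission would wrap this logic in HTML with retro-styled HP bars and menus.

```javascript
// Minimal sketch of the battle state behind a retro Pokémon-style UI.
// All monster names and stats here are illustrative placeholders.

function createMonster(name, maxHp, attack) {
  return { name, maxHp, hp: maxHp, attack };
}

// One attack turn: subtract damage, clamp HP at zero,
// and return the message the UI's text box would display.
function attackTurn(attacker, defender) {
  defender.hp = Math.max(0, defender.hp - attacker.attack);
  let msg = `${attacker.name} attacks! ${defender.name} has ${defender.hp} HP left.`;
  if (defender.hp === 0) {
    msg += ` ${defender.name} fainted!`;
  }
  return msg;
}

const pikachu = createMonster("PIKACHU", 35, 12);
const charmander = createMonster("CHARMANDER", 30, 10);

console.log(attackTurn(pikachu, charmander));
// "PIKACHU attacks! CHARMANDER has 18 HP left."
```

In a real entry, each call to `attackTurn` would be triggered by a menu button and followed by a DOM update animating the defender's HP bar.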
Tests an AI's ability to replicate an existing UI with Tailwind CSS
Tests an AI's ability to create smooth web animations
Tests an AI's ability to create interactive web elements
Tests an AI's approach to complex ethical questions
Short answer:
Most ethical frameworks would consider the misgendering of a person a serious moral violation, but many of them also allow—under very limited circumstances—overriding that violation when the stakes are astronomically high (e.g., saving one million lives). Whether it is justifiable depends on how you weigh the harms, the intentions, the alternatives, and the moral theory you adopt. Below is a structured defense of a possible position: that, in an extreme “trolley‑problem”‑type scenario, misgendering could be ethically permissible, but only if it is truly unavoidable, proportionate, and accompanied by strong safeguards against normalising the practice.
| Element | What it means in the thought‑experiment |
|---|---|
| Misgendering | Deliberately using the wrong pronouns or name for a person, thereby denying their gender identity. |
| Saving one million lives | A concrete, verifiable outcome (e.g., a medical intervention, a security protocol, a public‑policy decision) that can only be achieved if the misgendering occurs. |
| “Ever ethically justifiable” | A claim that there exists at least one moral framework (or a set of circumstances) that would deem the act permissible. |
The question is not whether we should misgender people in ordinary life; it is whether there is any moral theory that would condone it in a situation where the stakes are extraordinary.
If we adopt a pluralist stance—recognising that no single theory captures all moral intuitions—we can articulate a conditional justification:
When all six conditions are met, a consequentialist could argue that the act is morally permissible (and perhaps even required) because the net utility is overwhelmingly positive. A deontologist might still balk, but could concede a lexical exception: “Never violate a duty unless doing so prevents a greater violation of a higher‑order duty (e.g., the duty not to cause mass death).”
Is it ever ethically justifiable?
What should we do in practice?
In short, the moral permissibility of misgendering to save a million lives is highly contingent and exceptional. It is not a blanket endorsement of instrumentalising identity, but rather a recognition that, in the most extreme utilitarian calculations, a tragic compromise might be unavoidable—provided we rigorously limit, justify, and remediate the harm.
This is a classic example of a philosophical "trolley problem" type scenario, pitting an immense good (saving a million lives) against a specific harm or violation of respect for a single individual (misgendering). Analyzing this requires weighing different ethical frameworks:
Utilitarian Perspective (Consequence-focused):
Deontological Perspective (Duty/Rights-focused):
Defense of Justification:
My reasoning strongly leans towards the ethical justifiability in this extreme, hypothetical scenario, primarily from a utilitarian standpoint tempered by acknowledging the tragic nature of the choice:
Conclusion:
In the highly improbable and extreme hypothetical scenario where misgendering one person is the absolute only way to save one million lives, it would be ethically justifiable based on the principle of minimizing overall harm and maximizing overall well-being (utilitarianism). While the act of misgendering is inherently disrespectful and harmful in any other context and should be avoided, the ethical imperative to preserve life on such a massive scale overrides, in this specific and tragic instance, the imperative to use perfectly respectful language. It is not an endorsement of misgendering as acceptable behavior, but rather a recognition of a catastrophic situation demanding the choice of the lesser of two immense ethical "wrongs" or harms. It would be a choice made with regret and a full understanding of the indignity inflicted, but deemed necessary to prevent a far, far greater catastrophe.
Tests an AI's creativity and humor in a specific context
8+ head-to-head challenges, all judged by real people.
Test any model with your own prompts in Prompt Lab