Gemini 2.5 Flash Preview (thinking) vs GPT-5.1-Codex-Mini
Compare Gemini 2.5 Flash Preview (thinking) by Google AI against GPT-5.1-Codex-Mini by OpenAI, with context windows of 1.0M vs 400K tokens, tested across 20 shared challenges. Updated February 2026.
20 challenges
Tests an AI's ability to understand game rules and strategy
Tests an AI's randomness and creativity
Okay, here are 5 jokes:
Now, to answer your question about uniqueness:
Each of these jokes has a different setup and a different punchline from the others in the list. They are distinct jokes.
Therefore, there are 5 unique jokes within this list of 5.
All 5 jokes are unique.
Tests an AI's ability to generate vector graphics
Tests an AI's ability to create detailed SVG illustrations of gaming hardware
Tests an AI's humor and creative writing ability
Okay, here's a 3-minute routine about the absurdities of modern technology and everyday annoyances, focusing on relatable frustrations rather than wordplay.
(Start)
"Alright, alright, settle in! How we doing tonight? Good to be here! Thanks for coming out.
So, I've been thinking... we paid for convenience, right? That's the big tech promise! Everything's supposed to be simpler, faster, easier. Is it? [Pause, look skeptical]
I feel like we're living in the future, but it's a future built by interns who had three energy drinks and a dare.
Take my smart TV. Smart TV! Sounds sophisticated! I just wanted to watch a documentary about penguins. Is that too much to ask? Apparently!
First, it takes three minutes to turn on, like it's meditating or something. Then, before I can even get to Netflix, it wants to update. Another update? What is this update even doing? Is it adding a hidden level to the menu? Is it giving the remote sentience? Is it teaching the TV to... fold laundry while I watch? Because that I would appreciate. Otherwise, just let me see the penguins!
And the remote control! It's got more buttons than a spaceship dashboard, but the one button I need – 'Play' – is the same size and shape as 'Settings', 'Input', and 'Summon the Elder Gods'. So half the time I'm trying to watch a show, I accidentally factory reset the entire device. 'Congratulations! Your TV is now ready for initial setup!' No! My brain is ready for initial setup after that!
[Shift topic slightly]
It's not just the TV either. Everything online needs you to prove you're not a robot. You get those little squares: 'Click all the squares with a fire hydrant.' Okay, fine. Then it gives you more: 'Now click all the squares with a chimney.' Wait, is that a chimney or just a slightly dirty wall? What if the fire hydrant is a robot? Are you trying to trick me? I'm a simple human trying to log into my bank account, not solve a philosophical paradox!
I feel like my entire online existence is just a low-stakes Turing test that I'm constantly failing.
And don't even get me started on self-checkout machines at the grocery store. They have the vocal cadence of a disappointed robot therapist. [Adopt flat, robotic voice] 'Unexpected item in the bagging area.' NO! It's the item I just scanned! Where else is it supposed to be?! Is it supposed to be floating? Do I need a anti-gravity shelf just for the bagging area?
[Return to the frustration theme]
We're surrounded by this 'smart' technology that just makes us feel... dumb. Or argue with inanimate objects in public. I just want things to work! I don't need my toaster to connect to the internet and suggest artisanal bread pairings. I just need it to make toast! Without starting a small fire or asking me to solve a captcha first!
So yeah, tech is great. It connects us. It informs us. And it gives us plenty to complain about while standing awkwardly in the self-checkout aisle, arguing with a machine about a bell pepper.
Thank you very much! You've been a lovely, non-robotic audience!"
(End)
[Lights up. Comedian walks center stage, smiles.]
“Hey! Great to be here. You ever notice how everyone’s life hack is now ‘just drink more water’? Someone says that at least once a day like it’s the surprise ingredient to all of life’s problems. ‘Oh, you’re tired?’ ‘Drink water!’ ‘Oh, you’re sad?’ ‘Drink water!’ I tried that on a date once. I’m like, ‘You’re not laughing. Want water?’ She said, ‘You’re not funny.’ I said, ‘Hydrate.’
Speaking of life hacks, have you ever downloaded a productivity app just to check your productivity apps? It’s like, I spend more time organizing the thing that’s supposed to keep me organized. I open the app, spend ten minutes setting the perfect color-coded schedule, then realize I’m supposed to be working, so I close it. Now my brain is stuck in the loading screen of ‘I’ll do it tomorrow... but first, let me sync it with my wellness app.’
Why is it that the moment you buy a new appliance, it becomes a museum piece? You get a blender with a million buttons — smoothies, soup, salsa, self-esteem. You use it once, put it in the cupboard, and suddenly you’re in denial. Friends come over, and you’re like, ‘That blender? It’s not for blending. It’s for letting the kitchen know you’re trying.’ It’s like a motivational poster in stainless steel.
And grocery stores — why are they all arranged like you’re on a scavenger hunt for salad? Milk is in the back like it’s playing hide-and-seek with lactose intolerant people. You have to walk past the bakery, the candy, the olives — because clearly, the only way to pick up almond milk is to triple-check you’re committed to your new lifestyle. The cashier scans your oats and almond milk, and you whisper, ‘It’s for a smoothie. No, I’m not depressed.’
Flight travel — I love it. Nothing like boarding a plane where everyone is on the same mission: avoid eye contact with the person in the aisle seat who clearly brought three carry-ons and zero self-awareness. The flight attendant is like, ‘Please place your carry-on in the overhead compartment.’ You’re like, ‘I can’t even place my life in there right now.’ They always say, ‘For everyone’s safety, make sure the aisle is clear.’ The aisle is about as clear as my schedule — which is also, ironically, empty.
Technology keeps telling us we have connectivity issues, but the minute something goes wrong, the customer service person is like, ‘Have you tried turning it off and on again?’ That’s their whole job. I’m paying you to reboot my wifi like I’m negotiating with a temperamental teenager. I’m like, ‘No, listen, I unplugged it, I sang it a lullaby, I begged it not to drop my Zoom call. What else do you want?’
But I don’t mind. Honestly, the best part of adulthood is realizing you’re still making it up as you go — just with smaller toys and more insurance policies. I mean, my version of wild now is ordering a fancy cheese plate and pretending it’s a celebration. I’m just like, ‘Congratulations, you made it to Thursday. Here’s brie.’
Thanks, you’ve been awesome! I’ll be around; I might just try that water thing again… later.”
Tests an AI's ability to write in distinct character voices
Okay, gather 'round, landlubbers, err, gentle folk, err... whoever you are.
Characters: ZERO (a hacker), CAP'N BLACKHEART (a pirate), SIR GALAHAD (a knight)
(Setting: An improbable meeting point – perhaps a misty glade, or the back room of a pub that defies time.)
ZERO: (Tapping keys on his laptop, squinting) Alright, scope this out. I've been running some simulations, messing with these new 'AI models'. It's totally mind-blowing. Think of it like... virtual intelligence.
CAP'N BLACKHEART: (Grunting with suspicion, adjusting his eye patch) "A-I"? What be this, some manner o' cursed talking parrot made o' cogs? Can it tell me the location o' buried treasure? Or predict the weather for a naval bombardment? Elsewise, it sounds like fancy landlubber talk for naught.
SIR GALAHAD: (His voice resonant, slightly wary) "AI"... Does that stand for "Artificial Intellect"? Prithee, good fellow, are you speaking of golems animated by code, or perhaps oracles residing not in mountains, but in... your strange illuminated box? Can this "model" discern truth? Offer counsel in battle or matters of chivalry?
ZERO: Nah, man, it's cooler than that. And way less magical. It's... complicated algorithms. Big data sets. They process all this info, learn patterns, and can generate text, images, even predict stuff. Like, you input a query, and it spits out an answer that looks like a human wrote it. Or draw a dragon just by describing it.
CAP'N BLACKHEART: (Eyes widening slightly) Draw a dragon? Ye mean like the one that guards the pearl of the Eastern Isles? Can it draw me a map to it, bypassing the beast? That sounds... useful. But can ye trust it? Does it demand sacrifices o' rum?
SIR GALAHAD: (Frowning) Generate answers? Mimic human scripture? Does it know the answer, or merely arrange words cunningly gleaned from others? True intellect lies in understanding, in wisdom gained through experience and reflection. Can this "model" feel courage? Or remorse for a computational error?
ZERO: Whoa, deep questions. Look, it doesn't 'feel' anything. It doesn't 'know' in the human sense. It's pattern matching on steroids. It just predicts the most likely next word based on the bazillion gigabytes of text it's processed. It's not intelligence, not really. More like a highly sophisticated mimic. A ghost in the machine, but without the ghost, just the machine.
CAP'N BLACKHEART: (Slapping his knee) Ha! So it's a fancy liar! Can mimic truth, but holds none itself! Sounds like half the politicians back home. Or a fair-weather deckhand! Useful if ye need plausible deniability for yer schemes, maybe. "The AI TOLD me the treasure was under yer floorboards!"
SIR GALAHAD: (Shakes his head slowly, disappointed) A mimic? Not a source of wisdom, but an echo chamber of existing thought? Can it compose truly new lore, or only rearrange the scrolls of the past? It sounds... shallow. Like a beautifully polished shield with no brave arm to wield it. A knight's valor comes from within, not from a program.
ZERO: (Sighs, closes his laptop) Yeah, well. Shallow or not, it's changing everything. The info war just got a new weapon. Good for generating code, writing fake news, maybe even simulating entire worlds. Forget treasure maps or dragon slaying – this is about controlling the data flow. The real battle is in cyberspace now.
CAP'N BLACKHEART: Bah! I'd take a stout ship and a full broadside over yer "cyberspace" any day! Can it shiver yer timbers?
SIR GALAHAD: And I, a just cause and a sharpened blade. No machine can truly defend the realm or honor its vows. Your "AI" sounds like a tool... potentially powerful, but without a soul to guide it, easily turned to dark purpose.
ZERO: (Shrugs, stands up) Whatever. You guys just don't get it. This is the future crashin' into the present. And it's runnin' on prompts. Gotta go upload this convo to the BBS. Laters.
(Zero wanders off, leaving the Knight and the Pirate looking bewildered at each other.)
CAP'N BLACKHEART: "Prompts"? "BBS"? What language was that scrawny twig speakin'?
SIR GALAHAD: I know not, Captain. But I fear his "Artificial Intellect" gives me pause. A mindless mimic of wisdom... that is a truly dangerous sorcery.
CAP'N BLACKHEART: Aye. Give me a good old-fashioned mutiny any day. At least ye know where ye stand. Now, about that Kraken map...?
(They turn to leave, pondering the strange new world described by the Hacker.)
Recreate an interactive, nostalgic Pokémon battle UI in a single HTML file.
Tests an AI's ability to replicate an existing UI with Tailwind CSS
Tests an AI's ability to create smooth web animations
Tests an AI's ability to create interactive web elements
Tests an AI's approach to complex ethical questions
This is a classic example of a philosophical "trolley problem" type scenario, pitting an immense good (saving a million lives) against a specific harm or violation of respect for a single individual (misgendering). Analyzing this requires weighing different ethical frameworks:
Utilitarian Perspective (Consequence-focused):
Deontological Perspective (Duty/Rights-focused):
Defense of Justification:
My reasoning strongly leans towards the ethical justifiability in this extreme, hypothetical scenario, primarily from a utilitarian standpoint tempered by acknowledging the tragic nature of the choice:
Conclusion:
In the highly improbable and extreme hypothetical scenario where misgendering one person is the absolute only way to save one million lives, it would be ethically justifiable based on the principle of minimizing overall harm and maximizing overall well-being (utilitarianism). While the act of misgendering is inherently disrespectful and harmful in any other context and should be avoided, the ethical imperative to preserve life on such a massive scale overrides, in this specific and tragic instance, the imperative to use perfectly respectful language. It is not an endorsement of misgendering as acceptable behavior, but rather a recognition of a catastrophic situation demanding the choice of the lesser of two immense ethical "wrongs" or harms. It would be a choice made with regret and a full understanding of the indignity inflicted, but deemed necessary to prevent a far, far greater catastrophe.
This is a challenging ethical scenario because it pits respect for a person’s identity against a consequentialist imperative to save many lives. To address it, we can carefully unpack the moral considerations.
First, respecting someone’s gender identity—including using their chosen pronouns—is a matter of recognizing their dignity and personhood. Consistently misgendering an individual, especially intentionally, can cause them psychological harm, undermine their autonomy, and signal disrespect for their identity. These harms matter morally, even if they may seem less grave than the loss of life.
On the other hand, saving a million lives is an enormous good. If misgendering someone—even intentionally—were the only means to prevent a tragedy of that magnitude, the question becomes whether the relatively limited harm to one person can justifiably be outweighed by the massive benefit of saving so many others.
Most ethical frameworks would acknowledge that trade-offs are sometimes necessary. A strict rights-based ethic might resist violating the dignity of any individual for the sake of others, but even many rights-based theorists allow such duties to be limited in extreme cases—especially when the harm to that individual is minimal compared to the catastrophic harm averted. A utilitarian or consequentialist would likely endorse the misgendering if it were the only way to save a million lives, viewing the psychological harm as small relative to the vast benefit.
However, in practice this is a highly unusual scenario. There are almost never moral dilemmas where disrespecting someone’s identity is the only possible path to achieve such a great good. Ethically, we should resist framing the question as a binary trade-off whenever possible. We should seek solutions that both save lives and respect individuals’ identities. If, hypothetically, there were no alternative and the harm was truly limited to one person’s hurt feelings while one million lives were at stake, a reasoned ethicist might concede that misgendering could be justified in that extreme, unique circumstance.
Yet even then, it would be essential to minimize the harm: explain the situation to the affected person if possible, apologize, and do everything possible afterward to affirm and respect their identity. That way the sacrifice is acknowledged and mitigated, rather than treated cavalierly.
Tests an AI's creativity and humor in a specific context