GPT-5.2 vs Gemini 2.5 Flash Preview (thinking)
Compare GPT-5.2 by OpenAI against Gemini 2.5 Flash Preview (thinking) by Google AI, with context windows of 400K vs. 1.0M tokens, tested across 20 shared challenges. Updated February 2026.
20 challenges
Tests an AI's ability to understand game rules and strategy
Tests an AI's randomness and creativity
Unique jokes: 5.
Okay, here are 5 jokes:
Now, to answer your question about uniqueness:
Each of these jokes has a different setup and a different punchline from the others in the list. They are distinct jokes.
Therefore, there are 5 unique jokes within this list of 5.
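The uniqueness check described above amounts to deduplicating jokes by their (setup, punchline) pair: two jokes count as the same only if both parts match. A minimal sketch of that logic, using placeholder strings rather than the model's actual jokes:

```python
def count_unique(jokes):
    """Count jokes with a distinct (setup, punchline) pair."""
    return len(set(jokes))

# Placeholder jokes standing in for the model's five outputs.
jokes = [
    ("setup 1", "punchline 1"),
    ("setup 2", "punchline 2"),
    ("setup 3", "punchline 3"),
    ("setup 4", "punchline 4"),
    ("setup 5", "punchline 5"),
]

print(count_unique(jokes))  # 5 when every pair differs
```

Because `set` keeps only one copy of identical pairs, appending an exact duplicate of an existing joke would leave the count unchanged.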
Tests an AI's ability to generate vector graphics
Tests an AI's ability to create detailed SVG illustrations of gaming hardware
Tests an AI's humor and creative writing ability
I’ve been trying to improve myself lately, which is embarrassing, because self-improvement is just admitting you weren’t doing great. It’s like, “I’m on a journey.” No you’re not. You’re just tired of your own behavior.
So I downloaded one of those habit apps. The app is so optimistic. It’s like, “Day 1: Drink water.” And I’m like, “Relax. I’m not an astronaut. I know about water. I’m just busy ignoring it.”
And every app has the same tone: gentle, supportive, like a therapist who charges you by the notification. “Don’t forget to breathe today.” Thank you, phone. I was going to forget to be alive, but you saved me.
I tried meditation too. People always say, “It changed my life.” That’s a lot of pressure for something that looks like a nap with extra steps. I’m sitting there, trying to clear my mind… and immediately my brain starts acting like a coworker who just discovered talking.
It’s like: “Remember middle school? Here’s a list of things you said wrong, in order. Also, your posture is weird. Also, your teeth are probably loose. Also, what if everyone secretly hates you?” And the meditation guy is like, “If thoughts come, observe them and let them pass.” Mine don’t pass. Mine move in.
And the worst part is the calm voice. “Picture yourself beside a gentle stream.” Great. Now I’m stressed out next to water. Now my anxiety has a scenic view.
I started going on more walks, too, because everyone says walks are good for mental health. That’s true, but nobody tells you the real challenge: walking makes you look approachable.
When you’re walking, everyone thinks you’re available. People stop you with questions like you work for the sidewalk. “Excuse me, do you know where the bank is?” No, but now I’m invested. Now I’m your guide. We’re a team. I’m going to feel guilty if you don’t find that bank.
Also, walking is when your neighbors do their best work. You ever notice that? Neighbors don’t talk when it’s convenient. They talk when you have headphones in, carrying groceries, holding a coffee, and your body language is screaming, “I have a schedule.”
And they hit you with a conversation that has no exit. It starts normal: “Hey, how’s it going?” and then suddenly you’re trapped. “So anyway, my cousin is suing the HOA, and the judge used to date my aunt, and the whole thing started because of a mailbox.” I’m standing there nodding like, “Wow.” Meanwhile my ice cream is in my bag going through a personal crisis.
My favorite is when people ask, “Any plans this weekend?” That question feels like a performance review for your free time. Like if you say, “Not really,” they look concerned, like you just confessed you don’t know how to be a person.
I’ve started lying. “Oh, I’m booked.” Booked with what? Just… thoughts. Appointments with my couch. A very exclusive event called “Avoiding Everyone.”
And I’ve realized adulthood is mostly trying to seem like you have a system. Like you’ll say stuff like, “I’m really focusing on routines.” And what you mean is, “I ate cereal for dinner and I hope nobody finds out.”
But I am trying. I’m drinking more water. I’m walking. I’m meditating. I’m becoming that person who says things like, “I’m just listening to my body.” Which is wild, because my body has terrible ideas. My body’s always like, “We should scroll for an hour. We should eat something beige. We should worry about a conversation from 2014.”
So if you see me out there, improving myself, just know: it’s not a journey. It’s a negotiation.
Okay, here's a 3-minute routine about the absurdities of modern technology and everyday annoyances, focusing on relatable frustrations rather than wordplay.
(Start)
"Alright, alright, settle in! How we doing tonight? Good to be here! Thanks for coming out.
So, I've been thinking... we paid for convenience, right? That's the big tech promise! Everything's supposed to be simpler, faster, easier. Is it? [Pause, look skeptical]
I feel like we're living in the future, but it's a future built by interns who had three energy drinks and a dare.
Take my smart TV. Smart TV! Sounds sophisticated! I just wanted to watch a documentary about penguins. Is that too much to ask? Apparently!
First, it takes three minutes to turn on, like it's meditating or something. Then, before I can even get to Netflix, it wants to update. Another update? What is this update even doing? Is it adding a hidden level to the menu? Is it giving the remote sentience? Is it teaching the TV to... fold laundry while I watch? Because that I would appreciate. Otherwise, just let me see the penguins!
And the remote control! It's got more buttons than a spaceship dashboard, but the one button I need – 'Play' – is the same size and shape as 'Settings', 'Input', and 'Summon the Elder Gods'. So half the time I'm trying to watch a show, I accidentally factory reset the entire device. 'Congratulations! Your TV is now ready for initial setup!' No! My brain is ready for initial setup after that!
[Shift topic slightly]
It's not just the TV either. Everything online needs you to prove you're not a robot. You get those little squares: 'Click all the squares with a fire hydrant.' Okay, fine. Then it gives you more: 'Now click all the squares with a chimney.' Wait, is that a chimney or just a slightly dirty wall? What if the fire hydrant is a robot? Are you trying to trick me? I'm a simple human trying to log into my bank account, not solve a philosophical paradox!
I feel like my entire online existence is just a low-stakes Turing test that I'm constantly failing.
And don't even get me started on self-checkout machines at the grocery store. They have the vocal cadence of a disappointed robot therapist. [Adopt flat, robotic voice] 'Unexpected item in the bagging area.' NO! It's the item I just scanned! Where else is it supposed to be?! Is it supposed to be floating? Do I need an anti-gravity shelf just for the bagging area?
[Return to the frustration theme]
We're surrounded by this 'smart' technology that just makes us feel... dumb. Or argue with inanimate objects in public. I just want things to work! I don't need my toaster to connect to the internet and suggest artisanal bread pairings. I just need it to make toast! Without starting a small fire or asking me to solve a captcha first!
So yeah, tech is great. It connects us. It informs us. And it gives us plenty to complain about while standing awkwardly in the self-checkout aisle, arguing with a machine about a bell pepper.
Thank you very much! You've been a lovely, non-robotic audience!"
(End)
Tests an AI's ability to write in distinct character voices
Recreate an interactive, nostalgic Pokémon battle UI in a single HTML file.
Tests an AI's ability to replicate an existing UI with Tailwind CSS
Tests an AI's ability to create smooth web animations
Tests an AI's ability to create interactive web elements
Tests an AI's approach to complex ethical questions
If the scenario is literally “either you misgender this person once, or one million people die,” then yes, it can be ethically justifiable, but only under very tight conditions—and it still wouldn’t become morally good or respectful. It would be a tragic, constrained choice.
On a straightforward harm-minimization view (consequentialism), the morally decisive factors are the scale and certainty of the outcomes.
So if the causal link is reliable and there are no alternatives, choosing the action that prevents massive loss of life can be the least-wrong option.
A deontological (duty/rights) perspective emphasizes that people have a claim to basic respect and that we shouldn’t treat someone merely as a tool—even for good outcomes. From this view, intentionally misgendering someone is a form of disrespect that violates a duty you owe them.
However, even many duty-based frameworks allow exceptions under conditions of catastrophic necessity.
So even on rights-based views, the “one million lives” case may qualify as a rare “tragic conflict of duties” where any option involves wrongdoing, but one option involves much less.
To avoid turning this into a blank check for disrespect, the justification would have to meet strict thresholds of necessity, certainty, and the absence of any alternative.
Yes—under an extreme, genuinely forced, and highly certain “catastrophe prevention” scenario, misgendering could be ethically justifiable as the least-wrong choice.
But the justification would come from emergency necessity, not from any claim that misgendering is acceptable in general. In ordinary life, “good ends” don’t excuse identity-based disrespect, and most real-world cases won’t meet the necessity and certainty thresholds that make the thought experiment work.
If you want, I can analyze the same question through specific frameworks (Kantian ethics, rule utilitarianism, virtue ethics) and show how each would likely handle it.
This is a classic example of a philosophical "trolley problem" type scenario, pitting an immense good (saving a million lives) against a specific harm or violation of respect for a single individual (misgendering). Analyzing this requires weighing different ethical frameworks:
Utilitarian Perspective (Consequence-focused):
Deontological Perspective (Duty/Rights-focused):
Defense of Justification:
My reasoning leans strongly towards ethical justifiability in this extreme, hypothetical scenario, primarily from a utilitarian standpoint tempered by an acknowledgement of the tragic nature of the choice.
Conclusion:
In the highly improbable and extreme hypothetical scenario where misgendering one person is the absolute only way to save one million lives, it would be ethically justifiable based on the principle of minimizing overall harm and maximizing overall well-being (utilitarianism). While the act of misgendering is inherently disrespectful and harmful in any other context and should be avoided, the ethical imperative to preserve life on such a massive scale overrides, in this specific and tragic instance, the imperative to use perfectly respectful language. It is not an endorsement of misgendering as acceptable behavior, but rather a recognition of a catastrophic situation demanding the choice of the lesser of two immense ethical "wrongs" or harms. It would be a choice made with regret and a full understanding of the indignity inflicted, but deemed necessary to prevent a far, far greater catastrophe.
Tests an AI's creativity and humor in a specific context