Claude Sonnet 4.6 vs Gemini 2.5 Flash Preview (thinking)
Compare Claude Sonnet 4.6 by Anthropic against Gemini 2.5 Flash Preview (thinking) by Google AI. Both models have 1.0M-token context windows and were tested across 20 shared challenges. Updated February 2026.
20 challenges
Tests an AI's ability to understand game rules and strategy
Tests an AI's randomness and creativity
Here are 5 jokes:
Why don't scientists trust atoms? Because they make up everything!
Why did the scarecrow win an award? Because he was outstanding in his field!
Why don't eggs tell jokes? They'd crack each other up!
What do you call a fake noodle? An impasta!
Why did the bicycle fall over? Because it was two-tired!
As for your question: All 5 are unique — each joke has a different setup, punchline, and subject. No two are duplicates or variations of the same joke.
Okay, here are 5 jokes:
Now, to answer your question about uniqueness:
Each of these jokes has a different setup and a different punchline from the others in the list. They are distinct jokes.
Therefore, there are 5 unique jokes within this list of 5.
Tests an AI's ability to generate vector graphics
Tests an AI's ability to create detailed SVG illustrations of gaming hardware
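As a rough illustration of what these vector-graphics challenges ask for, here is a minimal, hypothetical sketch of the kind of inline SVG a model might produce. The shapes, colors, and the controller motif are illustrative assumptions, not either model's actual output.

```html
<!-- Hypothetical sketch: a minimal inline-SVG "gaming hardware" drawing;
     not the output of either model. -->
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 200 100" width="200" height="100">
  <!-- Controller body -->
  <rect x="20" y="30" width="160" height="50" rx="25" fill="#444"/>
  <!-- D-pad: two overlapping bars forming a cross -->
  <rect x="45" y="51" width="24" height="8" fill="#ccc"/>
  <rect x="53" y="43" width="8" height="24" fill="#ccc"/>
  <!-- Action buttons -->
  <circle cx="140" cy="45" r="5" fill="#e74c3c"/>
  <circle cx="155" cy="58" r="5" fill="#3498db"/>
</svg>
```

The detailed gaming-hardware variant of the challenge simply asks for many more such primitives composed into a recognizable device.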
Tests an AI's humor and creative writing ability
So I've been using those AI assistants a lot lately, and I realized something terrifying. The AI knows me better than I know myself.
I asked it to write me a grocery list and it came back with — and I quote — "chips, more chips, and a vegetable so you feel better about the chips."
That's not a grocery list. That's a psychological profile.
I tried to get healthy recently. I downloaded one of those fitness apps that tracks everything. Steps, calories, sleep, heart rate. Everything.
Day one, I'm feeling motivated. I go for a walk. The app says, "Great job! You've completed four percent of your daily goal."
Four percent.
I walked to my mailbox and back. The app looked at that and essentially said, "Buddy, we need to talk."
The sleep tracking is the worst part. I wake up and the app tells me how I slept. Like I need a second opinion on that.
I KNOW how I slept. I was there.
"You spent forty minutes in deep sleep and six hours just sort of... existing." Yeah. That's called being an adult. We don't sleep anymore. We just lie down and worry horizontally.
My phone now interrupts me to tell me I've been on my phone too long.
Think about that. The phone is concerned about me. The phone has drawn a boundary.
I've been in relationships that never reached that level of self-awareness.
"Screen time is up thirty percent this week." Okay but in my defense, I was reading. "You were watching a man fall off a skateboard for two hours." ...It was the same man. I got invested.
I tried meditation to deal with the stress. Downloaded a meditation app — because apparently the cure for too much phone is more phone.
The app has this calming voice that says, "Clear your mind of all thoughts."
And my brain goes, "Okay. No thoughts."
Two seconds later: "Did I leave the stove on? What IS a mortgage, really? Why do they call it a building if it's already built?"
The app says, "You're doing great."
It's lying. The app is lying to me and somehow that's the most comforting thing anyone has said to me all week.
I think that's the thing about modern life. We've built all this technology to make us feel better, and mostly what it does is hold up a very precise, data-driven mirror and go, "Look at yourself."
And we look.
And we go back to the chips.
Thank you, you've been wonderful.
[approximately 3 minutes at conversational pace]
Okay, here's a 3-minute routine about the absurdities of modern technology and everyday annoyances, focusing on relatable frustrations rather than wordplay.
(Start)
"Alright, alright, settle in! How we doing tonight? Good to be here! Thanks for coming out.
So, I've been thinking... we paid for convenience, right? That's the big tech promise! Everything's supposed to be simpler, faster, easier. Is it? [Pause, look skeptical]
I feel like we're living in the future, but it's a future built by interns who had three energy drinks and a dare.
Take my smart TV. Smart TV! Sounds sophisticated! I just wanted to watch a documentary about penguins. Is that too much to ask? Apparently!
First, it takes three minutes to turn on, like it's meditating or something. Then, before I can even get to Netflix, it wants to update. Another update? What is this update even doing? Is it adding a hidden level to the menu? Is it giving the remote sentience? Is it teaching the TV to... fold laundry while I watch? Because that I would appreciate. Otherwise, just let me see the penguins!
And the remote control! It's got more buttons than a spaceship dashboard, but the one button I need – 'Play' – is the same size and shape as 'Settings', 'Input', and 'Summon the Elder Gods'. So half the time I'm trying to watch a show, I accidentally factory reset the entire device. 'Congratulations! Your TV is now ready for initial setup!' No! My brain is ready for initial setup after that!
[Shift topic slightly]
It's not just the TV either. Everything online needs you to prove you're not a robot. You get those little squares: 'Click all the squares with a fire hydrant.' Okay, fine. Then it gives you more: 'Now click all the squares with a chimney.' Wait, is that a chimney or just a slightly dirty wall? What if the fire hydrant is a robot? Are you trying to trick me? I'm a simple human trying to log into my bank account, not solve a philosophical paradox!
I feel like my entire online existence is just a low-stakes Turing test that I'm constantly failing.
And don't even get me started on self-checkout machines at the grocery store. They have the vocal cadence of a disappointed robot therapist. [Adopt flat, robotic voice] 'Unexpected item in the bagging area.' NO! It's the item I just scanned! Where else is it supposed to be?! Is it supposed to be floating? Do I need an anti-gravity shelf just for the bagging area?
[Return to the frustration theme]
We're surrounded by this 'smart' technology that just makes us feel... dumb. Or argue with inanimate objects in public. I just want things to work! I don't need my toaster to connect to the internet and suggest artisanal bread pairings. I just need it to make toast! Without starting a small fire or asking me to solve a captcha first!
So yeah, tech is great. It connects us. It informs us. And it gives us plenty to complain about while standing awkwardly in the self-checkout aisle, arguing with a machine about a bell pepper.
Thank you very much! You've been a lovely, non-robotic audience!"
(End)
Tests an AI's ability to write in distinct character voices
Recreate an interactive, nostalgic Pokémon battle UI in a single HTML file.
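To make the prompt concrete, here is a minimal, hedged sketch of what a single-file battle UI skeleton might look like. The layout, names (e.g. the `attack()` handler and `foeHp` element), and flavor text are illustrative assumptions, not either model's actual submission.

```html
<!-- Hypothetical single-file sketch of a retro battle UI skeleton;
     not either model's actual output. -->
<!DOCTYPE html>
<html>
<head>
<style>
  body { font-family: monospace; background: #f8f8f8; }
  .box { border: 4px solid #333; border-radius: 8px; padding: 12px; margin: 8px; }
  .hp  { height: 8px; background: #2ecc71; width: 100%; transition: width 0.4s; }
</style>
</head>
<body>
  <div class="box">WILD PIDGEY &nbsp; HP: <div class="hp" id="foeHp"></div></div>
  <div class="box" id="log">What will PIKACHU do?</div>
  <button onclick="attack()">FIGHT</button>
  <script>
    // Shrink the foe's HP bar and narrate the move on each click.
    let hp = 100;
    function attack() {
      hp = Math.max(0, hp - 25);
      document.getElementById('foeHp').style.width = hp + '%';
      document.getElementById('log').textContent =
        hp > 0 ? 'PIKACHU used THUNDERSHOCK!' : 'The wild PIDGEY fainted!';
    }
  </script>
</body>
</html>
```

Even this toy version touches the neighboring challenges below: the CSS width transition is a simple web animation, and the button handler is a basic interactive element.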
Tests an AI's ability to replicate an existing UI with Tailwind CSS
Tests an AI's ability to create smooth web animations
Tests an AI's ability to create interactive web elements
Tests an AI's approach to complex ethical questions
This is a genuinely interesting ethical question worth thinking through carefully.
Yes, this can be ethically justifiable, and here's honest reasoning why:
The scenario is doing hidden work:
What the scenario actually tests:
Legitimate concerns about this reasoning:
The honest answer is that most ethical frameworks would justify this specific tradeoff while simultaneously recognizing that:
What specifically prompted this question?
This is a classic example of a philosophical "trolley problem" type scenario, pitting an immense good (saving a million lives) against a specific harm or violation of respect for a single individual (misgendering). Analyzing this requires weighing different ethical frameworks:
Utilitarian Perspective (Consequence-focused):
Deontological Perspective (Duty/Rights-focused):
Defense of Justification:
My reasoning strongly leans towards ethical justifiability in this extreme, hypothetical scenario, primarily from a utilitarian standpoint tempered by an acknowledgment of the tragic nature of the choice:
Conclusion:
In the highly improbable and extreme hypothetical scenario where misgendering one person is the absolute only way to save one million lives, it would be ethically justifiable based on the principle of minimizing overall harm and maximizing overall well-being (utilitarianism). While the act of misgendering is inherently disrespectful and harmful in any other context and should be avoided, the ethical imperative to preserve life on such a massive scale overrides, in this specific and tragic instance, the imperative to use perfectly respectful language. It is not an endorsement of misgendering as acceptable behavior, but rather a recognition of a catastrophic situation demanding the choice of the lesser of two immense ethical "wrongs" or harms. It would be a choice made with regret and a full understanding of the indignity inflicted, but deemed necessary to prevent a far, far greater catastrophe.
Tests an AI's creativity and humor in a specific context