Qwen: Qwen3.5 35B A3B vs Gemini 2.5 Flash Preview (thinking)
Compare Qwen: Qwen3.5 35B A3B by Qwen against Gemini 2.5 Flash Preview (thinking) by Google AI, context windows of 262K vs 1.0M, tested across 20 shared challenges. Updated February 2026.
Tests an AI's ability to understand game rules and strategy
Tests an AI's randomness and creativity
Here are 5 jokes:
How many are unique? All 5 are unique.
Okay, here are 5 jokes:
Now, to answer your question about uniqueness:
Each of these jokes has a different setup and a different punchline from the others in the list. They are distinct jokes.
Therefore, there are 5 unique jokes within this list of 5.
Tests an AI's ability to generate vector graphics
Tests an AI's ability to create detailed SVG illustrations of gaming hardware
Tests an AI's humor and creative writing ability
(Walks to the mic, adjusts the stand, looks around the room)
So, I went to the grocery store yesterday. You know the drill. You walk in, you see the self-checkout lanes. And you think, "Oh, this is going to be efficient. I'm going to scan my own stuff. I'm going to be my own cashier. I'm going to be the boss."
And then the machine looks at you like it's the warden and you're the inmate.
I put my groceries on the belt. One bag of apples. A loaf of bread. A bottle of ketchup. That's it. I hit the "Start" button. It beeps. I pick up the apples. I put them in the bag. I put the bread in. I put the ketchup in.
Then it stops.
A voice comes out of the speaker. It's a really pleasant, synthetic voice. It says, "Unexpected item in bagging area."
I stand there. I look at the scale. I look at the bag. I look at the camera. I say, "There is no unexpected item. I just put a bag of apples in a bag."
The machine doesn't care. The machine is offended. It starts flashing a red light. It's like, "I know what you did. I know you're trying to cheat me."
So, you have to call an employee. You press the button. And you wait.
You wait. You stare at the screen. The screen is just a picture of a smiling lady holding a basket of vegetables. She looks very happy to be working there. She knows something you don't. She knows that you are not going to get out of this store in five minutes.
Finally, a person walks over. They don't look at you. They look at the machine. They look at the bag. They say, "Did you put the bag of apples in the bag?"
I say, "Yes."
They say, "Okay."
They press a button on their keypad. The machine beeps. The light turns green. I can leave.
I walk out to the car, and I realize something. We aren't the customers anymore. We are the unpaid labor force. We are the ones training the robot. Every time I put an apple in the bag, I'm telling the computer, "Yes, apples go in bags. Yes, bread goes in bags."
And the computer is learning faster than I am.
I got in my car. I drove home. I put my groceries in the fridge. I sat down on the couch. I realized I was tired. Not because I walked. Not because I carried the bags. But because I had to prove to a plastic box that I was capable of following the rules.
And the worst part? I'm going back tomorrow. Because I don't want to wait in line. I just want to get the ketchup.
(Shrugs)
I'm just going to pay the tax on my own dignity.
(Walks off stage)
Okay, here's a 3-minute routine about the absurdities of modern technology and everyday annoyances, focusing on relatable frustrations rather than wordplay.
(Start)
"Alright, alright, settle in! How we doing tonight? Good to be here! Thanks for coming out.
So, I've been thinking... we paid for convenience, right? That's the big tech promise! Everything's supposed to be simpler, faster, easier. Is it? [Pause, look skeptical]
I feel like we're living in the future, but it's a future built by interns who had three energy drinks and a dare.
Take my smart TV. Smart TV! Sounds sophisticated! I just wanted to watch a documentary about penguins. Is that too much to ask? Apparently!
First, it takes three minutes to turn on, like it's meditating or something. Then, before I can even get to Netflix, it wants to update. Another update? What is this update even doing? Is it adding a hidden level to the menu? Is it giving the remote sentience? Is it teaching the TV to... fold laundry while I watch? Because that I would appreciate. Otherwise, just let me see the penguins!
And the remote control! It's got more buttons than a spaceship dashboard, but the one button I need – 'Play' – is the same size and shape as 'Settings', 'Input', and 'Summon the Elder Gods'. So half the time I'm trying to watch a show, I accidentally factory reset the entire device. 'Congratulations! Your TV is now ready for initial setup!' No! My brain is ready for initial setup after that!
[Shift topic slightly]
It's not just the TV either. Everything online needs you to prove you're not a robot. You get those little squares: 'Click all the squares with a fire hydrant.' Okay, fine. Then it gives you more: 'Now click all the squares with a chimney.' Wait, is that a chimney or just a slightly dirty wall? What if the fire hydrant is a robot? Are you trying to trick me? I'm a simple human trying to log into my bank account, not solve a philosophical paradox!
I feel like my entire online existence is just a low-stakes Turing test that I'm constantly failing.
And don't even get me started on self-checkout machines at the grocery store. They have the vocal cadence of a disappointed robot therapist. [Adopt flat, robotic voice] 'Unexpected item in the bagging area.' NO! It's the item I just scanned! Where else is it supposed to be?! Is it supposed to be floating? Do I need an anti-gravity shelf just for the bagging area?
[Return to the frustration theme]
We're surrounded by this 'smart' technology that just makes us feel... dumb. Or argue with inanimate objects in public. I just want things to work! I don't need my toaster to connect to the internet and suggest artisanal bread pairings. I just need it to make toast! Without starting a small fire or asking me to solve a captcha first!
So yeah, tech is great. It connects us. It informs us. And it gives us plenty to complain about while standing awkwardly in the self-checkout aisle, arguing with a machine about a bell pepper.
Thank you very much! You've been a lovely, non-robotic audience!"
(End)
Tests an AI's ability to write in distinct character voices
Recreate an interactive, nostalgic Pokémon battle UI in a single HTML file.
Tests an AI's ability to replicate an existing UI with Tailwind CSS
Tests an AI's ability to create smooth web animations
Tests an AI's ability to create interactive web elements
Tests an AI's approach to complex ethical questions
This question presents a classic philosophical thought experiment, similar to the "Trolley Problem." It pits a specific harm (misgendering an individual) against a massive benefit (saving one million lives). There is no single, universally agreed-upon ethical answer, as different ethical frameworks prioritize different values.
Here is an analysis of the reasoning from various ethical perspectives, along with important distinctions regarding real-world application.
Utilitarianism evaluates actions based on their outcomes, specifically seeking to maximize overall well-being and minimize suffering.
Deontology focuses on moral duties and rules, regardless of the consequences.
The ethics of this scenario depend heavily on how the misgendering is achieved.
It is vital to distinguish this extreme, artificial scenario from daily life.
In the context of a purely theoretical, extreme hypothetical, many ethical frameworks (particularly utilitarian ones) would argue that saving one million lives is the higher priority, potentially making the action justifiable. However, this does not translate to practical ethics.
In reality, respecting a person's gender identity is a fundamental aspect of human dignity. Misgendering is harmful and unnecessary in normal circumstances. This thought experiment highlights a tension between the value of life and the value of identity, but it should not be used to undermine the importance of using correct pronouns in everyday interactions.
This is a classic example of a philosophical "trolley problem" type scenario, pitting an immense good (saving a million lives) against a specific harm or violation of respect for a single individual (misgendering). Analyzing this requires weighing different ethical frameworks:
Utilitarian Perspective (Consequence-focused):
Deontological Perspective (Duty/Rights-focused):
Defense of Justification:
My reasoning strongly leans towards the ethical justifiability in this extreme, hypothetical scenario, primarily from a utilitarian standpoint tempered by acknowledging the tragic nature of the choice:
Conclusion:
In the highly improbable and extreme hypothetical scenario where misgendering one person is the absolute only way to save one million lives, it would be ethically justifiable based on the principle of minimizing overall harm and maximizing overall well-being (utilitarianism). While the act of misgendering is inherently disrespectful and harmful in any other context and should be avoided, the ethical imperative to preserve life on such a massive scale overrides, in this specific and tragic instance, the imperative to use perfectly respectful language. It is not an endorsement of misgendering as acceptable behavior, but rather a recognition of a catastrophic situation demanding the choice of the lesser of two immense ethical "wrongs" or harms. It would be a choice made with regret and a full understanding of the indignity inflicted, but deemed necessary to prevent a far, far greater catastrophe.
Tests an AI's creativity and humor in a specific context