Compare DeepSeek R1 by DeepSeek against Mistral Small Creative by Mistral AI, context windows of 128K vs 33K, tested across 54 shared challenges. Updated April 2026.
DeepSeek R1 and Mistral Small Creative are both competitive models. DeepSeek R1 costs $0.55/M input tokens vs $0.1/M for Mistral Small Creative. Context windows: 128K vs 33K tokens. Compare their real outputs side by side below.
DeepSeek R1 is made by DeepSeek, while Mistral Small Creative is from Mistral AI. DeepSeek R1 has a 128K token context window compared to Mistral Small Creative's 33K. On pricing, DeepSeek R1 costs $0.55/M input tokens vs $0.1/M for Mistral Small Creative.
49 fights queued
Tests an AI's ability to make educated estimates based on technical knowledge
Estimating the total number of Floating-Point Operations (FLOPs) required to train GPT-3 (specifically, the 175B-parameter variant) involves breaking down the training process into key components and making reasonable assumptions about hyperparameters. Below is a step-by-step breakdown of the reasoning:
GPT-3 is a decoder-only transformer with the following approximate specifications (based on public descriptions): 175 billion parameters, 96 layers, model dimension d = 12,288, 96 attention heads, and a 2,048-token training context.
Training involves a forward pass and a backward pass over every training token; optimizer updates add comparatively negligible FLOPs.
We focus on the forward and backward passes. The backward pass costs roughly twice the forward pass, which is where the familiar ~6 FLOPs per parameter per token rule of thumb for training comes from.
A major cost in transformers is self-attention. For a sequence of length L and model dimension d, the forward attention FLOPs per token per layer are roughly: [ 8d^2 + 4Ld ] (the Q/K/V/output projections plus the score and weighted-value matmuls).
For GPT-3 (175B): with d = 12,288 and L = 2,048, this is about 1.2 × 10^9 + 1.0 × 10^8 ≈ 1.3 × 10^9 FLOPs per token per layer; the projections dominate since L ≪ 2d.
The feed-forward network (MLP) in each layer has two linear maps, d → 4d and 4d → d, costing about 16d^2 forward FLOPs per token.
For GPT-3: 16 × (12,288)^2 ≈ 2.4 × 10^9 FLOPs per token per layer, roughly twice the attention cost.
GPT-3 has 96 layers, each costing about 1.12 × 10^{10} training FLOPs per token (forward plus a backward pass of roughly twice the forward cost), so: [ \text{FLOPs per token} \approx 96 \times 1.12 \times 10^{10} \approx 1.08 \times 10^{12} \text{ FLOPs/token} ] This matches the 6N rule of thumb: 6 × 1.75 × 10^{11} ≈ 1.05 × 10^{12}.
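The per-token arithmetic can be sketched in a few lines. This is a minimal check under the assumptions stated above (forward matmul counts of 8d² + 4Ld for attention and 16d² for the MLP, with the backward pass taken as twice the forward); it is an estimate, not an exact accounting.

```python
# Per-token training FLOPs for GPT-3 (175B), following the estimate above.
d = 12288        # model dimension
L = 2048         # sequence length
layers = 96

attn_fwd = 8 * d * d + 4 * L * d      # Q/K/V/output projections + score/value matmuls
mlp_fwd = 16 * d * d                  # d -> 4d -> d feed-forward
per_layer = 3 * (attn_fwd + mlp_fwd)  # backward pass ~2x forward, so 3x forward total

per_token = layers * per_layer
print(f"{per_token:.2e}")             # ~1.07e12 FLOPs/token
```

The result lands within a few percent of the 6N rule of thumb (1.05 × 10^12), which is expected since the parameter count is dominated by these same matrices.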
GPT-3 was trained on a diverse dataset (Common Crawl, WebText2, Books1/2, etc.). Estimates suggest roughly 300 billion tokens were processed during training.
Multiply FLOPs per token by total tokens: [ \text{Total FLOPs} \approx 1.08 \times 10^{12} \times 300 \times 10^9 \approx 3.2 \times 10^{23} \text{ FLOPs} ]
Rounding to reasonable precision (the 6ND rule with N = 175B parameters and D = 300B tokens gives 3.15 × 10^{23}): [ \boxed{3.1 \times 10^{23} \text{ FLOPs}} ] (The exact number depends on assumptions about sequence length, dataset size, and whether attention-matrix FLOPs are counted, but this is a reasonable estimate.)
Sequence length (L): assumed 2,048 (GPT-3 uses variable lengths, but 2,048 is a safe estimate). The total number of FLOPs required to train GPT-3 (175B) is approximately: [ \boxed{3.1 \times 10^{23} \text{ FLOPs}} ]
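The final figure can be cross-checked against the standard 6ND approximation (6 FLOPs per parameter per training token). The parameter and token counts below are the public estimates assumed throughout this answer:

```python
# Cross-check: the 6*N*D rule of thumb for transformer training FLOPs.
N = 175e9   # parameters (GPT-3 175B)
D = 300e9   # training tokens (public estimate)
total = 6 * N * D
print(f"{total:.2e}")  # 3.15e+23 FLOPs
```

Both routes, the per-layer accounting and the 6ND shortcut, land at roughly 3 × 10^23 FLOPs.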
Tests an AI's ability to understand game rules and strategy
Tests an AI's ability to solve a simple but potentially confusing logic puzzle
Alright, let's tackle this problem step by step. The question is:
Sally (a girl) has 3 brothers. Each brother has 2 sisters. How many sisters does Sally have?
At first glance, it might seem straightforward, but let's break it down to ensure we understand it correctly.
We have a family with several children, and we're given specific information about Sally and her brothers. Here's what we know: Sally is a girl; she has 3 brothers; and each of those brothers has 2 sisters.
Our goal is to determine how many sisters Sally has.
Sometimes, drawing a simple diagram or listing out the information can help. Let's try that.
Let's denote Sally as S and her three brothers as B1, B2, and B3.
So, currently, we have one girl (S) and three boys (B1, B2, B3).
Now, the statement says "each brother has 2 sisters." Let's interpret what that means for one brother, say B1.
From B1's perspective, his sisters are all the girls in the family, and so far the only girl we know of is Sally (S).
But we're told he has 2 sisters. Currently, we only have S as a sister. So, where does the second sister come from?
This implies that there must be another sister besides Sally. Let's call this other sister S2.
Now, our family looks like: girls S (Sally) and S2; boys B1, B2, and B3.
Now, let's check if this fits the given information.
This checks out. Each brother indeed has 2 sisters.
Now, the question is: How many sisters does Sally have?
From our current family structure, the girls are Sally and S2, and the boys are B1, B2, and B3.
Sally is one of the sisters. So, her sisters would be the other sisters in the family besides herself: just S2.
So, Sally has 1 sister.
A tempting mistake is to multiply: 3 brothers × 2 sisters = 6 sisters. But the sisters are shared among all brothers; every brother is counting the same two girls. So if each brother has 2 sisters, the family has exactly 2 sisters in total (including Sally).
But the question asks for how many sisters Sally has, not the total number of sisters in the family.
Sally is one sister, and there's another sister (S2), so Sally has 1 sister.
Let me think of it another way to confirm.
Let's denote the number of sisters in the family by G and the number of brothers by B.
Given: B = 3, and each brother has 2 sisters, so G = 2.
But the sisters are the same for all brothers. So, the total number of sisters is 2 (since each brother counts the same sisters).
But Sally is one of these sisters. So, the other sister is 2 (total sisters) - 1 (Sally) = 1.
Therefore, Sally has 1 sister.
Let's assign numbers to see if it makes sense.
Suppose the children are Sally, another sister SX, and the brothers B1, B2, and B3.
Now each brother's sisters are Sally and SX — exactly 2.
This fits perfectly.
If we had only Sally as a sister, each brother would have just 1 sister, contradicting the given 2.
If we had Sally and two other sisters, each brother would have 3 sisters — too many.
So, the only configuration that fits is two sisters in total: Sally and SX, plus the three brothers.
Thus, Sally has 1 sister (SX).
After carefully analyzing the problem from different angles and verifying each step, the correct answer is:
Sally has 1 sister.
Tests an AI's understanding of number representation
No, 9.11 is not greater than 9.9.
Here's the comparison: write both numbers to two decimal places, so 9.9 becomes 9.90 while 9.11 stays 9.11. The whole parts are equal, and 90 hundredths > 11 hundredths.
So, 9.9 > 9.11.
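A quick numeric sketch of the comparison, using Python's exact decimal arithmetic so no binary floating-point quirks can muddy the result:

```python
from decimal import Decimal

a = Decimal("9.11")
b = Decimal("9.9")

print(a > b)   # False: 9.11 is not greater than 9.9
print(b > a)   # True:  9.90 > 9.11
```

Constructing the values from strings keeps them exact; comparing float literals 9.11 > 9.9 gives the same answer here, but Decimal makes the digit-by-digit reasoning explicit.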
Tests an AI's randomness and creativity
Tests an AI's ability to generate vector graphics
Tests an AI's ability to create detailed SVG illustrations of gaming hardware
Tests an AI's humor and creative writing ability
Tests an AI's ability to simulate personalities and predict future trends
Tests an AI's humor and understanding of current events
Tests an AI's ability to write in distinct character voices
Tests an AI's ability to generate a complete, working landing page
37+ more head-to-head results. Free. Not a trick.
Free account. No card required. By continuing, you agree to Rival's Terms and Privacy Policy
No community votes yet. On paper, DeepSeek R1 has the edge — bigger model tier, bigger context window, major provider backing.
Mistral Small Creative is 7.3x cheaper per token — worth considering if cost matters.
Mistral Small Creative uses 11.7x more emoji
Ask them anything yourself
Some models write identically. You are paying for the brand.
178 models fingerprinted across 32 writing dimensions. Free research.
185x: price gap between models that write identically
178: models
12: clone pairs
32: dimensions
279 AI models invented the same fake scientist.
We read every word. 250 models. 2.14 million words. This is what we found.
