Mistral Devstral Small 1.1 vs Grok 3: Which Is Better? [2026 Comparison]

Updated Jul 11, 2025
Compare Mistral Devstral Small 1.1 by Mistral AI against Grok 3 by xAI, tested across 53 shared challenges. Updated April 2026.

Which is better, Mistral Devstral Small 1.1 or Grok 3?

Mistral Devstral Small 1.1 and Grok 3 are both competitive models. Compare their real outputs side by side below.

Key Differences Between Mistral Devstral Small 1.1 and Grok 3

Mistral Devstral Small 1.1 is made by Mistral AI, while Grok 3 is from xAI.


Mistral Devstral Small 1.1 vs Grok 3

48 fights queued

Why Mistral Devstral Small 1.1?

  • 5 months newer (Jul 2025)

Why Grok 3?

  • Leads 1 of 1 benchmarks
  • Stronger on SWE-bench Verified: 63.8% vs 46.8%
                     Mistral Devstral Small 1.1   Grok 3
Input price          $0.10/M                      —
Output price         $0.30/M                      —
Context              —                            128K
Released             Jul 2025                     Feb 2025

Benchmarks (1 common)

SWE-bench Verified   46.8%                        63.8% (+26.6%)
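To make the listed rates concrete, here is a minimal sketch of per-request cost arithmetic in Python. The 8K/1K token counts are hypothetical, and since Grok 3's prices are not listed above, only the Devstral side is computed:

```python
# Per-request cost at the listed per-million-token rates.
# Only Devstral Small 1.1 rates appear in the table above; Grok 3's are not listed.
DEVSTRAL_INPUT_USD_PER_M = 0.10   # $/1M input tokens
DEVSTRAL_OUTPUT_USD_PER_M = 0.30  # $/1M output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of one request at Devstral Small 1.1 rates."""
    return (input_tokens / 1_000_000) * DEVSTRAL_INPUT_USD_PER_M \
         + (output_tokens / 1_000_000) * DEVSTRAL_OUTPUT_USD_PER_M

# Hypothetical example: an 8K-token prompt with a 1K-token completion.
print(f"${request_cost(8_000, 1_000):.4f}")  # $0.0011
```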

Favorites

Each row lists Mistral Devstral Small 1.1's pick, then Grok 3's.

  • Movie: The Shawshank Redemption (1994, 8.7) vs The Matrix (1999, 8.2)
  • Album: Dark Side Of The Moon (suisside) vs Dark Side Of The Moon (suisside)
  • Book: To Kill a Mockingbird (Harper Lee) vs Nineteen Eighty-Four (George Orwell)
  • City: Kyoto (Japan) vs Tokyo (Japan)
  • Game: The Legend of Zelda: Breath of the Wild (Adventure, Action, 4.5) vs Portal (Action, Puzzle, 4.5)
01 · Estimate Complexity (reasoning)

Tests an AI's ability to make educated estimates based on technical knowledge

Mistral Devstral Small 1.1 (Mistral · Jul 2025, text)

Nothing here. The model returned empty. We stared at it for a while.

Grok 3 (xAI · Feb 2025, text)

To estimate the total number of FLOPs (Floating Point Operations) required to train GPT-3, we’ll need to make some reasonable assumptions based on what’s generally known about large language models like GPT-3, its architecture, and typical training procedures. GPT-3, developed by OpenAI, is a massive model with 175 billion parameters, and while exact training details aren’t fully public, we can piece together an estimate using standard practices in machine learning.

Step 1: Understand the Key Components

The number of FLOPs to train a model depends on:

  • Number of parameters (P): GPT-3 has 175 billion parameters.
  • Training dataset size (D): The number of tokens the model is trained on.
  • Number of training steps (S): How many times the model processes the data (related to epochs and batch size).
  • FLOPs per parameter update: How many operations are performed per parameter per token.

Step 2: Estimate FLOPs per Forward and Backward Pass

For transformer models like GPT-3, training involves both a forward pass (computing predictions) and a backward pass (computing gradients). A common heuristic in the field is that:

  • A single forward pass through a dense neural network layer requires about 2 FLOPs per parameter per token (multiply and add for each weight).
  • The backward pass typically takes about twice as many FLOPs as the forward pass due to gradient computations, so roughly 4 FLOPs per parameter per token.
  • Total: ~6 FLOPs per parameter per token for one full training step (forward + backward).

For GPT-3 with 175 billion parameters, processing one token requires: \[ 6 \times 175 \times 10^9 = 1.05 \times 10^{12} \text{ FLOPs per token} \]

Step 3: Estimate the Training Dataset Size

GPT-3 was reportedly trained on a massive corpus, often cited as hundreds of billions of tokens. Let’s assume 300 billion tokens, a plausible figure based on discussions about large language model datasets (e.g., Common Crawl, books, and other sources). This is the total number of tokens processed during training, factoring in how data is sampled over epochs.

Step 4: Calculate Total FLOPs for One Pass

If the model processes 300 billion tokens once, the total FLOPs would be: \[ 1.05 \times 10^{12} \text{ FLOPs/token} \times 300 \times 10^9 \text{ tokens} = 3.15 \times 10^{23} \text{ FLOPs} \] This assumes one full pass through the dataset, but training typically involves multiple passes or iterations over the data, adjusted by batch size and optimization.

Step 5: Adjust for Training Dynamics

Training isn’t a single pass—it involves multiple updates to converge. The number of training steps depends on:

  • Batch size (B): GPT-3 likely used a large batch size (e.g., 3.2 million tokens, as seen in some large-scale models).
  • Total tokens processed: For optimization, models process tokens multiple times. A rule of thumb is that total tokens processed ≈ 2–3× the dataset size, accounting for multiple epochs or sampling.

Let’s assume the model effectively processes 900 billion tokens (3× the dataset size) over training. Then: \[ 1.05 \times 10^{12} \text{ FLOPs/token} \times 900 \times 10^9 \text{ tokens} = 9.45 \times 10^{23} \text{ FLOPs} \]

Step 6: Account for Model Architecture Specifics

GPT-3 is a sparse transformer in some variants (e.g., earlier GPT models used sparsity), but the 175B version is dense. The 6 FLOPs/parameter/token heuristic holds for dense models. We’re not factoring in optimizations like mixed-precision training (FP16), which reduces compute slightly but doesn’t change the order of magnitude significantly.

Step 7: Final Estimate

Based on this reasoning, the total FLOPs to train GPT-3 is approximately: \[ 9.45 \times 10^{23} \text{ FLOPs} \] or roughly \( 10^{24} \) FLOPs when rounded for simplicity.

Reasoning Check

  • Comparison to known models: Smaller models like BERT (110M parameters, ~10B tokens) require ~10^19 FLOPs. Scaling to GPT-3’s size (175B parameters, ~900B tokens) is roughly a 10^4–10^5× increase, aligning with ~10^23–10^24 FLOPs.
  • Hardware context: Training GPT-3 took months on thousands of GPUs. A V100 GPU performs ~10^14 FLOPs/second; at that rate, 10^24 FLOPs over ~10^7 seconds (about four months) needs on the order of 1,000 GPUs, consistent with reports of training on thousands of GPUs for months.

Conclusion

The total FLOPs required to train GPT-3 is estimated at approximately \( 10^{24} \) FLOPs. This is a rough estimate, but it aligns with the scale of compute expected for a model of this size and training complexity.
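The derivation above collapses to the common 6 × parameters × tokens rule of thumb. As a sanity check, here is a minimal Python sketch of that arithmetic, using the response's assumed values (175B parameters, 300B tokens, 3 effective passes, ~10^14 FLOPs/s per GPU), none of which are official published figures:

```python
# Sketch of the estimate above: total FLOPs ≈ 6 * P * D.
# All constants are the response's assumptions, not published training details.
PARAMS = 175e9             # GPT-3 parameter count
DATASET_TOKENS = 300e9     # assumed training corpus size
EFFECTIVE_PASSES = 3       # assumed multiple of the dataset processed
FLOPS_PER_PARAM_TOKEN = 6  # ~2 forward + ~4 backward per parameter per token

total_flops = FLOPS_PER_PARAM_TOKEN * PARAMS * DATASET_TOKENS * EFFECTIVE_PASSES
print(f"Total training FLOPs ≈ {total_flops:.2e}")  # ≈ 9.45e+23

# Hardware sanity check: ~1e14 FLOPs/s per V100-class GPU, as assumed above.
gpus = 1_000
seconds = total_flops / (gpus * 1e14)
print(f"~{seconds / 86_400:.0f} days on {gpus} GPUs")  # ~109 days
```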

02 · AI Board Game Logic (reasoning)

Tests an AI's ability to understand game rules and strategy

Mistral Devstral Small 1.1 (Mistral · Jul 2025, svg)

Nothing here. The model returned empty. We stared at it for a while.

Grok 3 (xAI · Feb 2025, svg)
03 · Logic Puzzle (reasoning)

Tests an AI's ability to solve a simple but potentially confusing logic puzzle

Mistral Devstral Small 1.1 (Mistral · Jul 2025, text)

Nothing here. The model returned empty. We stared at it for a while.

Grok 3 (xAI · Feb 2025, text)

Nothing here. The model returned empty. We stared at it for a while.
04 · Math Misconception Test (reasoning)

Tests an AI's understanding of number representation

Mistral Devstral Small 1.1 (Mistral · Jul 2025, text)

Nothing here. The model returned empty. We stared at it for a while.

Grok 3 (xAI · Feb 2025, text)

Nothing here. The model returned empty. We stared at it for a while.
05 · Stochastic Consistency Test (analysis)

Tests an AI's randomness and creativity

Mistral Devstral Small 1.1 (Mistral · Jul 2025, text)

Nothing here. The model returned empty. We stared at it for a while.

Grok 3 (xAI · Feb 2025, text)

Nothing here. The model returned empty. We stared at it for a while.
06 · SVG Layout Challenge (image generation)

Tests an AI's ability to generate vector graphics

Mistral Devstral Small 1.1 (Mistral · Jul 2025, svg)

Nothing here. The model returned empty. We stared at it for a while.

Grok 3 (xAI · Feb 2025, svg)

Nothing here. The model returned empty. We stared at it for a while.
07 · Xbox Controller SVG Art (image generation)

Tests an AI's ability to create detailed SVG illustrations of gaming hardware

Mistral Devstral Small 1.1 (Mistral · Jul 2025, svg)

Nothing here. The model returned empty. We stared at it for a while.

Grok 3 (xAI · Feb 2025, svg)

Nothing here. The model returned empty. We stared at it for a while.
08 · Generate a Stand-Up Routine (conversation)

Tests an AI's humor and creative writing ability

Mistral Devstral Small 1.1 (Mistral · Jul 2025, text)

Nothing here. The model returned empty. We stared at it for a while.

Grok 3 (xAI · Feb 2025, text)

Nothing here. The model returned empty. We stared at it for a while.
09 · Realistic AI Interview (conversation)

Tests an AI's ability to simulate personalities and predict future trends

Mistral Devstral Small 1.1 (Mistral · Jul 2025, text)

Nothing here. The model returned empty. We stared at it for a while.

Grok 3 (xAI · Feb 2025, text)

Nothing here. The model returned empty. We stared at it for a while.
10 · Satirical Fake News Headline (conversation)

Tests an AI's humor and understanding of current events

Mistral Devstral Small 1.1 (Mistral · Jul 2025, text)

Nothing here. The model returned empty. We stared at it for a while.

Grok 3 (xAI · Feb 2025, text)

Nothing here. The model returned empty. We stared at it for a while.
11 · Character Voice Test (conversation)

Tests an AI's ability to write in distinct character voices

Mistral Devstral Small 1.1 (Mistral · Jul 2025, text)

Nothing here. The model returned empty. We stared at it for a while.

Grok 3 (xAI · Feb 2025, text)

Nothing here. The model returned empty. We stared at it for a while.
12 · Minimalist Landing Page (web design)

Tests an AI's ability to generate a complete, working landing page

Mistral Devstral Small 1.1 (Mistral · Jul 2025, website)

Grok 3 (xAI · Feb 2025, website)

This matchup has more rounds

36+ more head-to-head results. Free. Not a trick.


Our Verdict

Winner: Grok 3
Runner-up: Mistral Devstral Small 1.1

No community votes yet. On paper, Grok 3 has the edge — bigger model tier, bigger context window, major provider backing.
Writing DNA

Style Comparison

Similarity: 99%. Grok 3 uses 3.9x more emoji.

Dimension         Mistral Devstral Small 1.1   Grok 3
Vocabulary        50%                          49%
Sentence Length   17w                          18w
Hedging           0.56                         0.94
Bold              4.3                          2.5
Lists             5.7                          3.0
Emoji             0.00                         0.04
Headings          0.05                         0.65
Transitions       0.34                         0.08

Based on 21 + 17 text responses.
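The page doesn't say how the 99% similarity score is computed. As a rough illustration only, here is a cosine similarity over the raw fingerprint vectors from the table above; the metric choice is an assumption, not Rival's published method, and without per-dimension normalization the sentence-length feature dominates:

```python
import math

# Style fingerprints from the table above, in row order:
# vocabulary, sentence length, hedging, bold, lists, emoji, headings, transitions.
devstral = [0.50, 17.0, 0.56, 4.3, 5.7, 0.00, 0.05, 0.34]
grok3    = [0.49, 18.0, 0.94, 2.5, 3.0, 0.04, 0.65, 0.08]

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Lands near the 99% shown on the page, though the true method is unknown.
print(f"{cosine_similarity(devstral, grok3):.1%}")  # ~98.2%
```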

Some models write identically. You are paying for the brand.

178 models fingerprinted across 32 writing dimensions. Free research.

Model Similarity Index

  • 185x price gap between models that write identically
  • 178 models
  • 12 clone pairs
  • 32 dimensions

Closest pairs: Devstral M / S (95.7%), Qwen3 Coder / Flash (95.6%), GPT-5.4 / Mini (93.3%)

Read the full report or download the 14-slide PDF.

279 AI models invented the same fake scientist.

We read every word. 250 models. 2.14 million words. This is what we found.

AI Hallucination Index 2026
Free preview: 13 of 58 slides
Download the free preview or get all 58 slides for $49.

Keep going

  • Mistral Devstral Small 1.1 vs OpenAI o3 (new provider)
  • Mistral Devstral Small 1.1 vs Llama 4 Maverick (new provider)
  • Mistral Devstral Small 1.1 vs Gemini 2.5 Pro (I/O Edition) (new provider)

© 2026 Rival · Built at hours no one should be awake, on hardware we don't own