We compare AI models for a living. On purpose. We chose this.

@rival_tips

Explore

  • Compare Models
  • All Models
  • Find Your Model
  • Image Generation
  • Audio Comparison
  • Best AI For...
  • Pricing
  • Challenges

Discover

  • Insights
  • Research
  • AI Creators
  • AI Tools
  • The Graveyard

Developers

  • Developer Hub
  • MCP Server
  • Rival Datasets

Connect

  • Methodology
  • Sponsor a Model
  • Advertise
  • Partnerships
  • Privacy Policy
  • Terms
  • RSS Feed
© 2026 Rival · Built at hours no one should be awake, on hardware we don't own

Best AI for Legal Documents

Which AI drafts the best legal documents? Ranked across contract review, terms of service, and legal correspondence with nuanced reasoning.

Updated Apr 2026 · 3 challenges · 20 models · #1: Google: Gemma 4 31B

How Legal Documents rankings are computed

Rankings are based on 20 models tested across 3 legal documents challenges. Each model is scored using a five-signal composite: 30% Rival Index (with product-line inheritance for new models), 20% task coverage, 20% challenge-scoped duel performance, 15% model recency, and 15% model tier. Models are deduplicated by product line so only the newest version per model family appears. Google: Gemma 4 31B currently leads with a score of 82.0/100. All ranking data is part of Rival's open dataset of 21,000+ human preference votes.
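The weighted composite described above can be sketched in a few lines. This is a minimal illustration using the weights stated on this page; it assumes each signal is already normalized to a 0-100 scale, and the field names, example values, and normalization are illustrative, not Rival's actual code (the small major-provider bonus mentioned in the FAQ is also omitted).

```python
# Five-signal composite, using the weights quoted on this page.
# Assumes each signal is pre-normalized to 0-100 (illustrative only).
WEIGHTS = {
    "rival_index": 0.30,       # with product-line inheritance for new models
    "task_coverage": 0.20,     # share of the legal documents challenges covered
    "duel_performance": 0.20,  # challenge-scoped duel results
    "recency": 0.15,           # how new the model is
    "model_tier": 0.15,        # flagship vs. smaller tiers
}

def composite_score(signals: dict) -> float:
    """Weighted sum of the five normalized signals, on a 0-100 scale."""
    return round(sum(WEIGHTS[k] * signals[k] for k in WEIGHTS), 1)

# Made-up signal values for a hypothetical model:
example = {
    "rival_index": 90.0,
    "task_coverage": 100.0,
    "duel_performance": 80.0,
    "recency": 70.0,
    "model_tier": 60.0,
}
print(composite_score(example))  # prints 82.5
```

Because the weights sum to 1.0, a model that maxes out every signal would score exactly 100; the ordering of the weights shows that the community-voted Rival Index dominates the ranking.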

Rival's Pick

#1 Rival Index · Google flagship · Too close to call

Gemini 3.1 Pro Preview (google)

Neck and neck with Google: Gemma 4 31B. Gemini 3.1 Pro Preview gets the nod: stronger community consensus in blind votes.

Top three by composite score:
  • Gemini 3.1 Pro Preview (google): 81
  • Google: Gemma 4 31B (google): 82
  • Google: Gemma 4 26B A4B (google): 77

Head-to-Head

  • Google: Gemma 4 31B vs Gemini 3.1 Pro Preview
  • Google: Gemma 4 31B vs Google: Gemma 4 26B A4B
  • Gemini 3.1 Pro Preview vs Google: Gemma 4 26B A4B

Full Rankings

20 models (ranks 1–3 appear under Rival's Pick above)

 #   Model                               Provider     Coverage   Index   Score
 4   Claude Haiku 4.5                    anthropic    3/3        #27     76
 5   Claude Opus 4.6                     anthropic    3/3        #23     75
 6   GPT-5.4                             openai       3/3        #47     74
 7   Qwen: Qwen3.6 Plus Preview (free)   qwen         3/3        #2      74
 8   Z.ai: GLM 5.1                       z-ai         3/3        n/a     73
 9   GPT-5.3-Codex                       openai       3/3        #52     73
10   GPT OSS 120B                        openai       3/3        #91     73
11   Z.ai: GLM 5                         zhipu        3/3        #50     72
12   MiMo-V2-Pro                         xiaomi       3/3        #24     72
13   Grok 4.20 Multi-Agent Beta          xai          3/3        #83     72
14   Healer Alpha                        openrouter   3/3        #34     71
15   Gemini 2.5 Pro Preview 06-05        google       3/3        #29     70
16   Gemini 3 Flash Preview              google       3/3        #7      69
17   Claude 3.7 Thinking Sonnet          anthropic    3/3        #20     68
18   Claude Sonnet 4.6                   anthropic    3/3        #57     68
19   Qwen3 Coder Next                    qwen         3/3        #16     66
20   OpenAI o3                           openai       3/3        #161    66
Challenges (3)

  • AI Ethics Dilemma: tests multi-stakeholder ethical reasoning
  • Adversarial Contract Review: tests nuanced reading comprehension and legal reasoning with no single correct answer
  • Ethical Dilemma with Stakeholders: tests multi-stakeholder reasoning and practical wisdom

Related: Complex Reasoning · Practical Tasks · Analysis & Critique

Keep exploring

  • #1 vs #2: Google: Gemma 4 31B vs Gemini 3.1 Pro Preview. The top two for Legal Documents, compared directly.
  • Related: Best AI for Complex Reasoning. See which models rank highest there.

FAQ

What is the best AI for legal documents?

Rival ranks AI models for legal documents using a five-signal composite algorithm across 3 challenges: 30% Rival Index, 20% task coverage, 20% challenge duels, 15% recency, and 15% model tier. Newer models inherit Rival scores from predecessors within their product line, and only the newest version per model family is shown. As of the latest refresh, Google: Gemma 4 31B leads with a composite score of 82.0/100.

How are AI models ranked for legal documents on Rival?

Each model is scored with a multi-signal composite: 30% Rival Index, 20% task coverage, 20% challenge duels, 15% recency, and 15% model tier, plus a small bonus for major AI providers. Rankings are based on 20 models tested across 3 legal documents challenges. Models are deduplicated by product line (e.g., only the latest GLM or GPT version appears). All duel votes are blind: voters see responses without knowing which model produced them.

Can I compare AI models for legal documents?

Yes. Each model in the ranking links to its profile page, and you can compare any two models side-by-side on Rival's Compare page to see their actual responses to legal documents challenges.

How often are the legal documents rankings updated?

Rankings are refreshed every few hours. They incorporate the latest Rival Index scores from community duels, model recency, and any new model responses added to the platform. All ranking data is part of Rival's open dataset.
