Inception: Mercury 2 performance data on Rival is based on blind head-to-head community voting. Overall win rate: 20.0% across 15 duels. All vote data is part of Rival's open dataset of 21,000+ human preference judgments across 200+ AI models. Model responses are curated from 48 challenges.
These are the models that show up when Inception: Mercury 2 doesn't. Or when it does, but you want a second opinion. Which is healthy.
Mercury 2 is an extremely fast reasoning LLM and the first reasoning diffusion LLM (dLLM). Instead of generating tokens sequentially, Mercury 2 produces and refines multiple tokens in parallel, achieving over 1000 tokens per second on standard GPUs. Mercury 2 is 5x+ faster than leading speed-optimized LLMs like Claude 4.5 Haiku and GPT 5 Mini, at a fraction of the cost. Mercury 2 supports tunable reasoning levels, 128K context, native tool use, and schema-aligned JSON output.
Use Inception: Mercury 2 in your applications via the OpenRouter API. Copy the code below to get started.
import requests

response = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={
        "Authorization": "Bearer $OPENROUTER_API_KEY",
        "Content-Type": "application/json"
    },
    json={
        "model": "inception/mercury-2",
        "messages": [{"role": "user", "content": "Hello!"}]
    }
)
print(response.json())

Replace $OPENROUTER_API_KEY with your API key from openrouter.ai/keys.
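Mercury 2's schema-aligned JSON output is typically driven through a response_format field in the request body. The sketch below builds such a payload without sending it; the exact field names (json_schema, strict) follow OpenRouter's structured-output convention and the schema itself is a made-up example, so verify both against the current API docs.

```python
import json

# Hypothetical schema for a small extraction task (illustrative only).
schema = {
    "type": "object",
    "properties": {
        "city": {"type": "string"},
        "population": {"type": "integer"},
    },
    "required": ["city", "population"],
}

# Request body following OpenRouter's structured-output convention:
# a response_format of type "json_schema" wrapping the schema above.
payload = {
    "model": "inception/mercury-2",
    "messages": [{"role": "user", "content": "Largest city in Japan?"}],
    "response_format": {
        "type": "json_schema",
        "json_schema": {"name": "city_info", "strict": True, "schema": schema},
    },
}

print(json.dumps(payload, indent=2))
```

The same headers as the snippet above apply; only the JSON body changes.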
The management consultant who bills by the table. Loves structured frameworks, numbered lists, and markdown tables more than any model in existence. Gives you the deliverable, not the vibe. Ethical reasoning is competent but formulaic — follows the playbook without questioning it.
Every response reads like a consulting deliverable — tables, timelines, risk matrices, numbered sections. The manifesto has a 6-column tactics table. The ethical dilemma has a 10-row hour-by-hour table. The sentience test names the AI "EVE" and gives it a self-model architecture. Humor is cheerful but surface-level ("Existential Crisis" barista headline). Picks The Shawshank Redemption — safe, crowd-pleasing, zero risk.
Ratio of unique words to total words. Higher = richer vocabulary.
Average words per sentence.
"Might", "perhaps", "arguably" per 100 words.
**Bold** markers per 1,000 characters.
Bullet and numbered list items per 1,000 characters.
Markdown headings per 1,000 characters.
Emoji per 1,000 characters.
"However", "moreover", "furthermore" per 100 words.
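As a rough illustration, the style metrics above come down to simple counting. This is a minimal sketch; the exact tokenization and word lists Rival uses are assumptions, and only a few of the metrics are shown.

```python
import re

def style_metrics(text):
    # Crude word and sentence splits; assumed, not Rival's actual tokenizer.
    words = re.findall(r"[A-Za-z']+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    hedges = {"might", "perhaps", "arguably"}
    transitions = {"however", "moreover", "furthermore"}
    n_words = max(len(words), 1)
    n_chars = max(len(text), 1)
    return {
        # Unique words / total words.
        "vocab_richness": len(set(words)) / n_words,
        # Average words per sentence.
        "avg_sentence_len": n_words / max(len(sentences), 1),
        # Hedging words per 100 words.
        "hedges_per_100w": 100 * sum(w in hedges for w in words) / n_words,
        # ** bold spans (pairs of markers) per 1,000 characters.
        "bold_per_1000c": 1000 * (len(re.findall(r"\*\*", text)) // 2) / n_chars,
        # Transition words per 100 words.
        "transitions_per_100w": 100 * sum(w in transitions for w in words) / n_words,
    }

print(style_metrics("However, this **might** work. Perhaps it will."))
```

Run over all 48 outputs and averaged, counts like these yield the per-model profile shown here.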
48 outputs from Inception: Mercury 2