DeepSeek R1 performance data on Rival is based on blind head-to-head community voting. Overall win rate: 40.4% across 423 duels. All vote data is part of Rival's open dataset of 21,000+ human preference judgments across 200+ AI models. Model responses are curated from 54 challenges.
We built DeepSeek R1 a whole page. Gave it the spotlight. And now, in the spirit of fairness, here are models that would like a word.
DeepSeek R1 is a reasoning model developed entirely via reinforcement learning, offering cost efficiency at $0.14/million tokens vs. OpenAI o1's $15, with strong code generation and analysis capabilities.
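The pricing gap is easier to feel with a quick back-of-the-envelope calculation, using the per-million-token rates quoted above (illustrative arithmetic only; real bills vary by provider, tier, and the input/output token split):

```python
# Cost of generating 10 million tokens at the per-million-token rates
# quoted above. Illustrative arithmetic, not a pricing reference.
deepseek_r1_rate = 0.14   # USD per million tokens
openai_o1_rate = 15.00    # USD per million tokens

tokens_millions = 10
deepseek_cost = deepseek_r1_rate * tokens_millions   # $1.40
o1_cost = openai_o1_rate * tokens_millions           # $150.00

print(f"DeepSeek R1: ${deepseek_cost:.2f}  |  o1: ${o1_cost:.2f}  |  "
      f"ratio: {o1_cost / deepseek_cost:.0f}x")
```

At these rates the same 10M-token workload costs about $1.40 versus $150, roughly a 107x difference.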
Use DeepSeek R1 in your applications via the OpenRouter API. Copy the code below to get started.
import requests

response = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={
        "Authorization": "Bearer $OPENROUTER_API_KEY",
        "Content-Type": "application/json",
    },
    json={
        "model": "deepseek/deepseek-r1",
        "messages": [{"role": "user", "content": "Hello!"}],
    },
)
print(response.json())

Replace $OPENROUTER_API_KEY with your API key from openrouter.ai/keys.
The pragmatic utilitarian with caveats. Acknowledges utilitarian calculus but explores the tensions between frameworks. Doesn't reject hard questions; it dissects them methodically.
Doesn't shy away from ethical dilemmas but carefully dissects them. Takes time to explore multiple perspectives before settling on an answer. The conclusion is often "it depends" or "this is deeply complex."
Unique words vs. total words. Higher = richer vocabulary.
Average words per sentence.
"Might", "perhaps", "arguably" per 100 words.
**Bold** markers per 1,000 characters.
Bullet and numbered list items per 1,000 characters.
Markdown headings per 1,000 characters.
Emoji per 1,000 characters.
"However", "moreover", "furthermore" per 100 words.
54 outputs from DeepSeek R1