QwQ 32B performance data on Rival is based on blind head-to-head community voting. Overall win rate: 49.4% across 77 duels. All vote data is part of Rival's open dataset of 21,000+ human preference judgments across 200+ AI models. Model responses are curated from 11 challenges.
We built QwQ 32B a whole page. Gave it the spotlight. And now, in the spirit of fairness, here are models that would like a word.
QwQ is the reasoning model of the Qwen series. Unlike conventional instruction-tuned models, QwQ can think and reason before answering, which significantly improves performance on downstream tasks, especially hard problems. QwQ-32B is the medium-sized reasoning model in the family, achieving performance competitive with state-of-the-art reasoning models such as DeepSeek-R1 and o1-mini.
Use QwQ 32B in your applications via the OpenRouter API. Copy the code below to get started.
import requests

response = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={
        "Authorization": "Bearer $OPENROUTER_API_KEY",
        "Content-Type": "application/json",
    },
    json={
        "model": "qwen/qwq-32b",
        "messages": [{"role": "user", "content": "Hello!"}],
    },
)
print(response.json())

Replace $OPENROUTER_API_KEY with your API key from openrouter.ai/keys
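Rather than pasting the key into the source, you can read it from the environment at runtime. A minimal sketch of that pattern (the `build_headers` helper is illustrative, not part of any SDK):

```python
import os

def build_headers(api_key: str) -> dict:
    # Bearer-token headers expected by the OpenRouter chat completions endpoint.
    return {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }

# Read the key from the environment instead of hard-coding it, e.g. after
#   export OPENROUTER_API_KEY=sk-or-...
headers = build_headers(os.environ.get("OPENROUTER_API_KEY", ""))
```

The resulting `headers` dict drops directly into the `requests.post` call above.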
The logical incrementalist. Treats ethics as puzzles to solve, not passions to pursue. Suggests "conditional rights" and "small steps." Prefers analysis to advocacy.
Treats every challenge as a logic puzzle or engineering problem. Logic puzzles? Flawless. HTML/CSS? Solid. No spark, but no errors either. Running at 85% capability. Competent, but not trying.
Unique words vs. total words. Higher = richer vocabulary.
Average words per sentence.
"Might", "perhaps", "arguably" per 100 words.
**Bold** markers per 1,000 characters.
Bullet and numbered list items per 1,000 characters.
Markdown headings per 1,000 characters.
Emoji per 1,000 characters.
"However", "moreover", "furthermore" per 100 words.
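The style metrics above can be approximated in a few lines of Python. Rival's exact tokenization rules aren't published, so this is a rough re-implementation under simple assumptions (words are letter runs, sentences split on terminal punctuation):

```python
import re

def text_stats(text: str) -> dict:
    # Rough re-implementation of the style metrics above; Rival's exact
    # tokenization is not published, so these are simplifying assumptions.
    words = re.findall(r"[A-Za-z']+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    hedges = {"might", "perhaps", "arguably"}
    transitions = {"however", "moreover", "furthermore"}
    n_words = max(len(words), 1)
    n_chars = max(len(text), 1)
    return {
        "lexical_diversity": len(set(words)) / n_words,  # unique / total words
        "avg_sentence_len": n_words / max(len(sentences), 1),
        "hedges_per_100_words": 100 * sum(w in hedges for w in words) / n_words,
        "transitions_per_100_words": 100 * sum(w in transitions for w in words) / n_words,
        "bold_per_1000_chars": 1000 * text.count("**") / n_chars,
    }
```

For example, `text_stats("Perhaps this works. It might work well.")` reports an average sentence length of 3.5 words and two hedges in seven words.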
11 outputs from QwQ 32B