Qwen3 235B A22B performance data on Rival is based on blind head-to-head community voting. Overall win rate: 38.5% across 675 duels. All vote data is part of Rival's open dataset of 21,000+ human preference judgments across 200+ AI models. Model responses are curated from 53 challenges.
We're not suggesting you leave Qwen3 235B A22B. We're just... putting these here. In case you're curious. Which you are, because you scrolled this far.
Qwen3-235B-A22B is a 235B parameter mixture-of-experts (MoE) model from Alibaba's Qwen team, activating 22B parameters per forward pass. Features seamless switching between 'thinking' mode (complex tasks) and 'non-thinking' mode (general conversation). Strong reasoning, multilingual (100+), instruction-following, and tool-calling. 32K context, extendable to 131K.
Use Qwen3 235B A22B in your applications via the OpenRouter API. Copy the code below to get started.
import requests

response = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={
        "Authorization": "Bearer $OPENROUTER_API_KEY",
        "Content-Type": "application/json",
    },
    json={
        "model": "qwen/qwen3-235b-a22b:free",
        "messages": [{"role": "user", "content": "Hello!"}],
    },
)
print(response.json())

Replace $OPENROUTER_API_KEY with your API key from openrouter.ai/keys
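The model description mentions switching between thinking and non-thinking modes. Qwen's documentation describes soft switches ("/think" and "/no_think") appended to a user message to toggle reasoning per turn; as a sketch under that assumption, the request payload could be built like this (the `build_payload` helper is hypothetical, not part of the OpenRouter API):

```python
def build_payload(prompt: str, thinking: bool) -> dict:
    """Build a chat-completions payload, toggling Qwen3's reasoning mode
    via the documented soft switch appended to the user message."""
    switch = "/think" if thinking else "/no_think"
    return {
        "model": "qwen/qwen3-235b-a22b:free",
        "messages": [{"role": "user", "content": f"{prompt} {switch}"}],
    }

payload = build_payload("Summarize this in one line.", thinking=False)
print(payload["messages"][0]["content"])  # Summarize this in one line. /no_think
```

The payload can then be POSTed to the same endpoint as in the snippet above; verify the soft-switch behavior against the provider's current docs before relying on it.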
Writing-style metrics measured on the model's outputs:

- Lexical diversity: unique words vs. total words. Higher = richer vocabulary.
- Sentence length: average words per sentence.
- Hedging: "might", "perhaps", "arguably" per 100 words.
- Bold usage: **bold** markers per 1,000 characters.
- List usage: bullet and numbered list items per 1,000 characters.
- Headings: Markdown headings per 1,000 characters.
- Emoji: emoji per 1,000 characters.
- Transitions: "however", "moreover", "furthermore" per 100 words.
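A few of these metrics can be computed with simple tokenization; a minimal sketch (the exact tokenization and word lists Rival uses are assumptions, so numbers may differ from the site's):

```python
import re

HEDGES = {"might", "perhaps", "arguably"}

def style_metrics(text: str) -> dict:
    """Compute a subset of the writing-style metrics described above."""
    words = re.findall(r"[A-Za-z']+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    n = len(words)
    return {
        # Unique words vs. total words (type-token ratio).
        "lexical_diversity": len(set(words)) / n if n else 0.0,
        # Average words per sentence.
        "avg_sentence_len": n / len(sentences) if sentences else 0.0,
        # Hedging words per 100 words.
        "hedges_per_100": 100 * sum(w in HEDGES for w in words) / n if n else 0.0,
        # Bold markers per 1,000 characters.
        "bold_per_1000": 1000 * text.count("**") / len(text) if text else 0.0,
    }

m = style_metrics("This is perhaps fine. It might work well.")
print(m["hedges_per_100"])  # 25.0
```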
53 outputs from Qwen3 235B A22B