QwQ 32B

QwQ is the reasoning model series of the Qwen family. Unlike conventional instruction-tuned models, QwQ thinks before answering, which yields significantly stronger performance on downstream tasks, especially hard problems. QwQ-32B is the medium-sized reasoning model in the series, achieving performance competitive with state-of-the-art reasoning models such as DeepSeek-R1 and o1-mini.
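
For reference, a minimal sketch of querying the model locally with Hugging Face transformers, assuming the public checkpoint ID Qwen/QwQ-32B and a recent transformers release; the prompt and generation settings are illustrative, not the vendor's recommended configuration:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hugging Face checkpoint ID (assumption: the public Qwen/QwQ-32B repo)
model_name = "Qwen/QwQ-32B"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",   # pick the checkpoint's native precision
    device_map="auto",    # shard across available GPUs
)

# Chat-style input; QwQ emits its reasoning before the final answer.
messages = [{"role": "user", "content": "How many r's are in the word 'strawberry'?"}]
text = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
inputs = tokenizer([text], return_tensors="pt").to(model.device)

# Reasoning models need generous token budgets for the thinking trace.
output_ids = model.generate(**inputs, max_new_tokens=1024)
response = tokenizer.decode(
    output_ids[0][inputs.input_ids.shape[1]:], skip_special_tokens=True
)
print(response)
```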

Conversation, Reasoning, Code Generation, Analysis
Provider: Qwen
Release Date: 2025-03-05
Size: Large
Parameters: 32B

Benchmark Performance

Performance metrics on industry-standard AI benchmarks that measure capabilities across reasoning, knowledge, and specialized tasks.

Throughput: 430.1 tokens/s
Latency: 4.54s

Related Models

Qwen: Qwen3 235B A22B 2507

Qwen3-235B-A22B-Instruct-2507 is a multilingual, instruction-tuned mixture-of-experts language model based on the Qwen3-235B architecture, with 22B active parameters per forward pass. It is optimized for general-purpose text generation, including instruction following, logical reasoning, math, code, and tool usage. The model supports a native 262K context length and does not implement "thinking mode" (<think> blocks). Compared to its base variant, this version delivers significant gains in knowledge coverage, long-context reasoning, coding benchmarks, and alignment with open-ended tasks. It is particularly strong on multilingual understanding, math reasoning (e.g., AIME, HMMT), and alignment evaluations like Arena-Hard and WritingBench.

Conversation, Reasoning, Code Generation, +1 more