Qwen

Alibaba's AI model series featuring advanced multilingual capabilities and strong reasoning abilities.


Key Highlights

Developed by Alibaba Cloud.

Offers a range of models, from small variants to 200B+ parameters, many released with open weights.

Strong performance, especially in coding and multilingual benchmarks.

Features multimodal capabilities and MoE architecture in newer versions.

Tightly integrated with Alibaba's cloud ecosystem and ModelScope community.

Qwen3 Next 80B A3B Thinking

Sep 2025

Qwen3-Next-80B-A3B-Thinking is a reasoning-first chat model in the Qwen3-Next line that outputs structured thinking traces by default. It is designed for hard multi-step problems (math proofs, code synthesis and debugging, logic, and agentic planning) and reports strong results across knowledge, reasoning, coding, alignment, and multilingual evaluations. Compared with prior Qwen3 variants, it emphasizes stability under long chains of thought and efficient scaling during inference, and it is tuned to follow complex instructions while reducing repetitive or off-task behavior. The model suits agent frameworks and tool use (function calling), retrieval-heavy workflows, and standardized benchmarking where step-by-step solutions are required. It supports long, detailed completions and leverages throughput-oriented techniques for faster generation. Note that it operates in thinking-only mode; a usage sketch follows below.

conversation · reasoning · code-generation · analysis
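
The sketch below queries a thinking model through an OpenAI-compatible endpoint and separates the reasoning trace from the final answer. The base URL, API key, and model ID are illustrative placeholders, and the inline `<think>...</think>` handling is an assumption: some servers instead strip the trace into a separate response field.

```python
# Minimal sketch: calling a Qwen3-Next thinking model via an
# OpenAI-compatible endpoint. URL and model ID are placeholders;
# substitute whatever your provider exposes.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

resp = client.chat.completions.create(
    model="Qwen/Qwen3-Next-80B-A3B-Thinking",
    messages=[{"role": "user", "content": "Prove that sqrt(2) is irrational."}],
    max_tokens=4096,
)

text = resp.choices[0].message.content
# The model reasons before answering. Assumption: the trace arrives
# inline, delimited by </think>; otherwise print the reply as-is.
if "</think>" in text:
    thinking, answer = text.split("</think>", 1)
    print("reasoning trace:", thinking.replace("<think>", "").strip())
    print("final answer:", answer.strip())
else:
    print(text)
```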

Qwen3 30B A3B Thinking 2507

Aug 2025

Qwen3-30B-A3B-Thinking-2507 is a Mixture-of-Experts reasoning model with roughly 30B total parameters and about 3B activated per token, optimized for complex tasks that require extended multi-step thinking. The model is designed specifically for 'thinking mode', in which internal reasoning traces are separated from final answers. Compared to earlier Qwen3-30B releases, this version improves performance across logical reasoning, mathematics, science, coding, and multilingual benchmarks. It also demonstrates stronger instruction following, tool use, and alignment with human preferences. With higher reasoning efficiency and extended output budgets, it is best suited for advanced research, competitive problem solving, and agentic applications requiring structured long-context reasoning; a local-inference sketch follows below.

conversation · reasoning · code-generation · analysis
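
Because the weights are open, the model can also be run locally. The following is a minimal sketch using Hugging Face transformers, assuming the checkpoint is published under the usual Qwen naming; adjust dtype and device mapping to your hardware.

```python
# Sketch: running Qwen3-30B-A3B-Thinking-2507 locally with
# Hugging Face transformers. Checkpoint name is assumed to follow
# the standard Qwen Hub naming.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen3-30B-A3B-Thinking-2507"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

messages = [{"role": "user", "content": "How many primes are below 100?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Thinking models need a generous output budget: the reasoning trace
# is emitted before the final answer.
out = model.generate(inputs, max_new_tokens=8192)
print(tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```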

Qwen3 235B A22B Instruct 2507

Jul 2025

Qwen3-235B-A22B-Instruct-2507 is a multilingual, instruction-tuned Mixture-of-Experts language model based on the Qwen3-235B architecture, with 22B active parameters per forward pass. It is optimized for general-purpose text generation, including instruction following, logical reasoning, math, code, and tool usage. The model supports a native 262K context length and does not implement "thinking mode" (<think> blocks). Compared to its base variant, this version delivers significant gains in knowledge coverage, long-context reasoning, coding benchmarks, and alignment on open-ended tasks. It is particularly strong in multilingual understanding, math reasoning (e.g., AIME, HMMT), and alignment evaluations such as Arena-Hard and WritingBench; a long-context usage sketch follows below.

conversation · reasoning · code-generation · analysis
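
Since this variant has no thinking mode, replies can be used directly with no trace parsing. Below is a minimal long-context sketch against an OpenAI-compatible endpoint; the base URL, model ID, and input file are placeholders.

```python
# Sketch: long-context summarization with the instruct (non-thinking)
# variant. No <think> handling is needed; the reply is the answer.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

with open("report.txt") as f:  # any document that fits in the 262K window
    document = f.read()

resp = client.chat.completions.create(
    model="Qwen/Qwen3-235B-A22B-Instruct-2507",
    messages=[
        {"role": "system", "content": "You summarize documents faithfully."},
        {"role": "user", "content": f"Summarize the key findings:\n\n{document}"},
    ],
    temperature=0.7,
)
print(resp.choices[0].message.content)
```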