MiniMax M1

MiniMax M1 is a large-scale, open-weight reasoning model designed for extended context and high-efficiency inference. It pairs a hybrid Mixture-of-Experts (MoE) architecture with a custom "lightning attention" mechanism, allowing it to process sequences of up to 1 million tokens while maintaining competitive FLOP efficiency. With 456 billion total parameters and 45.9B active per token, the model is optimized for complex, multi-step reasoning tasks.
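The MoE figures above imply that only a small fraction of the weights participate in any single forward pass. A minimal sketch of that arithmetic, using only the parameter counts stated on this card (the variable names are illustrative, not part of any MiniMax API):

```python
# Parameter counts from the model card: 456B total, 45.9B active per token.
total_params = 456e9
active_params = 45.9e9

# Fraction of weights exercised per token in the MoE forward pass.
active_share = active_params / total_params
print(f"{active_share:.1%}")  # → 10.1%
```

In other words, roughly one tenth of the network is active per token, which is where the FLOP savings of the MoE design come from relative to a dense model of the same total size.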

Conversation, Reasoning, Code Generation, Analysis, Agentic Tool Use, Memory
Provider
MiniMax
Release Date
2025-06-17
Size
XLARGE
Parameters
456B (45.9B active)

Benchmark Performance

Performance on industry-standard AI benchmarks that measure capabilities across reasoning, knowledge, and specialized tasks.

FullStackBench

Strong

SWE-bench

Competitive

MATH

Competitive

GPQA

Competitive

TAU-Bench

Competitive
