Alibaba unveiled Qwen3-Coder, a next-generation AI coding model, on July 23, 2025, further strengthening its position in the global AI development and developer-tooling ecosystem.
The model is built on a Mixture-of-Experts (MoE) architecture with a total of 480 billion parameters, of which 35 billion are activated per token, and is optimized for code generation, debugging, and complex workflow management. Qwen3-Coder posts strong benchmark results, outperforming several leading AI coding assistants on tasks such as natural-language-to-code conversion, bug detection, and multi-language code translation. It also supports a context length of up to 1 million tokens, giving it robust capabilities for large-scale projects and long-context reasoning.
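The MoE design described above means that only a small slice of the model's 480 billion parameters does work on any given token: a gating network scores a pool of experts and routes the token to the top-k of them. The sketch below illustrates that routing idea in miniature; the expert count, k value, and random logits are purely illustrative assumptions, not Qwen3-Coder's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

NUM_EXPERTS = 8   # illustrative only; not Qwen3-Coder's real expert count
TOP_K = 2         # in an MoE layer, only a few experts fire per token

def route(token_logits: np.ndarray, k: int = TOP_K):
    """Pick the top-k experts for one token and softmax their gate scores."""
    top = np.argsort(token_logits)[-k:][::-1]           # k highest-scoring experts
    scores = token_logits[top] - token_logits[top].max()
    weights = np.exp(scores) / np.exp(scores).sum()     # renormalized gate weights
    return top, weights

# Stand-in for a learned gating network's output for one token.
logits = rng.normal(size=NUM_EXPERTS)
experts, weights = route(logits)

# The token's output would be the weighted sum of just these experts' outputs,
# so only TOP_K / NUM_EXPERTS of the expert parameters are active per token.
print(f"active experts: {experts}, gate weights: {weights}")
```

This is how a 480B-parameter model can run with only 35B parameters active per token: the inactive experts contribute nothing to that token's forward pass.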
Key Highlights
In tandem, Alibaba open-sourced Qwen Code, a command-line interface that pairs with Qwen3-Coder for natural-language software engineering workflows. It also integrates with third-party tools such as the Claude Code interface, broadening its usability for developers.
The model is trained on diverse datasets spanning both programming languages and natural text, and uses reinforcement learning from human feedback (RLHF) for improved task alignment. Qwen3-Coder and its associated tools are now available on Hugging Face, GitHub, and Alibaba's Model Studio. Notably, Qwen-based models have already crossed 20 million downloads, underscoring strong developer adoption.
Alibaba also plans to bring Qwen3-Coder's capabilities to its AI-powered IDE assistant Tongyi Lingma, adding multi-language coding, advanced auto-complete, and intelligent code review.