
Experimental Release Before National Day
Chinese AI start-up DeepSeek has announced the release of its V3.2-Exp model, calling it an experimental version of its flagship V3 foundation model. The launch came just ahead of China’s National Day holiday, marking yet another rapid move in the company’s product roadmap.
The model is now open-sourced on developer platforms Hugging Face and ModelScope. Users can also access it through DeepSeek’s website and app. According to the company, V3.2-Exp improves both training and inference efficiency while cutting API costs by more than 50 percent compared to earlier versions.
A Step After V3.1-Terminus
The new launch closely follows V3.1-Terminus, which debuted only last week. DeepSeek introduced the original V3 model last December and followed it with V3.1 in July. By releasing multiple updates in quick succession, the company has signaled an aggressive push to refine performance and prepare for the next generation of large AI systems.
AI experts note that such incremental launches keep developers engaged while introducing new features step by step.
Sparse Attention for Greater Efficiency
V3.2-Exp incorporates a new DeepSeek Sparse Attention mechanism. The technique lets the model handle longer inputs more efficiently by reducing the computational cost of attention during both training and inference. Sparse attention helps large language models process long contexts faster while keeping costs lower, in line with industry demand for efficiency.
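The article does not describe the internals of DeepSeek's mechanism, but the general idea behind sparse attention, letting each query attend to only a small subset of keys rather than the full sequence, can be sketched in a few lines. The snippet below is an illustrative top-k variant in NumPy; the function name and the `k` parameter are assumptions for the example, not DeepSeek's actual design.

```python
import numpy as np

def topk_sparse_attention(q, K, V, k=4):
    """Toy single-query sparse attention: attend only to the k
    highest-scoring keys instead of all seq_len keys.
    Illustrative sketch only, not DeepSeek's DSA implementation."""
    scores = K @ q / np.sqrt(q.shape[-1])     # similarity to every key, shape (seq_len,)
    idx = np.argpartition(scores, -k)[-k:]    # indices of the top-k keys
    sel = scores[idx]
    weights = np.exp(sel - sel.max())         # numerically stable softmax
    weights /= weights.sum()                  # ...over the selected keys only
    return weights @ V[idx]                   # mix only k value vectors, shape (d,)
```

Because the softmax and the value mixing run over `k` entries instead of the full sequence length, the per-query cost drops from O(seq_len) to O(k) once the scores are computed, which is the efficiency gain sparse-attention schemes aim for on long inputs.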
DeepSeek stated that V3.2-Exp’s performance matches V3.1-Terminus but is significantly cheaper to deploy. This makes the experimental version attractive both for enterprise usage and academic research.
Benchmark Results and Global Standing
AI benchmarking platform Artificial Analysis recently ranked DeepSeek’s V3.1-Terminus alongside OpenAI’s gpt-oss-120b as the strongest open-source models available worldwide. The analysis also placed DeepSeek slightly ahead of Alibaba Cloud’s Qwen3-235B-2507, making it China’s top AI model today.
This standing reflects the growing strength of Chinese companies in advanced AI development and their readiness to challenge established players in the US.
Industry Race Towards Efficiency
Alibaba Cloud also recently introduced its Qwen3-Next AI model architecture, focusing on smaller, more efficient versions with improved deployment performance. Like DeepSeek, Alibaba is exploring how to balance high performance with reduced costs.
The rivalry between these firms highlights the wider AI race in China, where companies are shipping updates faster than ever before.
Preparing for Next-generation AI
While the market speculated that DeepSeek might release a major flagship update during the National Day holiday, such as V4 or R2, the company instead clarified that V3.2-Exp serves as an “intermediate step” toward next-generation models.
Researchers believe V4 may appear next year, while R2 could launch around the Lunar New Year. Until then, DeepSeek is experimenting with architectures that prepare its AI systems for more powerful agentic capabilities.
Towards the Agent Era
DeepSeek previously outlined plans to enhance the agentic features of its base models. These AI agents are designed to complete tasks autonomously on behalf of users, but current model limitations, particularly limited context windows, remain an obstacle to full autonomy.
By introducing advancements like sparse attention, DeepSeek aims to overcome such barriers and build the foundation for future agent-based AI.
Global Attention Remains High
Ever since the company launched its R1 reasoning model in January, DeepSeek has drawn attention from both domestic and global AI observers. The release generated excitement across the tech community and forced competitors to speed up their own launch cycles.
Experts and researchers are now watching closely how DeepSeek plans its next generation of AI chips and how it will integrate them into future systems.