Stronger math, logic, and code generation now define DeepSeek’s R1-0528 model, released quietly on Hugging Face and already drawing praise from the AI community. The upgrade pushes DeepSeek’s open-source offering into direct competition with proprietary giants like OpenAI’s o3 and Google’s Gemini 2.5 Pro, signaling a rapid shift in the competitive landscape for advanced AI reasoning systems.

Major Performance Gains in Reasoning and Coding

DeepSeek’s R1-0528 model moves beyond its previous version with significant improvements in complex reasoning, mathematics, and programming tasks. Internal benchmarks show the model’s accuracy on the AIME 2025 math test jumping from 70% to 87.5%, while code generation accuracy on the LiveCodeBench dataset rises from 63.5% to 73.3%. On the demanding “Humanity’s Last Exam,” performance more than doubled, reaching 17.7% from 8.5%. These results demonstrate a tangible leap in the model’s ability to solve multi-step problems and tackle real-world coding challenges.

Users report that the new R1-0528 model can sustain longer, more in-depth reasoning chains, sometimes "thinking" for over 10 minutes on a single prompt. This deeper processing allows for more nuanced answers and more reliable solutions, especially in technical domains. AI researchers and developers have noted that R1-0528's coding output is cleaner and more robust, with working tests often generated on the first try, a capability previously associated mainly with leading proprietary models.

Open Source and Flexible Deployment

DeepSeek continues to release its models under the MIT license, supporting both academic and commercial use without restrictive terms. The full R1-0528 model, with 685 billion parameters, is available for download and API access. While its size makes local deployment challenging for most consumer hardware, DeepSeek has also released a distilled variant, DeepSeek-R1-0528-Qwen3-8B, which operates efficiently on a single high-end GPU. This smaller model outperforms comparably sized models on math benchmarks and nearly matches Microsoft’s latest Phi 4 reasoning model, making it accessible to a wider range of developers and researchers.
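For readers who want to try the distilled variant on their own hardware, a minimal serving sketch using vLLM's OpenAI-compatible server is shown below. The Hugging Face repo id, context length, and memory setting are assumptions to verify against the model card; exact requirements depend on your GPU.

```shell
# Install vLLM, then serve the distilled 8B model behind an
# OpenAI-compatible HTTP endpoint (defaults to http://localhost:8000/v1).
# Repo id and flags are assumptions -- check the model card before use.
pip install vllm
vllm serve deepseek-ai/DeepSeek-R1-0528-Qwen3-8B \
    --max-model-len 32768 \
    --gpu-memory-utilization 0.90
```

Once the server is up, any OpenAI-style client can point at the local endpoint, which makes it easy to swap between the hosted API and the distilled model during development.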

For those integrating the model into their applications, DeepSeek’s API pricing remains competitive, with automatic upgrades to the latest R1-0528 version for existing users. The update also introduces new features like JSON output and function calling, streamlining workflow integration for developers.
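As an illustration of the new function-calling feature, the sketch below builds a chat-completions payload in the OpenAI-compatible style that DeepSeek's API follows. The endpoint URL, the `deepseek-reasoner` model name, and the `get_weather` tool are assumptions for illustration; confirm field names against DeepSeek's current API documentation before sending real requests.

```python
import json

# Assumed OpenAI-compatible endpoint; verify against DeepSeek's docs.
API_URL = "https://api.deepseek.com/chat/completions"

def build_request(prompt: str) -> dict:
    """Build a chat-completions payload that declares one callable tool.

    The model may respond with a tool call (name + JSON arguments)
    instead of plain text, which the caller then executes and feeds back.
    """
    return {
        "model": "deepseek-reasoner",  # assumed model id for R1-0528
        "messages": [{"role": "user", "content": prompt}],
        # Declare a tool the model is allowed to call; get_weather
        # is a hypothetical function, not part of the API itself.
        "tools": [{
            "type": "function",
            "function": {
                "name": "get_weather",
                "description": "Look up current weather for a city",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }],
    }

payload = build_request("What's the weather in Hangzhou?")
print(json.dumps(payload, indent=2))
```

The same payload shape works for JSON-mode output: per DeepSeek's documentation, adding a `response_format` field of `{"type": "json_object"}` asks the model to emit strictly parseable JSON, which removes a common source of brittle string parsing in downstream code.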

Reduced Hallucination and Streamlined User Experience

One of the most persistent challenges in large language models is hallucination, the generation of incorrect or fabricated information. R1-0528 makes measurable progress here, with a lower hallucination rate and more consistent, trustworthy outputs. The update also removes the need for special tokens to activate "thinking" mode, simplifying deployment and reducing friction for developers building on top of the model.

Front-end improvements and new system prompt capabilities further refine the user experience, allowing for smoother, more efficient interactions whether accessed via API or DeepSeek’s web platform.

Global AI Race Intensifies

DeepSeek’s rapid progress comes amid heightened competition between US and Chinese AI labs. The R1 model’s low development cost and quick iteration cycle have already disrupted global tech markets, challenging assumptions about the resource requirements for state-of-the-art AI. As US export controls tighten on advanced chips and software, Chinese firms like DeepSeek, Tencent, and Alibaba continue to optimize their models for efficiency and performance under constrained hardware conditions.

With 75 million downloads and 38 million monthly active users as of April, DeepSeek is quickly establishing itself as a serious player in global AI. The R1-0528 release, praised for its reasoning and coding strength, positions the company as a credible challenger to the dominance of OpenAI and Google in the high-stakes race for advanced language intelligence.


DeepSeek’s latest R1 update shows how open-source innovation is accelerating, giving developers and researchers new tools that rival the best closed systems—while keeping the AI race more interesting than ever.