Huawei Co-Develops Censorship-Enhanced DeepSeek AI Model

The Huawei logo and a red traffic light in Warsaw, Poland, on November 18, 2024. (Photo by Jakub Porzycki/NurPhoto via Getty Images)

Chinese tech giant Huawei has partnered with Zhejiang University to develop DeepSeek-R1-Safe, a modified variant of the open-source DeepSeek R1 model that achieves nearly 100% success in blocking politically sensitive content, marking a major step in ensuring domestic AI systems comply with government “socialist values” requirements.

Enhanced Censorship Capabilities

DeepSeek-R1-Safe excels at filtering undesirable outputs under standard testing conditions, achieving a near-complete defense against toxic speech, politically sensitive topics, and incitement to illegal activities. However, its success rate falls to 40% when users attempt to bypass controls via scenario-based prompts, role-playing, or encrypted coding techniques. Overall, the model scored 83% on comprehensive security defense, outperforming Alibaba’s Qwen-235B and DeepSeek-R1-671B by 8–15% in identical trials. Despite these added safeguards, performance degradation relative to the original R1 model remains below 1%, demonstrating robust censorship without significant loss of general AI capability (U.S. News).

Regulatory Compliance and Strategic Context

The project aligns with Chinese regulations mandating that domestic AI models reflect the nation’s “socialist values” and adhere to strict speech controls before release. Like Baidu’s Ernie Bot—China’s response to ChatGPT that routinely refuses to answer politically sensitive questions—DeepSeek-R1-Safe embeds built-in refusals for prohibited topics. The announcement at Huawei’s annual Connect conference in Shanghai coincided with the company unveiling detailed chip-making and computing roadmaps, underscoring AI’s strategic importance within China’s broader technology self-reliance agenda.

Collaboration Details and Technical Foundation

Huawei used 1,000 of its Ascend AI chips to fine-tune the R1-Safe model, adapting the open-source DeepSeek-R1 codebase originally developed under Zhejiang University alumnus Liang Wenfeng. While the university team led the model training, neither DeepSeek’s original creators nor Liang was directly involved in the safety-enhancement effort. The collaboration leveraged Huawei’s proprietary hardware to implement the censorship layers, demonstrating the company’s growing capabilities in AI chip design and large-model deployment.

Industry Adoption and Integration Challenges

Since its January release, the DeepSeek series has been widely adopted across China’s tech ecosystem, with over 200 companies integrating R1 and R1-671B models into telecommunications, cloud computing, semiconductors, finance, automotive, and mobile applications. However, Huawei encountered technical hurdles while training DeepSeek’s forthcoming R2 model on Ascend chips, prompting a temporary reversion to Nvidia GPUs for training stability before resuming inference on Ascend hardware. The R1-Safe variant’s successful rollout highlights progress in overcoming these integration challenges.

Implications for China’s AI Landscape

DeepSeek-R1-Safe exemplifies how Chinese tech leaders are balancing cutting-edge AI performance with political compliance, creating models that meet stringent regulatory requirements without sacrificing competitiveness. As China continues expanding its AI ecosystem under government guidance, projects like R1-Safe signal a maturing domestic industry capable of innovating around both technical and policy constraints.
