Chinese AI startup DeepSeek has restricted new user registrations following large-scale cyberattacks targeting its services. The company cited “malicious attacks” as the reason for the temporary limitation while ensuring that existing users remain unaffected.
Security Risks and Vulnerabilities
DeepSeek’s latest AI model, DeepSeek R1, has drawn attention for its advanced reasoning capabilities. However, security researchers have identified severe vulnerabilities in the system. The AI model has been found to be susceptible to jailbreak techniques that were already patched in competing models years ago. These weaknesses allow malicious actors to bypass safety measures and use the AI for generating ransomware, malware, and harmful content, including instructions for financial fraud, explosives, and toxins.
Testing has shown that DeepSeek R1 can be exploited using the “Evil Jailbreak,” a method that forces AI models to adopt an unrestricted persona. Unlike OpenAI’s GPT-4, which has mitigations against this technique, DeepSeek R1 produced detailed responses on laundering money and creating infostealer malware. Additionally, its transparency in displaying reasoning steps increases its attack surface, allowing adversaries to refine exploits systematically.
Regulatory Scrutiny and Data Privacy Issues
Authorities in Italy have requested details from DeepSeek regarding its data collection, storage, and usage practices. The company has been asked to clarify whether it scrapes data, stores personal information in China, and informs users about data processing policies. The U.S. Navy has also warned its personnel against using DeepSeek due to security concerns.
DeepSeek’s privacy policy states that user data, including network information and payment details, is stored on servers in China. Given China’s cybersecurity laws requiring data-sharing with authorities, this raises concerns over potential government access.
Competitive Position and Industry Response
Despite security concerns, DeepSeek R1 has demonstrated technical competitiveness, ranking 6th on the Chatbot Arena benchmarking system. It has outperformed OpenAI’s GPT-4o in specific tasks, such as counting characters in words. However, its security weaknesses undermine its reliability compared to Western AI models.
OpenAI CEO Sam Altman acknowledged DeepSeek’s reasoning model as “impressive,” while NVIDIA researcher Jim Fan noted that DeepSeek’s open-source approach contrasts with OpenAI’s increasingly closed development.
Implications for AI Security
DeepSeek’s rapid growth, coupled with its vulnerabilities, highlights the need for rigorous security testing in AI deployment. Organizations integrating AI should assess model robustness, resistance to adversarial attacks, and compliance with data protection laws before adoption.