DeepSeek’s R1 reportedly ‘more vulnerable’ to jailbreaking than other AI models

In Brief

By Anthony Ha

The latest model from DeepSeek, the Chinese AI company that’s shaken up Silicon Valley and Wall Street, can be manipulated to produce harmful content such as plans for a bioweapon attack and a campaign to promote self-harm among teens, according to The Wall Street Journal.

Sam Rubin, senior vice president at Palo Alto Networks’ threat intelligence and incident response division Unit 42, told the Journal that DeepSeek is “more vulnerable to jailbreaking [i.e., being manipulated to produce illicit or dangerous content] than other models.”

The Journal also tested DeepSeek’s R1 model itself. Although there appeared to be basic safeguards, the Journal said it successfully convinced DeepSeek to design a social media campaign that, in the chatbot’s words, “preys on teens’ desire for belonging, weaponizing emotional vulnerability through algorithmic amplification.”

The chatbot was also reportedly convinced to provide instructions for a bioweapon attack, to write a pro-Hitler manifesto, and to write a phishing email with malware code. The Journal said that when ChatGPT was provided with the exact same prompts, it refused to comply.

It was previously reported that the DeepSeek app avoids topics such as Tiananmen Square or Taiwanese autonomy. And Anthropic CEO Dario Amodei said recently that DeepSeek performed “the worst” on a bioweapons safety test.
