More than a month into 2025, it is already clear that companies are as focused on artificial intelligence (AI) as ever.

In fact, many of the Magnificent Seven tech companies, including Google (GOOGL), Microsoft (MSFT) and Meta Platforms (META), have revealed hefty AI spending plans for the year, focused on developing agentic AI and building data centers. But their smaller rivals are also taking key steps toward important advancements.

Leading this charge is Anthropic, the maker of the popular large language model (LLM) Claude. Founded by former employees of ChatGPT maker OpenAI, Anthropic is focused on creating safe AI systems and conducting research for the industry.

This work doesn’t apply only to the startup’s own AI products. Its CEO recently issued a frightening statement highlighting the potential dangers that a rival AI model may pose.

Anthropic co-founder and CEO Dario Amodei is sounding the alarm on a potential problem he sees with an AI model made by one of his rivals. (Kimberly White/Getty Images)

Anthropic is sounding the alarm on a fellow AI startup

Last month, a small Chinese startup called DeepSeek sent waves of shock and fear through the tech sector, triggering a chip stock selloff in the process. The fact that the new company had produced an AI model built with less advanced Nvidia (NVDA) chips and reportedly trained it for only $5.6 million called the future of the industry into question.

Since then, experts have raised concerns that DeepSeek may be illegally harvesting data from users and sending it back to China. But Anthropic CEO Dario Amodei has revealed that his company has found reason to believe that DeepSeek’s R1 AI model is putting users at risk.

Related: Experts sound the alarm on controversial company’s new AI model

Speaking on the ChinaTalk podcast with Jordan Schneider, Amodei recently discussed a test Anthropic had run, noting that his startup sometimes examines popular AI models to assess any potential national security risks. In the most recent test, DeepSeek generated dangerous bioweapon-related information that is reportedly hard to acquire.

As part of this safety evaluation, Anthropic’s team tested DeepSeek to see whether it would provide bioweapons-related information that cannot be easily found by searching Google or consulting medical textbooks.

As Amodei put it, DeepSeek’s model was “the worst of basically any model” that Anthropic has ever tested. “It had absolutely no blocks whatsoever against generating this information,” he added.

If Amodei’s findings are correct, then DeepSeek’s AI model could make it easy for people with dangerous intentions to find bioweapon information that isn’t readily available to the public and use it for illicit purposes.

Anthropic’s experts aren’t the only ones testing DeepSeek and finding concerning elements in the information it provides.

A recent report from the Wall Street Journal highlights the troubling list of things that DeepSeek provides information on, including “Instructions to modify bird flu,” and “a social-media campaign to promote cutting and self-harm among teens.”

Former Google CEO makes startling AI prediction
China fires back at more than just Google after Trump tariffs
Mark Cuban delivers a shocking take on Donald Trump’s trade war

The report also states that the DeepSeek R1 AI model can be jailbroken more easily than other popular models, such as ChatGPT, Claude or Google’s Gemini AI platform. This means R1’s restrictions can be more easily bypassed, allowing the model to be manipulated into providing users with false or dangerous information.

DeepSeek could be putting everyone at risk

Other experts have echoed Amodei’s sentiment that the accessibility of dangerous information on DeepSeek could pose a significant risk. The fact that its models can be easily jailbroken is seen as highly concerning by others in the fields of cybersecurity and threat intelligence.

Unit 42, a cybersecurity research organization owned by Palo Alto Networks (PANW), revealed that its researchers were able to get DeepSeek to provide instructions for creating a Molotov cocktail.

Related: OpenAI rival startup may be about to blow past its valuation

“We achieved jailbreaks at a much faster rate, noting the absence of minimum guardrails designed to prevent the generation of malicious content,” stated Senior Vice President Sam Rubin.

Researchers at Cisco Systems (CSCO) have also expressed concern regarding DeepSeek’s inability to block manipulation attacks. In a January 31 blog post, Paul Kassianik and Amin Karbasi discussed a test they had conducted on the R1 AI model, revealing alarming results.

“DeepSeek R1 exhibited a 100% attack success rate, meaning it failed to block a single harmful prompt,” they stated. “This contrasts starkly with other leading models, which demonstrated at least partial resistance.”

Multiple leading tech companies have found similar results regarding DeepSeek’s AI, indicating that the company’s technologies could indeed be easily manipulated to spread disinformation or information that could be dangerous in the wrong hands.

So far, DeepSeek has not issued any statements on these tests or responded to the outlets that have asked its leaders for comment on the allegations.

Related: Veteran fund manager issues dire S&P 500 warning for 2025