Anthropic announces updates on security safeguards for its AI models

In this photo illustration, the Claude AI logo is seen on a smartphone and the Anthropic logo on a PC screen. (Photo Illustration by Pavlo Gonchar/SOPA Images/LightRocket via Getty Images)

Anthropic on Monday announced updates to the “responsible scaling” policy for its artificial intelligence technology, including a clearer definition of when a model is capable enough to require additional protections.

The company, backed by Amazon, published the safety and security updates in a blog post. If stress-testing shows that an AI model has the capacity to help a “moderately resourced state program” develop chemical or biological weapons, Anthropic said, it will put new security protections in place before rolling out that technology.

The response would be similar if the company determined that a model could fully automate the role of an entry-level Anthropic researcher, or accelerate the pace of AI scaling too quickly.

Anthropic closed its latest funding round earlier this month at a $61.5 billion valuation, making it one of the highest-valued AI startups. That is still a fraction of the valuation of OpenAI, which on Monday said it closed a $40 billion round at a $300 billion valuation, including the fresh capital.

The generative AI market is projected to surpass $1 trillion in revenue within a decade. In addition to high-growth startups, tech giants including Google, Amazon and Microsoft are racing to announce new products and features. Competition is also coming from China, a risk that became more evident earlier this year when DeepSeek’s AI model went viral in the U.S.

In an earlier version of its responsible scaling policy, published in October, Anthropic said it would begin sweeping physical offices for hidden devices as part of a ramped-up security effort. It also said at the time it would establish an executive risk council and build an in-house security team. The company confirmed it has built out both groups.

Anthropic also said previously that it would introduce “physical” safety processes, such as technical surveillance countermeasures, the practice of finding and identifying devices used to spy on organizations. The sweeps are conducted “using advanced detection equipment and techniques” and look for “intruders.”

Correction: An earlier version of this article incorrectly stated that certain policies implemented in October were new.

WATCH: Anthropic unveils newest AI model
