Anthropic Warns Its New AI Could Enable ‘Weapons We Can’t Even Envision.’ Skeptics Aren’t Buying It.
The company says Claude Mythos could exploit critical infrastructure like power grids. Critics say it’s all a big marketing ploy.
Anthropic says its new model, Claude Mythos, has such catastrophic potential that the company doesn’t want to release it to the general public, reports CNN.
According to the company, Mythos has found thousands of major security vulnerabilities and could be used to exploit critical infrastructure such as power grids and hospitals. AI researcher Roman Yampolskiy warned the model could enable “biological weapons, chemical weapons, novel weapons we can’t even envision.” For this reason, Anthropic is limiting access to about 40 handpicked companies — including Amazon, Google, Apple, Nvidia and CrowdStrike.
But critics, including President Trump’s AI adviser David Sacks, accuse Anthropic of “regulatory capture” — of wielding safety warnings as a marketing strategy. Perry Metzger, chairman of the AI policy group Alliance for the Future, said the hype has “spread like wildfire” as a result of the warning.