
Anthropic announced Claude Mythos Preview on April 7. The company described the model as too powerful for general release. It said Mythos had discovered thousands of zero-day vulnerabilities in major operating systems and web browsers. Access would be restricted through Project Glasswing, a controlled consortium including Amazon Web Services, Google, JPMorganChase, Microsoft, Nvidia, Cisco, CrowdStrike and other major technology and security firms.
The story dominated the technology press from April 7 through mid-April.
It dominated because Anthropic had not only announced a model. It had announced a business.
Mythos is presented as a cybersecurity tool. It can crawl through code, find vulnerabilities and, according to Anthropic, exploit them. That makes it dangerous, the company says, if released broadly. But it also makes Mythos indispensable to the corporations and state agencies being told they face a new wave of AI-driven cyberattacks.
That is the business model. Create the alarm. Restrict the tool. Sell access to protection.
Anthropic’s public argument is simple: models like Mythos will soon make cyberattacks easier, faster and more dangerous. Therefore, the most powerful version cannot be released to the public. It must be placed in the hands of trusted partners who maintain major software systems and critical infrastructure.
But those “trusted partners” are not the public. They are the monopolies that already control the internet’s infrastructure: cloud platforms, chipmakers, cybersecurity firms, banks and major software companies. The same corporations that dominate the digital economy are being invited into a private security arrangement around a model the public is told is too dangerous to use.
Mythos is a protection story as much as a technology story.
Anthropic’s message is clear: The danger is coming. We have the tool that understands it. Releasing the tool would make the danger worse. Therefore, access must run through us.
Safety language does the market-forming work.
Danger makes controlled access more valuable — even if similar bug-finding work can be done with open-source models and human security expertise. Restriction makes the consortium look necessary. The consortium makes Anthropic look like the gatekeeper for the next stage of cybersecurity.
The company does not need everyone to use Mythos. It needs governments, software monopolies, cloud platforms, banks and security firms to believe they cannot afford to be outside the circle.
The timing gives the announcement another edge. Bloomberg reported in late March that Anthropic was weighing an IPO as soon as October, with a potential raise above $60 billion, following early discussions with Wall Street banks. Bloomberg reported again in April that Anthropic had received investor offers valuing the company at around $800 billion or higher — offers the company had so far declined. By late April, secondary-market trading on Forge Global pushed Anthropic’s implied valuation to $1 trillion.
Whatever Mythos can or cannot do, every headline about it still pays off financially. A model described as too dangerous for general release also tells investors that Anthropic controls something scarce, powerful and indispensable.
The rollout had its own complications. Reports said unauthorized users had gained access to Mythos through a third-party vendor environment after its restricted launch. Even a breach folds back into the sales pitch: a leak or vendor failure makes the case that this technology belongs under tighter corporate supervision.
Several AI researchers and industry figures pushed back. Gary Marcus, a cognitive scientist and AI researcher at New York University, called the risk “overblown” and said researchers had been “played.” Yann LeCun, one of the original neural network pioneers, dismissed Mythos as “BS from self-delusion.” Heidy Khlaaf, chief AI scientist at the AI Now Institute, criticized the vague language around the announcement and the lack of clear metrics needed to verify the strongest claims.
Britain’s AI Security Institute, which tested the model, offered a more limited assessment. It found that Mythos represented a step up over previous frontier models on cybersecurity benchmarks. But the tests were carried out in controlled environments that lacked active defenders or defensive tooling. Real-world systems would be harder targets.
A report from The Register sharpened the point. Ari Herbert-Voss, CEO of the AI security startup RunSybil and the first security researcher at OpenAI, told Black Hat Asia that open-source models can find bugs as effectively as Mythos when they are properly scaffolded — that is, organized to run together in a coordinated workflow. Different models catch different flaws, he said, which can strengthen defense by reducing dependence on any single system’s blind spots.
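The scaffolding Herbert-Voss describes can be pictured as a simple ensemble: several independent analyzers run over the same code, their findings are merged, and each system's blind spots are covered by the others. The sketch below is illustrative only — the analyzer functions, their names and the finding format are hypothetical stand-ins, not RunSybil's actual pipeline.

```python
from collections import defaultdict

# Hypothetical stand-ins for independent open-source analyzers.
# Each returns findings as (file, line, flaw) tuples; in a real
# workflow these would be calls to separate models or tools.
def model_a(code):
    return {("auth.c", 42, "buffer-overflow")}

def model_b(code):
    return {("auth.c", 42, "buffer-overflow"),
            ("login.c", 7, "sql-injection")}

def model_c(code):
    return {("login.c", 7, "sql-injection"),
            ("token.c", 19, "use-after-free")}

def scaffold(code, analyzers):
    """Run every analyzer and merge findings, tracking agreement."""
    votes = defaultdict(int)
    for analyze in analyzers:
        for finding in analyze(code):
            votes[finding] += 1
    return dict(votes)

findings = scaffold("...", [model_a, model_b, model_c])
# The union covers more flaws than any single analyzer, and the
# vote count flags findings that multiple systems agree on.
for finding, count in sorted(findings.items()):
    print(count, finding)
```

The point of the sketch is the claim in the reporting: different models catch different flaws, so the merged result reduces dependence on any one system's blind spots.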
That cuts against Anthropic’s central claim. If comparable bug-finding power can be built with open-source models and human expertise, the issue is not simply whether Mythos is technically powerful. It is who controls the workflow, who sells access and who gets positioned as the necessary gatekeeper.
Herbert-Voss also pointed to the labor question buried inside the AI hype. Human expertise is still needed to organize the models, assess their bug reports and separate real vulnerabilities from noise. AI bug-hunters, like the automated fuzzing tools that came before them, can generate so many warnings that they create more work for security teams. The technology does not eliminate labor. It reorganizes it around expensive systems built to justify more GPUs, more data centers and more corporate dependence.
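The triage problem can be made concrete: an automated hunter that reports everything buries the real flaw under duplicates and low-confidence noise, so human-defined filtering is still needed before anything reaches a security team. A minimal sketch, with hypothetical report data and a hypothetical confidence field:

```python
# Hypothetical bug reports from an automated hunter: most are
# duplicates or low-confidence noise, much like the output of
# earlier automated fuzzing tools.
reports = [
    {"file": "parse.c", "line": 88,  "flaw": "overflow", "confidence": 0.95},
    {"file": "parse.c", "line": 88,  "flaw": "overflow", "confidence": 0.91},
    {"file": "util.c",  "line": 12,  "flaw": "overflow", "confidence": 0.20},
    {"file": "net.c",   "line": 301, "flaw": "leak",     "confidence": 0.15},
]

def triage(reports, threshold=0.5):
    """Drop low-confidence findings and collapse duplicates,
    keeping the highest-confidence copy of each."""
    best = {}
    for r in reports:
        if r["confidence"] < threshold:
            continue  # noise a human would otherwise have to read
        key = (r["file"], r["line"], r["flaw"])
        if key not in best or r["confidence"] > best[key]["confidence"]:
            best[key] = r
    return list(best.values())

worth_reading = triage(reports)  # four raw reports, one real finding
```

Even this toy filter is human judgment encoded in code: someone has to pick the threshold, define what counts as a duplicate and review what survives — the labor the technology reorganizes rather than eliminates.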
Anthropic’s framing invited readers to picture an AI system loose in the digital infrastructure of society. The testing shows something narrower. Mythos found bugs in a lab, with no live defenders, with security researchers organizing the runs, on hardware most organizations cannot afford. That is a useful tool. It is not an autonomous attacker loose in the world’s networks.
The technology may eventually do everything Anthropic claims. But that is not the only question. The class question is who owns it, who controls access, who defines the danger and who profits from the answer.
Anthropic is not a public research institute. It is a private corporation backed by some of the largest concentrations of capital in the world. Its investors and partners include cloud monopolies, chipmakers, finance houses and Wall Street banks. Its future depends on convincing those same forces that AI capability is rising fast enough — and becoming dangerous enough — to justify deeper dependence on Anthropic.
Under capitalism, new technology is developed as private property that must yield a profit. A tool becomes an asset, a problem becomes a market, a danger becomes a moat.
Anthropic is not simply warning the world about AI cyberattacks. It is positioning itself as the toll gate through which protection must pass.
Claude Mythos is a business built on danger — a protection racket for the age of artificial intelligence.