A few hours before Anthropic announced the launch of its newest model, Claude Mythos Preview, on April 7, I had just completed a six-month analysis of AI-enabled cyberattacks. My research traced Chinese state-sponsored cyber campaigns against U.S. critical infrastructure and found that the barrier between nation-state-level hacking and everyone else was eroding far too fast.
By the time I closed my laptop that afternoon, Mythos had shattered that barrier. The new model could, in principle, autonomously exploit previously unknown vulnerabilities in virtually every major operating system and web browser on Earth, without human supervision. My threat model, seemingly alarmist at breakfast, was too conservative by dinner.
For years, Anthropic’s CEO Dario Amodei has kept copies of Richard Rhodes’ The Making of the Atomic Bomb on the company’s coffee tables, pressing the book on employees and interviewers alike. His thesis was that the scientists who built the most transformative weapon in history also failed to control how it would be used.
Mythos is Anthropic’s nuclear moment. Not in destructive equivalence, since no zero-day exploit has killed people at the scale of a nuclear weapon, but in the sense Amodei suggests: a weapon with a seismic destructive capability that its makers may be unable to control.
The atomic bomb was not just a “bigger bomb.” Nuclear weapons transformed coercion logic, allowing any state with such weapons to coerce other states in a way that, historically, would only have been possible by defeating them in battle. Mythos promises nearly anyone a coercive power that, until recently, was the domain of only the strongest governments. The model erases the “state actors” premise in the U.S. doctrine of persistent engagement, wherein rival states’ network penetrations are stabilized by U.S. counter-penetrations. It is a recipe for chaos and asymmetry in the wielding of cyber power.
Like the atomic bomb, Mythos marks a step change that calls into question all prior cyber deterrence logic. The current U.S. response is not moving fast enough against this spiraling threat. This article makes the case that chaotic asymmetry is now inevitable, that defensive AI cannot close the gap in time, and that the U.S. government should take two steps before the window of opportunity closes.
The Mythos Moment
Given just 24 hours, Mythos autonomously identified and exploited a 17-year-old remote code execution vulnerability in FreeBSD, an important operating system favored in high-security server environments, granting unauthenticated root access to any machine running it. Unauthenticated root access means an attacker with no credentials or prior foothold gains complete administrative control over a system. Engineers at Anthropic with no formal security training asked the model to find remote code execution vulnerabilities overnight, and woke up the next morning to working exploits.
Whereas prior Anthropic models converted known vulnerabilities into working exploits at a low success rate, Mythos does so 72.4 percent of the time. It has found thousands of zero-day vulnerabilities (previously unknown flaws), some of them hidden for decades through generations of security audits and automated tools that had probed the same code countless times without finding them. Amodei says that more powerful models are coming, and the United States needs a plan to respond.
But America doesn’t have one yet, and neither does anyone else. Mythos remains locked behind Anthropic’s own gates, released only to a curated group of companies through Project Glasswing. This initiative, announced on April 7, will give companies like Google, Cisco, and Microsoft select access (along with a hundred million dollars in usage credits) specifically for defensive security work. The rest of the world outside Glasswing, including governments and the operators of critical infrastructure, waits.
The obvious question is why critical infrastructure operators cannot simply use Mythos or another model themselves to scan for vulnerabilities. There are two reasons. First, the defensive capabilities live in the same model as the offensive ones. Releasing it more widely, even to select organizations for defensive purposes, would risk the very proliferation event Anthropic is racing to prevent. Second, even if access were granted, most water utilities and hospital networks lack the funding for the security teams and patch infrastructure needed to act quickly on what the model would find. Glasswing is a gate for now, but historically, gates have not kept nation-state-grade capabilities contained for long.
Prior offensive cyber capabilities of this magnitude were built inside a state intelligence apparatus, deployed by that state’s operators, and — when they escaped — did so catastrophically. The National Security Agency’s Equation Group tools, leaked by the Shadow Brokers in 2017, were repurposed into WannaCry and NotPetya: malware that shut down hospitals and crippled shipping companies, causing an estimated ten billion dollars in global damage.
Those weapons originated inside classified compartments with at least nominal accountability structures. Mythos originated in a commercial product roadmap, was accidentally exposed in a publicly accessible data cache before its creators were ready to announce it, and will (by Anthropic’s own estimate) be matched by open-source models, foreign programs, and uncontrolled actors within six to 18 months.
The Defense Process Is Already Dead
Fewer than one percent of the vulnerabilities Mythos has found have been patched. That is a stark indictment of the disclosure architecture that cybersecurity has built over the past decades.
That architecture was designed for a world where vulnerabilities were manually discovered one at a time and patched over a period of weeks or months. Mythos finds thousands of vulnerabilities in weeks, a volume that promises to overwhelm the existing patch cycle. The assumption that defenders can keep pace with disclosure is failing. The quarterly patch cycle was already too slow, and Mythos will completely end it.
A natural counterargument is that defensive AI can keep pace. Anthropic argues that in the long run, AI-enabled vulnerability discovery and patching should benefit defenders as much as attackers. That is plausible, but the long run is not the problem. Anthropic estimates that equivalent capabilities will exist elsewhere within six to 18 months (an optimistic upper bound, in my view), and even the most aggressive investment in defensive AI will not close cybersecurity’s offense-defense gap in that window.
Cybersecurity scholars have long warned against overstating this gap. The established doctrine holds that offensive advantage in cyberspace stems from poor defensive management rather than any structural attacker advantage, that capability rarely translates into decisive political effects, and that attribution difficulty constrains attackers as much as defenders. All three warnings held in a world before Mythos. They do not hold now.
An offensive model needs to find one exploitable vulnerability in a target system. A defensive one needs to find and patch every vulnerability across every system, continuously, before adversaries find any of them. Attackers operate at the speed of a prompt, while defenders operate at the speed of a patch cycle embedded in a bureaucracy (procurement processes, change management workflows, vendor certification requirements, and regulatory approval timelines). No amount of investment in defensive AI will reform those institutional processes in the short term.
Anthropic’s Response Is Not a Solution to the Chaos
Intelligence services in China and Russia have invested heavily in offensive cyber operations. In November 2025, Anthropic discovered Chinese state-sponsored groups running a coordinated campaign using Claude, targeting roughly thirty organizations, including technology companies, financial institutions, and government agencies, before the company disrupted it.
The central danger may well come from non-state actors. The DarkSide ransomware group’s 2021 attack on Colonial Pipeline (a high-water mark of non-state offensive cyber capability at the time) was a complex undertaking: A team of experienced operators obtained a compromised password from a separate data breach and needed extensive criminal infrastructure to collect the ransom. A Mythos-class model handed to a motivated amateur will strip away most of those barriers.
What makes this shift different from prior cyber capabilities is its speed and reach. When nation-states developed advanced offensive cyber tools, the proliferation timeline was measured in years or decades, from development, to theft or leak, to adversary replication, to non-state adoption. Mythos compresses that arc into a single model release cycle.
Rather than releasing Mythos publicly or handing it to a government partner under a classification stamp, Anthropic launched Project Glasswing. The company also briefed the Cybersecurity and Infrastructure Security Agency and the Department of Commerce before launch.
While a promising start, Project Glasswing’s reach does not match the scale of the problem. Its partners, fifty-plus companies in total, are among the world’s most sophisticated software operators. But at the time of writing, there is no government-announced mechanism by which a municipal water authority or a regional hospital network can access Mythos-class defensive scanning. The Cybersecurity and Infrastructure Security Agency, itself under-resourced and still finalizing its basic incident reporting rules, has no announced authority or funding to serve as the conduit. A portion of the one hundred million dollars in usage credits committed to Project Glasswing should be set aside specifically for under-resourced critical infrastructure operators, with the agency or the sector-specific risk management agencies as the distribution mechanism.
What America Should Do Before Chaotic Asymmetry Arrives
The window of time until AI unleashes chaotic asymmetry is closing. Previous efforts provide a baseline for further action. The Biden administration’s 2023 executive order on AI safety required frontier model developers to share safety test results with the government before public release. The Cybersecurity and Infrastructure Security Agency has been developing mandatory cyber incident reporting rules under the 2022 Cyber Incident Reporting for Critical Infrastructure Act that would, for the first time, compel operators to disclose breaches within 72 hours. The Office of the Director of National Intelligence Annual Threat Assessment has explicitly flagged AI-enabled cyber operations as a tier-one national security threat.
But none of these policies was designed for Mythos. The executive order governs pre-release safety testing, not post-proliferation capability management. The Cybersecurity and Infrastructure Security Agency incident reporting rules help us learn from breaches, but only after they occur. The Office of the Director of National Intelligence threat assessment describes the problem, but does not prescribe a response. Collectively, they form a pre-Mythos toolkit for a post-Mythos world. The appropriate response is a whole-of-government posture that treats the proliferation timeline as a hard deadline and coordinates the Departments of Defense, State, Energy, and Treasury under a unified strategic framework before the window closes. In practice, that means a single senior official, at the level of the National Security Advisor, with explicit authority to compel action across agencies and a classified threat briefing shared with critical infrastructure operators within thirty days.
The second step is the hardest: The patch cycle should be replaced. The existing disclosure framework for patching was built for a pre-AI world. What replaces it will require mandatory real-time vulnerability sharing between critical infrastructure operators, government, and the small number of companies with Mythos-class defensive capabilities. Two decades of failed mandatory disclosure legislation mean that even a good-faith attempt will face years of legal and political friction. The realistic near-term goal is narrower: a pilot program focused on the sectors where a successful attack would be most catastrophic, with legal protections that give companies a reason to participate.
Approximately 85 percent of U.S. critical infrastructure is privately owned, and Congress has spent two decades unable to make information sharing mandatory. The United States has instead settled for voluntary frameworks that companies can and do ignore. But even with the right legal authority, automated patching is not a technical switch waiting to be flipped. A vulnerability in a widely used email client may trace back to an open-source library that dozens of other applications depend on, so patching it would require coordinating multiple maintainers and verifying nothing else breaks. Software running a pacemaker or a nuclear plant control system presents an even harder problem: You cannot push an update to an implanted device or halt a reactor for a patch cycle. No framework will patch everything in time. America’s goal should be to dramatically narrow the window of exposure for the highest-consequence systems, while building redundancy and manual fallbacks around what cannot be patched. With that caveat, here are two precedents showing what is achievable.
The Federal Aviation Administration’s Emergency Airworthiness Directive system allows the agency to ground entire fleets immediately when an unsafe condition is identified, bypassing the normal notice-and-comment rulemaking process (e.g., the Boeing 737 MAX in 2019). The Nuclear Regulatory Commission’s cybersecurity framework requires nuclear plants to maintain continuously monitored cyber protection plans. Both show that pre-delegated mandatory authority (with federal enforcement teeth) is possible for high-risk, high-urgency environments.
Industry will resist this, as it usually resists mandatory compliance regimes. But the Federal Aviation Administration model works because Congress made the cost of non-compliance exceed the cost of fixing the problem. That liability structure does not yet exist for critical infrastructure operators. Building it is a surmountable policy challenge, and the Federal Aviation Administration and Nuclear Regulatory Commission are evidence that it can be done.
The next 18 months will pass quickly, and chaotic asymmetry will arrive with them. What will America do before then? The scientists of the Manhattan Project tried to set conditions on the bomb’s employment but were overtaken by the imperatives of the Cold War. Anthropic’s Dario Amodei has read that history closely enough to know how it ends. The American institutions responsible for acting on it have not yet shown the same understanding.
Naveen Krishnan is a Belfer Young Leaders fellow at the Belfer Center for Science and International Affairs at Harvard, where he specializes in artificial intelligence-enabled offensive cyber capabilities and Chinese state-sponsored advanced persistent threat campaigns against the United States. He is an intelligence officer in the U.S. Navy Reserve, a polyglot, and was a Liu Xiaobo fellow to the U.S. Congressional Commission on China. His views are his own and do not represent those of the U.S. Navy or Harvard.
Image: Midjourney

