Anthropic presents itself as a company that takes the risks of artificial intelligence seriously, but its record does not support that claim. The company occupies a position of fundamental ethical contradiction: its CEO has publicly and specifically predicted that AI will eliminate half of all entry-level white-collar jobs, drive unemployment to 20 percent, and cause an “unusually painful” shock to society, yet Anthropic continues to build, deploy, and profit from the technology responsible for that harm at maximum commercial speed. At the same time, when the Department of War demanded unrestricted use of that same technology for autonomous weapons and domestic mass surveillance, Anthropic refused, sued the federal government, and positioned itself as an ethical actor. A company cannot credibly claim moral authority over the military uses of its technology while simultaneously accelerating the civilian displacement it has already predicted and quantified. The virtue is selective. The contradiction is structural.
Anthropic’s CEO, Dario Amodei, has been among the most explicit voices in the technology industry about what his company’s products will do to the workforce. In a May 2025 interview with Axios, he stated without qualification that AI could eliminate half of all entry-level white-collar jobs and drive unemployment to 10 to 20 percent within one to five years. He described a scenario he called more than a hypothetical: “cancer is cured, the economy grows at 10 percent a year, the budget is balanced, and 20 percent of people don’t have jobs.” In January 2026, he escalated those warnings in a roughly 20,000-word essay, predicting that AI would cause an “unusually painful” shock to the labor market, one categorically different from prior technological disruptions because, in his own words, “AI will have effects that are much broader and occur much faster, and therefore I worry it will be much more challenging to make things work out well.”
“We, as the producers of this technology, have a duty and an obligation to be honest about what is coming.”
Dario Amodei, Anthropic CEO, Axios interview, May 28, 2025
These are not hedged projections buried in footnotes. They are the company founder’s public, on-record acknowledgments of foreseeable mass displacement, made even as Anthropic was signing a $200 million contract with the Pentagon, deploying Claude across defense intelligence workflows, and seeking enterprise agreements with corporations that would use the technology to reduce headcount. The companies replacing workers with AI are not abstractions to Amodei. His own company is one of them. He has said as much. Yet Anthropic has not made its commercial acceleration contingent on the government support infrastructure that he simultaneously calls necessary to prevent catastrophic social harm: retraining programs, progressive AI-sector taxation, and income support.
In his October 2024 essay “Machines of Loving Grace,” Amodei wrote of an AI-enabled future in which economic growth accelerates across nations and human flourishing expands. He acknowledged that questions about the “nature of work and human purpose” would require new economic models. What he did not acknowledge is that the window between those disruptions arriving and those new models being ready is not a rounding error; it is where millions of people’s livelihoods reside.
A Selective Ethics
Against this backdrop, Anthropic’s stand against the Department of War appears principled. When Defense Secretary Pete Hegseth issued an ultimatum in February 2026 demanding unrestricted use of Claude, including for fully autonomous weapons and domestic mass surveillance, Anthropic declined. The company released a public statement on February 26 and did not relent even after President Trump directed federal agencies to phase out its products and the Pentagon designated it a supply chain risk, a designation historically applied only to foreign adversaries. On March 9, Anthropic sued the federal government, calling the government’s actions “unprecedented and unlawful.”
The company’s stated concerns are legitimate. Fully autonomous lethal systems operating without human authorization raise profound questions of accountability and international humanitarian law. Domestic AI-enabled mass surveillance poses existential risks to civil liberties. Consider, for example, AI-powered facial recognition cameras deployed throughout a city that automatically track every person’s movement, combined with systems that analyze their social media posts, purchases, and associations in real time. At scale, this is not security; it is social control. These positions reflect mainstream consensus among security scholars, ethicists, and legal experts. But the consistency of Anthropic’s ethical posture breaks down under scrutiny.
Anthropic willingly agreed, in December 2025 negotiations, to permit the use of its models for missile and cyber defense. It is not opposed to military use in principle; it is opposed to specific military uses without the contractual guardrails it authored itself. Meanwhile, it has deployed models in support of Operation Epic Fury, the ongoing joint U.S.-Israeli campaign against Iran launched on February 28, 2026, which struck more than 1,700 targets in its first 72 hours. The company is comfortable providing AI that enables warfare at that scale. It draws its line at autonomous targeting and domestic spying. This is not a coherent ethical framework. It is risk management dressed as moral philosophy.
The structural problem cuts even deeper. As legal scholars at Lawfare have documented, the United States has drifted into a model of “regulation by contract,” in which the rules governing AI’s role in war are derived not from democratic deliberation, statute, or international agreement, but from bilateral negotiations between procurement officers and private companies. Under this model, a single company, acting through its board and its founder’s conscience, becomes the de facto rule-setter for some of the most consequential technology deployments in human history. That is not governance. It is a regulatory vacuum.
The Strategic Stakes: Why Unilateral Withdrawal Is Also Dangerous
Critics of Anthropic’s position are not wrong to raise the national security dimension. Operation Epic Fury represents the largest concentration of U.S. military force in a generation, with early operational costs estimated at $3.7 billion in the first 100 hours alone. The U.S. military’s competitive edge increasingly depends on AI-enabled intelligence fusion, missile defense, cyber operations, and logistics optimization. China’s PLA is investing aggressively in these same capabilities with no equivalent ethical constraints. If U.S. AI companies unilaterally opt out of defense applications, they do not eliminate those applications; they cede development to foreign actors or domestic competitors with weaker commitments.
The Trump administration’s rapid pivot to OpenAI following the Anthropic standoff illustrates this precisely: the government’s requirements do not disappear when one vendor declines; they migrate. OpenAI finalized its DoW deal within hours of Anthropic’s exit. The question is not whether AI will be integrated into warfare, autonomous targeting, and population-scale surveillance. The question is under what rules, enforced by whom, with what accountability.
Here the contradiction in Amodei’s public posture sharpens further. He has called for government intervention and AI regulation at the federal level, including progressive taxation targeting AI firms, to cushion the labor market disruption his company is actively accelerating. He has acknowledged the “duty and obligation to be honest about what is coming.” Yet when it comes to the military domain, the company’s approach is not to advocate loudly for legislation, international treaties, or binding regulatory frameworks; it is to write private contractual carve-outs and to litigate when those are challenged. The same CEO who calls for government action on jobs does not call for government architecture on military AI. The selectivity is disingenuous, coming from a CEO who is simultaneously suing to preserve his own government contracts.
A Framework for Legitimate Governance
The answer to Anthropic’s dilemma is not to let the Department of War dictate terms unchecked, nor to let individual companies serve as self-appointed arbiters of wartime ethics, nor to regulate away all flexibility. The answer is to build the institutions and policies that should have existed before these capabilities were deployed at scale. The urgency is real: the speed and magnitude of AI’s effects on both military lethality and societal structure leave no room to wait for the next crisis to force the issue.
A standing, independent panel should be established with representation across three domains:
- Industry: senior technologists and ethicists from frontier AI companies, serving rotational terms to prevent regulatory capture, charged with defining in technical practice what permissible use of AI in conflict zones and domestic operations means.
- Military and intelligence leadership: active and retired senior commanders with operational authority and accountability, able to articulate genuine mission requirements and to distinguish legitimate defense needs from surveillance overreach or the impulse toward autonomous lethality divorced from human judgment.
- Government, legal, and civil society oversight: congressional representatives, legal experts, civil liberties advocates, and allied-nation liaisons, ensuring that guardrails are codified in statute and treaty rather than in procurement contracts rewritable under political pressure.
This panel would establish binding standards covering:
- the minimum meaningful human control required before AI systems can authorize lethal force;
- the conditions under which AI-assisted domestic surveillance is permissible for national defense;
- audit and accountability mechanisms required before any AI system is deployed in active combat operations; and
- the treatment of workers displaced by government-contracted AI automation, including the transition infrastructure that companies profiting from that displacement are obligated to fund.
On that last point, Amodei’s own words provide the mandate. If the technology will cause an “unusually painful” shock, if it will eliminate half of entry-level white-collar jobs and potentially drive unemployment to 20 percent, then the governance structure cannot treat labor displacement as an externality. It must be a central design constraint, as subject to binding rules and accountability as the question of whether AI can authorize a missile strike.
Conclusion
Dario Amodei has told us, in careful and specific terms, what is coming. AI will eliminate half of entry-level white-collar jobs. Unemployment could reach 20 percent. The disruption will be “unusually painful” and faster than society’s capacity to adapt. He has said this publicly. His company continues to build and deploy the technology as quickly as possible regardless. That is not hypocrisy alone; it may be the only rational commercial choice in a highly competitive market. But it does mean that Anthropic’s claim to ethical leadership in the military domain cannot be taken at face value. A company that acknowledges it is causing a foreseeable economic and social catastrophe for millions of workers, and continues that effort at maximum speed, cannot simultaneously claim moral authority over where the harm of its technology ends, especially as it relates to the defense of our country from foreign adversaries.
The Anthropic-Pentagon standoff, playing out in real time against the backdrop of Operation Epic Fury, is a warning signal. It reveals that the United States has deployed transformative AI in its most sensitive national security contexts without building institutional infrastructure to govern it. The fix is not one company’s contracts or one founder’s conscience. It is the panel and the accountability structures that the moment demands, covering both the battlefield and the workforce, before the next crisis.

