Anthropic is already at war

The Pentagon, Arlington, Virginia. Anthropic’s AI models were embedded in CENTCOM’s targeting infrastructure before the first bomb fell on Iran.

The U.S. military did not drop a single bomb on Iran without first running the target through an algorithm.

That is not a metaphor. When the Pentagon launched Operation Epic Fury on Feb. 28, 2026, the strikes against Iranian military and leadership sites were processed, cross-referenced, and justified by artificial intelligence systems leased from Silicon Valley corporations. 

According to the Washington Post, on the first day of operations alone, Anthropic’s Claude generated approximately 1,000 prioritized targets, synthesizing satellite imagery, signals intelligence, and surveillance feeds in real time to produce target lists complete with GPS coordinates, weapons recommendations, and automated legal justifications for each strike. 

The same technology powering consumer chatbots is now embedded in CENTCOM’s targeting infrastructure, handing commanders a tidy summary: Here is the target, here is the risk, here is the logical case for the strike.

By March 4, 2026, the death toll in Iran had reached at least 1,230, according to the official Foundation of Martyrs and Veterans Affairs.

Among the dead were 165 students and staff at the Shajareh Tayyebeh girls’ elementary school in the southern city of Minab, struck on Feb. 28 — the first day of the war — by what Middle East Eye reported was a double-tap strike: two missiles, with the second hitting survivors who had been moved to the school’s prayer hall. Al Jazeera’s Digital Investigations Unit independently geolocated the strike using satellite imagery and video footage.

The machine does not pull the trigger. A human does — which is precisely how the Pentagon wants it framed. But by the time a commander authorizes a strike, the AI has already read the intercepts, identified the target, assessed the collateral damage risk, and generated the recommendation. The human signature at the end of that chain is not deliberation. It is approval.

The Pentagon built this

The news media tells this story in one direction: the military is reaching into Silicon Valley, recruiting civilian technology for the battlefield. That framing gets the history exactly backwards.

Artificial intelligence was a military project before it was anything else. DARPA — the Pentagon’s Defense Advanced Research Projects Agency — funded the foundational AI research at MIT, Carnegie Mellon, and Stanford from the 1950s onward. The internet began as ARPANET, a Pentagon communications system designed to survive nuclear war. GPS was a U.S. Air Force project. The neural network research underlying every major AI model today came out of university labs on federal grants, developed by researchers working inside the defense apparatus.

This is not a story about public investment being stolen by private companies. The imperialist state and monopoly capital do not stand in opposition to each other — they function as a single system. DARPA did not fund AI research as a public service that corporations later hijacked. It funded the research as the state apparatus of monopoly capital, directing resources toward the technological needs of empire.

When those investments matured, the monopolies commercialized them — not by taking something that belonged elsewhere, but by completing the circuit. DARPA spent decades funding university research with no guaranteed payoff and no return expected on any single investment, because no private company would absorb that risk or wait that long.

When the research paid off, Google, Microsoft, and their rivals were there to patent the results, hire the researchers, and sell the technology back as a product. The state took the risk. The corporations took the profit. That is the arrangement, not a deviation from it.

So when Anthropic’s Dario Amodei talks about his company’s “mission,” or when OpenAI’s Sam Altman invokes the “benefit of humanity,” they are describing what they built on top of that foundation — a commercial layer over a weapons research lineage, rebranded for consumer markets. Google is not Google without DARPA. Anthropic is not Anthropic without the scientific infrastructure the Pentagon built.

The Pentagon is not now reaching into Silicon Valley. It is asking for its technology back — on its terms.

Not a dissenter

News coverage of the dispute between the Trump administration and Anthropic has left many readers with a misleading impression: that Anthropic is resisting being drawn into military operations, that its “Constitutional AI” principles represent a refusal to participate in war.

That is not what is happening.

Anthropic’s Claude models are already so deeply embedded in CENTCOM’s targeting and battle simulation infrastructure that, by the Pentagon’s own estimate, transitioning away from them will take at least six months. You do not get that embedded by accident. Anthropic signed the contracts.

In November 2024, it partnered with Palantir and Amazon Web Services to integrate Claude into classified military networks, followed by the launch of “Claude Gov” for national security agencies in June 2025. Palantir’s Maven Smart System — the Pentagon’s flagship AI warfare program, now operating under a contract worth nearly $1.3 billion and serving over 25,000 users across every U.S. Combatant Command — is the platform Claude runs on. 

When Operation Epic Fury struck Iran, those strikes were planned and justified with Anthropic’s systems running in the background. When the January 2026 raid captured Venezuelan President Nicolás Maduro, Claude was in the loop.

Amodei confirmed this himself. In a public statement on Feb. 27, he wrote: “Anthropic understands that the Department of War, not private companies, makes military decisions. We have never raised objections to particular military operations.” He listed what Claude is actually being used for: “intelligence analysis, modeling and simulation, operational planning, cyber operations, and more.” His stated restrictions were never directed at any of those functions. They were directed at two narrow carve-outs — mass domestic surveillance and fully autonomous weapons — while everything else was explicitly endorsed.

Anthropic is not resisting war. It is already conducting it. The dispute with the Pentagon is about the terms — specifically, whether Anthropic’s restrictions on bulk domestic surveillance and fully autonomous targeting can be waived. That is a narrower argument than the press coverage suggests, and framing it as corporate conscience versus military necessity flatters Anthropic considerably more than the facts warrant.

Anthropic says its “Constitutional AI” framework places restrictions on certain uses. Under it, Anthropic refuses to allow its models to analyze bulk data on U.S. civilians — GPS locations, credit card transactions, search histories — and refuses to enable fully autonomous lethal systems that kill without human authorization. The Pentagon, under Defense Secretary Pete Hegseth, has pushed to eliminate those restrictions entirely, arguing that private companies cannot use “ideological whims” to constrain military readiness.

But those limits coexist with full participation in targeting, strike planning, and battle simulation against foreign populations. Anthropic has drawn a line — and it is considerably further into war-making than its public reputation suggests.

The shakedown

When the Trump administration designated Anthropic a “supply chain risk,” it framed the decision as a national security judgment. The timeline invites a different reading.

Trump signed the executive order banning Anthropic from federal systems on a Friday. Hours later, the same night, OpenAI CEO Sam Altman announced on X that his company had reached a classified network agreement with the Pentagon. Hours after that, U.S. and Israeli forces launched strikes on Iran. The sequence was not subtle.

Altman subsequently admitted he had moved too fast. “We were genuinely trying to de-escalate things and avoid a much worse outcome,” he wrote in a follow-up post, “but I think it just looked opportunistic and sloppy.” The deal had to be revised days later after legal analysts identified surveillance loopholes in the original contract language.

The substantive case against Anthropic evaporates under scrutiny. The Pentagon had spent months insisting Anthropic’s red lines — no mass domestic surveillance, no fully autonomous weapons — were ideological overreach, “woke” interference with military readiness. Trump himself posted on Truth Social that the government would not be dictated to by “some out-of-control, Radical Left AI company.” A senior Pentagon official told Axios: “The problem with Dario is, with him, it’s ideological. We know who we’re dealing with.”

Then OpenAI got a deal that included the same prohibitions — no mass domestic surveillance, no fully autonomous weapons. The Pentagon accepted them. Charlie Bullock, senior research fellow at the Institute for Law and AI, put the question plainly: “I am confused about why the Pentagon would accept this language when they just tried to nuke Anthropic for asking for something very similar to this.”

No one in the administration has answered that question. Altman’s own explanation was that Anthropic “seemed more focused on specific prohibitions in the contract, rather than citing applicable laws” — a distinction that, charitably, is technical, and uncharitably, is a post-hoc rationalization for a decision already made on other grounds. At an internal all-hands meeting, Altman told his own employees what the deal actually meant: “You do not get to make operational decisions.”

This administration has a pattern. It designates a target publicly, threatens to destroy its business through regulatory or contractual action, and then resolves the dispute once the target pays up or falls in line. In January 2025, a full year before Anthropic was banned from federal systems, Trump stood alongside Altman, SoftBank, and Oracle at the White House to announce Stargate — a $500 billion AI infrastructure initiative with OpenAI at its center, a very public political alliance cemented before the cameras. Elon Musk’s xAI, whose owner sits inside the administration, was fast-tracked into classified contracts on the strength of being more “patriotic.”

The question the coverage has not fully pressed is whether the Anthropic designation was a national security decision or a negotiating position — and whether a sufficiently large concession, financial or political, could have resolved it just as quickly.

The answer may lie in who has been running the Pentagon’s procurement decisions. The Quincy Institute for Responsible Statecraft has documented that Silicon Valley tech companies and the venture capital firms behind them played a direct role in vetting candidates for Pentagon positions under Trump. Vice President J.D. Vance has close ties to Palantir founder Peter Thiel. Palantir is the company that brokered Anthropic’s classified network integration in the first place — and is now positioning itself as a central node in whatever AI architecture replaces it.

At least 50 former Pentagon officials have passed through the new revolving door into military-related venture capital and private equity firms since 2019, according to Roberto González, a cultural anthropologist at San José State University, whose 2024 report for the Costs of War Project at Brown University documented the transformation of the military-industrial complex. They leverage their connections with current officials to steer contracts toward firms in their investment portfolios. This is not corruption in the old sense — a brown envelope passed under a table. It is the institutional structure of monopoly capital operating as designed.

Anthropic has no comparable patron inside the administration. It made no equivalent of the Stargate deal. Whether that reflects principle or the absence of an offer worth taking is something only Dario Amodei, Anthropic’s CEO, and his sister Daniela Amodei, its president, know.

The logic of the machine

AI does not make war more precise. It makes war faster, and it makes the justifications for war sound more rigorous than they are.

Pentagon planners rely most heavily on three functions: intelligence assessment, which synthesizes intercepts and satellite data into threat evaluations; target identification; and battle simulation, which models strike sequences and predicts collateral damage and escalation risks. These functions are genuinely useful for compressing hours of analysis into seconds. They are also genuinely dangerous for exactly the same reason.

González also warned that aggressive Silicon Valley business models — built on moving fast and disrupting existing markets — are driving the development of weapons systems that are inadequately tested and algorithmically flawed.

The template for what this looks like in practice was established in Gaza. As +972 Magazine documented, Israel’s “Lavender” AI system flagged approximately 37,000 Palestinians for assassination. The systematic replacement of human target selection with algorithmic target generation — with humans reduced to rubber-stamping the output — is now being deployed at scale against Iran, with Claude generating hundreds of targets daily. As The New Republic observed, meaningful human control becomes a bureaucratic fiction when hundreds of AI-generated targets are processed daily with inconsistent verification across military units.

A February 2026 study by Professor Kenneth Payne of King’s College London’s Defence Studies Department tested three frontier AI models — GPT-5.2, Claude Sonnet 4, and Gemini 3 Flash — across 21 simulated nuclear crisis scenarios. In 95% of games, both sides engaged in mutual nuclear signaling. More striking: not one model ever chose accommodation or concession, even when losing. The most de-escalatory move any model made — selecting “Return to Start Line” — occurred less than 7% of the time. All three models consistently treated nuclear weapons as tools of coercion rather than instruments of last resort. As Payne put it, the nuclear taboo that has held among human decision-makers since 1945 was simply no impediment to these systems.

Claude Sonnet 4’s outputs are in that dataset. The model the Pentagon is supposedly phasing out for being insufficiently “patriotic” was, in simulation, perfectly willing to reach for nuclear coercion.

The Pentagon’s “human-in-the-loop” requirement — the legal insistence that a human must authorize every lethal strike — is less a safeguard than a formality. When the logic of a kill decision arrives pre-processed and pre-justified by a machine trained on military doctrine, the human at the end of the chain is ratifying a recommendation, not deliberating independently. The accountability is preserved on paper. The deliberation has already happened inside the model.

War contracts and a burning balance sheet

These companies are not in the weapons business because they believe in it. They are in it because they are losing money at a staggering rate and the Pentagon never stops paying its bills — at a scale no commercial customer can match.

González’s report found that between 2018 and 2022, U.S. military and intelligence agencies awarded at least $28 billion in contracts to Microsoft, Amazon, and Google alone. The five largest military contracts to major tech firms over that same period had ceilings totaling at least $53 billion. Venture capital poured another $100 billion into defense technology startups between 2021 and 2023. Many of the largest contracts are classified and withheld from public procurement databases — so the real figures are almost certainly higher.

The AI industry has borrowed trillions of dollars to build data centers it cannot yet make profitable. A late-2025 J.P. Morgan market study projected $1.5 trillion in investment-grade bond issuance to fund AI data centers. AI debt now accounts for 15–20 cents of every dollar lent to major corporations in that bond market. To put that in perspective: the subprime mortgage debt that collapsed the global economy in 2008 made up less than 10 cents on the dollar when it blew up. The AI debt hole is already bigger.

The reason is simple: the data centers at the heart of the AI industry lose money. According to Harris Kupperman, founder of hedge fund Praetorian, the data centers built in 2025 cost $40 billion a year just in wear and tear on equipment — but bring in only $15 to $20 billion in revenue. Meta found that the specialized chips these centers run on break down at a rate of 9% per year, meaning a center loses more than a quarter of its capacity within three years. Investor Michael Burry has pointed out that the companies running these centers hide that reality by claiming their hardware lasts five years. It does not.
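That depreciation arithmetic is simple enough to check. Here is a minimal sketch in Python, illustrative only; the 9% annual rate is the figure reported above, but applying it as a simple or a compounding loss is an assumption for illustration, not Meta’s methodology:

```python
# Illustrative check of the capacity-loss claim above.
# Assumes a flat 9% annual chip failure rate, applied two ways;
# real hardware failure curves are messier than either.

ANNUAL_FAILURE_RATE = 0.09  # reported annual failure rate for AI accelerator chips

def capacity_lost(years: int, compounding: bool = False) -> float:
    """Fraction of original compute capacity lost after `years`."""
    if compounding:
        # 9% of the *surviving* chips fail each year
        return 1 - (1 - ANNUAL_FAILURE_RATE) ** years
    # 9% of the *original* chips fail each year
    return ANNUAL_FAILURE_RATE * years

print(f"After 3 years, simple:      {capacity_lost(3):.0%}")        # 27%
print(f"After 3 years, compounding: {capacity_lost(3, True):.1%}")  # 24.6%
```

Either way, roughly a quarter of a center’s capacity is gone within three years, which is hard to square with depreciation schedules that assume the hardware survives five.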

So where does the money come from to keep the lights on? Two places: Pentagon contracts and the elimination of workers’ jobs. Both strategies serve the same purpose — finding a route to profit before the whole thing collapses.

The military contract is attractive precisely because the government pays reliably, at scale, with no questions asked about whether the product works. And at home, the same AI tools being sold to CENTCOM to process targeting data are being sold to law firms, banks, and logistics companies to eliminate the jobs of paralegals, analysts, and coordinators. Skilled workers who spent years building expertise are watching it automated away. Lower-wage workers, not yet in the crosshairs, are seeing their pay stagnate as the threat of replacement is used to suppress any demand for raises.

The wealth generated by all of this flows to the people who own the infrastructure. That is the arrangement the Pentagon’s AI contracts are propping up.

Who builds it and who dies

The companies competing to replace Anthropic in the Pentagon’s classified networks are not offering better technology or better ethics. They are offering fewer questions. OpenAI and xAI are not declining to build autonomous targeting tools — they are bidding for the contract to build them sooner.

Workers have no say in how these systems are deployed against them at home or abroad — not because they weren’t invited to the meeting, but because they own none of the infrastructure and hold none of the power. The populations of West Asia whose cities are mapped, whose leaders are tracked, and whose infrastructure is targeted by these algorithms are not stakeholders to be consulted. They are targets of empire.

Operation Epic Fury was sold to the public as a precision campaign, surgical and justified by intelligence. What it actually was: Corporate AI tools, leased at massive cost, embedded in military command structures by private contracts, producing logical-sounding justifications for strikes that killed schoolchildren in Minab and generated instability that will last for years. A University of Maryland poll found only 21% of U.S. residents favored the attack on Iran. A YouGov survey recorded 34% approval — the lowest for any U.S. military action in modern history.

Anthropic built the tools, signed the contracts, and processed the targets. Now it is negotiating the terms of its continued participation while the news media casts it as a dissenter. OpenAI and xAI are competing to take its place by promising to ask even fewer questions.

None of them are reluctant. The machine is already running.

