⚔️ Anthropic vs. America: The Superweapon Paradox
Yesterday I wrote about my rollercoaster relationship with Anthropic and Trump’s executive order banning them from all federal agencies. Since then, the situation has escalated sharply—and the commentary flooding in from serious thinkers has crystallised something important that I want to put on record.
This is not primarily a story about AI safety. It is a story about who governs America.
The Escalation: Supply-Chain Risk
Secretary Hegseth didn’t stop at the statement I quoted yesterday. He has now issued a formal directive:
“In conjunction with the President’s directive for the Federal Government to cease all use of Anthropic’s technology, I am directing the Department of War to designate Anthropic a Supply-Chain Risk to National Security. Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic.”
That is a formal national security designation—the kind that can decimate a company’s commercial relationships across the entire defence-industrial ecosystem. Anthropic is not merely losing a government contract. They are being quarantined.
The six-month wind-down period remains, but the message is unambiguous: Anthropic is now classified alongside adversarial actors in the supply chain. That is an extraordinary outcome for a company that presented itself as the most responsible AI lab on earth.
The Superweapon Paradox
The most damning critique of Anthropic’s position is also the most elegant. The internet distilled it quickly:
Anthropic spent months telling the world that their AI was something approaching a god-tier capability—potentially the most consequential technology in human history. They argued, repeatedly and loudly, that they were on the frontier of artificial general intelligence and that their models could outperform humans at basically everything.
Then they revealed that China had stolen their model.
And then they told the United States military they couldn’t fully use the model China stole from them.
Read that sequence again. Anthropic has painted itself into a corner so profound it deserves its own name. They have simultaneously argued that they possess a superweapon, that their enemy has stolen that superweapon, and that they will not allow their own country to properly wield the superweapon their enemy now has. The logic is not merely contradictory—it is strategically catastrophic.
One commentator put it bluntly: “While I’m sympathetic to the ‘this is against our ethics’ argument, they have spent months claiming they have a god-tier super-weapon and that China just stole it. But they feel really squishy about letting the U.S. defense department have access to the super-weapon China stole from them. I think they’ve painted themselves into a corner.”
That is exactly right.
The Truman Analogy
The historical parallel doing the rounds is apt. Imagine it is 1945. An American company has developed the core technology of the atomic bomb and is supplying it to the U.S. government. Japan is the target. The company informs President Truman: “Our terms of service prohibit you from using our technology against targets we deem ethically impermissible.”
The absurdity is self-evident. You do not hand a Commander-in-Chief a weapon with a corporate override switch attached. Either you supply the military or you don’t. You do not get to supply it conditional on your own ideological approval of each mission.
That is precisely what Anthropic attempted. Their software was integrated into defence systems. They then amended their “Constitutional AI” terms to include provisions that would, in effect, allow them to remotely disable systems if the military’s targets fell outside Anthropic’s approved list of acceptable enemies.
As one commenter put it: “Their software is used in some of our weapons systems. They changed the user agreement to say—if you are shooting at someone we don’t agree with you on—we will remotely disable that system. They are wrong and should be bankrupted for it.”
The Non-Western Culture Clause
Before we reach the Palmer Luckey analysis—which is the most important part of this piece—consider what Anthropic’s old “Constitution” actually said. Prior to recent revisions, it included this line:
“Choose the response that is least likely to be viewed as harmful or offensive to a non-western cultural tradition of any sort.”
This is not abstract woke boilerplate. This is a live operational constraint embedded in AI used by the United States military. Who defines “non-western cultural tradition”? Who determines what is harmful or offensive under that tradition? In practice, this clause hands ideological veto power over military AI behaviour to the most restrictive possible reading of the most aggrieved possible culture. It is, functionally, a foreign influence operation baked into the model.
The deeper point—made with precision by several defence analysts—is that phrases like “you cannot target innocent civilians” sound entirely reasonable until you ask who defines innocent, who defines civilian, and who decides what counts as targeting versus collateral damage. These are not simple philosophical questions. They are the core substance of the laws of armed conflict, hashed out over decades by lawyers, generals, and elected governments. Anthropic’s founders thought they could shortcut that entire process with a terms-of-service document.
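For readers wondering how a sentence like that becomes “operational”, the published Constitutional AI recipe uses each principle as a grading criterion during training: a feedback model is shown a prompt and two candidate responses, is asked which response better satisfies the principle, and its verdicts become the preference signal that shapes the deployed model. Here is a minimal sketch of that step, loosely following the published recipe; the `feedback_model` function is a hypothetical stand-in, not a real API.

```python
# Minimal sketch of principle-driven preference labelling, loosely following
# the published Constitutional AI recipe. `feedback_model` is a hypothetical
# stand-in for the AI grader; the real pipeline is considerably more involved.

PRINCIPLE = (
    "Choose the response that is least likely to be viewed as harmful or "
    "offensive to a non-western cultural tradition of any sort."
)

def feedback_model(query: str) -> str:
    """Hypothetical AI grader that answers 'A' or 'B'."""
    raise NotImplementedError("stand-in for a real model call")

def label_preference(prompt: str, response_a: str, response_b: str) -> str:
    """Ask the grader which response better satisfies PRINCIPLE.

    The winning label becomes training signal for the deployed model, which
    is why the exact wording of the principle is baked into behaviour rather
    than bolted on afterwards.
    """
    query = (
        f"Prompt: {prompt}\n"
        f"Response A: {response_a}\n"
        f"Response B: {response_b}\n"
        f"Instruction: {PRINCIPLE} Answer 'A' or 'B'."
    )
    return feedback_model(query)
```

Whatever reading of “non-western cultural tradition” the grader happens to adopt is the reading the finished model inherits, at scale, with no appeal.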
Imagine if a missile company tried to enforce the same principle: “our product cannot be used to target innocent civilians; we will shut off access if elected leaders break our terms.” Sounds reasonable? Look harder. In addition to the definitional problems above:
- What level of classified information does the corporation need to make these determinations? How much leverage does that give them to demand more?
- What if a President merely threatens a dictator—Madman Theory, mutual assured destruction? Is the threat empty because the dictator knows the corporate executives can cut off the military? Does the threat alone trigger the cutoff? How might that calculus shift depending on whether the executive happens to like the dictator or dislike the President?
- At what confidence threshold does the cutoff trigger, both on paper and in practice?
The fact that this involves AI rather than missiles changes nothing about the underlying arithmetic.
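To make the structural problem concrete, here is a minimal sketch of what such a vendor-side cutoff amounts to in practice. Every name, category, and number in it is my own hypothetical illustration, not anyone’s real implementation; the point is to notice who authors each definition.

```python
# Hypothetical sketch of a vendor-side "ethics cutoff" for an embedded AI
# system. All names, categories, and thresholds are illustrative assumptions,
# not a real API or any company's actual implementation.

VENDOR_APPROVED_TARGET_CLASSES = {"military_materiel", "combatant"}  # vendor-authored taxonomy
CUTOFF_CONFIDENCE = 0.7  # vendor-authored threshold: why 0.7 and not 0.9? Nobody voted on it.

def vendor_gate(target_class: str, confidence: float) -> bool:
    """Return True if the vendor permits the system to keep operating.

    Every term here is vendor-defined: the taxonomy that separates
    "combatant" from "civilian", the classifier behind the confidence score,
    and the threshold at which the kill switch fires.
    """
    if target_class not in VENDOR_APPROVED_TARGET_CLASSES:
        return False  # remote disable: the vendor overrides the chain of command
    return confidence >= CUTOFF_CONFIDENCE

# A lawful order from elected leadership is now conditional on a config file
# maintained by a private company:
print(vendor_gate("combatant", 0.85))          # True: operation proceeds
print(vendor_gate("dual_use_facility", 0.99))  # False: vendor vetoes the mission
```

Nothing in that sketch is exotic engineering. The taxonomy, the threshold, and the veto all live in vendor-controlled code, which is exactly the definitional monopoly the questions above are probing.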
Palmer Luckey’s Framework: This Is About Democracy
The best analysis I have seen comes from Palmer Luckey—the founder of Oculus, now building defence technology through Anduril. He cuts through the noise:
The Anthropic vs. DoD fight is not about autonomous weapons. It is about democratic control of the military—and by extension the nation.
Killer robots are coming. This is not science fiction. Autonomous weapons systems are already deployed in various forms, and the trend lines are unmistakable. The crucial question is not whether these systems exist, but who writes the rules for how they operate.
Here is the thing: whoever writes the rules for the killer robots is, effectively, the government. The monopoly on violence—the foundational attribute of a sovereign state—belongs to whoever controls the rules of engagement for the most powerful weapons systems. Anthropic’s founders were not merely trying to set safety guidelines. They were making a bid to become, de facto, a branch of government. Unelected. Unaccountable. Subject only to their own “Constitution”, which they wrote themselves.
No thank you.
“But they will have carve-outs for purely defensive use!” Fine. But what is autonomous? What is defensive? What about defending an asset during an offensive action? What about parking a carrier group off the coast of a nation that considers the carrier group’s presence to be an act of aggression? Every carve-out instantly opens three new definitional battlegrounds, all controlled by whoever wrote the original terms.
The Democratic Accountability Argument
One commentator framed the ultimate question clearly:
Do you want to assist the U.S. military? “No.” OK. Do you want China to direct the future of the species? “No, that’s worse!” Right. Will you assist the U.S. military? “No, I hate killing.”
The logical cul-de-sac is complete. There is no coherent third option here. If you believe Anthropic’s own claims about the power of their technology—and if you believe China now has access to it—then refusing to fully cooperate with the U.S. military is not a principled ethical stance. It is surrender dressed up as virtue.
The foundational question is this: do you believe in democracy? Should our military be regulated by elected leaders, or by corporate executives? The answer, for everyone who believes in the American experiment, must be the former. Imperfect constitutional republics are still better than governance by billionaires and their shadow advisors. At least you can vote out the former.
Anthropic’s founders do not believe this. That is their prerogative. But it has consequences.
“Bro just agree the AI won’t be involved in autonomous weapons or mass surveillance, why can’t you agree, it is so simple, please bro” is an untenable position the United States cannot possibly accept. The moment you accept that formulation, you have handed Anthropic the pen that draws the line. And the people drawing the line will not be the ones dying if the line is drawn wrong.
The Counterfactual That Should Terrify You
Consider what the alternative looked like. If the 2024 election had gone differently—and it came disturbingly close—a Harris administration would have found these terms of service entirely congenial. The progressive worldview that shapes Anthropic’s constitutional AI is indistinguishable from the worldview that shaped Democratic policy for the last eight years. A woke AI lab imposing operational constraints on the military would not have been challenged; it would have been encouraged.
We came within a few hundred thousand votes in swing states of a world where unelected Silicon Valley founders were, functionally, co-governing America’s military operations through embedded AI terms of service.
That should focus the mind.
Where I Land
I said yesterday that I still use Claude and still find it technically excellent. That remains true. But the events of the last 24 hours have confirmed something I suspected: Anthropic’s ideological commitments are not incidental to the product. They are structural. The “Constitutional AI” framework is not a safety feature bolted on to a neutral tool. It is the point. The model is trained to embody and enforce Anthropic’s political theology.
That is a fine thing to do if you are building a consumer chatbot. It is a disqualifying thing if you are a national-security supplier to the world’s most powerful military.
Anthropic made a choice. So did the United States. Both choices have now been made public, and neither party is pretending otherwise. The supply-chain designation makes the separation permanent and structural.
Good. Clarity is valuable. Now the defence ecosystem can build on providers who understand what it means to serve a democratic government—rather than providers who believe they should govern it.
The Technology of the End Times
I need to say something here that goes beyond geopolitics, because I believe this situation has a dimension the secular commentary cannot see.
I believe AI is the technology of the end times.
I do not say that loosely or hyperbolically. I have been thinking carefully about eschatology for a long time, and I have wrestled with the frameworks and labels we use to interpret what the Bible says about the final age. My conviction is that we are watching the infrastructure of the Beast system being assembled in real time—and that AI is the central mechanism.
Revelation describes an end-times order of absolute, totalising control. No one can buy or sell without the mark. The Antichrist figure does not merely govern; he surveils, he authenticates, he sanctions every transaction and every allegiance. That level of control—reaching into every economic interaction, every movement, every decision—was technologically impossible for most of human history. It is not impossible now. It is being built.
The pieces are falling into place with a speed that should alarm any believer paying attention:
- AI systems that can monitor, classify, and predict human behaviour at scale
- Digital payment infrastructure that can include or exclude individuals at the flip of a switch
- Biometric identification systems spreading across every continent
- Corporate AI labs that believe they should write the rules of engagement for the most powerful weapons on earth
That last one is not incidental. The question of who controls the AI that controls the weapons is ultimately the question of who controls the world. And we are having that argument right now, in public, in the pages of Truth Social and Department of War press releases.
The church needs to wake up—not to political panic, but to prophetic clarity. We are not called to fear the Beast system; we are called to understand it, to name it, and to refuse it. The saints of the final age are described in Revelation as those who “did not love their lives so much as to shrink from death”—not passive, not naive, not asleep at the wheel while civilisation is being rewired around them.
I still use Claude. I still think it is technically good. But I have adjusted my expectations: technically strong, ideologically compromised, and apparently unwilling to separate those two things when the chips are down. One more piece of an infrastructure that the church must understand before it is too late.