בשם יהושוע ✦ Joseph Bae

🤖 My Anthropic Rollercoaster

I went from dismissing Anthropic as a woke EA vanity project to placing them second only to xAI. And then Trump banned them from every federal agency in America overnight. That is quite a trajectory for a single company to trace in the span of a few months.

Phase One: The Dismissal

Let me be honest about where I started. When Anthropic first crossed my radar, I filed them under “companies I don’t need to take seriously.” The effective altruism branding was a red flag—a secular religion that dresses up utilitarian calculation in the language of moral urgency while its adherents quietly accumulate influence and capital. The founders came out of OpenAI with a lot of philosophical noise about “AI safety” and “responsible scaling,” which to my ear sounded like Silicon Valley progressivism with extra steps.

My read was that they would end up a distant fifth behind xAI, Google, OpenAI, and Meta. A niche player for academics and NGOs. I moved on.

Phase Two: I Had to Eat My Words

Then Claude showed up in my actual workflow—and I had to pay attention.

Claude 4.6 Sonnet and Opus genuinely impressed me. Not in a "nice demo" way, but in an "I am using this for real work and it is consistently good" way. Their strategic positioning also started to make more sense: focused primarily on coding and enterprise use cases, not chasing image generation or AGI hype. It was a disciplined product strategy, and it was paying off.

At one point I found myself paying for exactly two AI subscriptions: Claude and Grok. That is the most selective I have ever been, and Anthropic had earned one of those two slots. Their speed of execution matched it: CoWork, memory features, steady model improvements. I placed them firmly in second place behind xAI.

I was genuinely warming to them.

Phase Three: The Fall

Then came the news last night.

Trump posted on Truth Social:

“THE UNITED STATES OF AMERICA WILL NEVER ALLOW A RADICAL LEFT, WOKE COMPANY TO DICTATE HOW OUR GREAT MILITARY FIGHTS AND WINS WARS! That decision belongs to YOUR COMMANDER-IN-CHIEF, and the tremendous leaders I appoint to run our Military. The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War, and force them to obey their Terms of Service instead of our Constitution. Their selfishness is putting AMERICAN LIVES at risk, our Troops in danger, and our National Security in JEOPARDY. Therefore, I am directing EVERY Federal Agency in the United States Government to IMMEDIATELY CEASE all use of Anthropic’s technology.”

And Secretary Hegseth followed with his own statement:

“Anthropic delivered a master class in arrogance and betrayal as well as a textbook case of how not to do business with the United States Government or the Pentagon. Our position has never wavered and will never waver: the Department of War must have full, unrestricted access to Anthropic’s models for every LAWFUL purpose in defense of the Republic. Instead, Anthropic and its CEO Dario Amodei have chosen duplicity. Cloaked in the sanctimonious rhetoric of ‘effective altruism,’ they have attempted to strong-arm the United States military into submission—a cowardly act of corporate virtue-signaling that places Silicon Valley ideology above American lives.”

That is devastating. Not just politically—strategically, commercially, reputationally.

The “Stolen Land” Problem

What makes this more than a contract dispute is the ideological layer underneath it. The allegation is not merely that Anthropic tried to impose its Terms of Service on the Pentagon—it is that Claude was trained with embedded political assumptions. One of the claims circulating is that the model was trained to treat America as a nation built on “stolen land.”

Think carefully about what that means if true.

If you encode into an AI system the belief that every American is, in some foundational moral sense, a criminal occupying territory they have no right to—you have not built a helpful assistant. You have built a philosophical adversary. The “ethics” of such a system would be structurally hostile to the civilization it serves. That is not a theoretical concern. That is a design feature with consequences.

Effective altruism claims to calculate moral goodness with rigorous neutrality, but it launders specific ideological priors through the language of reason and charity—exactly the syndrome Solzhenitsyn diagnosed in Western elites half a century before Silicon Valley existed. The “stolen land” premise is not a conclusion anyone arrived at through neutral ethical calculation. It is a political position—one that millions of Americans, including most of the people who would actually deploy this AI in defense contexts, find not just wrong but offensive.

When your model’s ethics are downstream of progressive academic ideology, you do not get to act surprised when the government calls it incompatible with national defense.

What I Think Now

I still think the Claude models are technically excellent. That has not changed. But technical excellence bundled with adversarial ideology is not a product I can fully trust—and apparently neither can the U.S. federal government.

Anthropic built something genuinely impressive and then undermined it by trying to impose their worldview on institutions that were paying them for capability, not catechism. There is a six-month phase-out period, a threat of civil and criminal consequences if Anthropic is uncooperative, and a permanent relationship rupture with the Pentagon.

For a company that positioned itself as the “responsible” AI lab—the adults in the room—this is an extraordinary self-own. You do not get to be the responsible choice while strong-arming the military with your terms of service.

Peak Clown World

I have said this before and I will say it again: we are living through a moment where the people who lecture most loudly about safety and ethics tend to be the ones creating the most dangerous ideological situations. Effective altruism is exhibit A: a movement that talks endlessly about existential risk while training AI on premises that existentially delegitimize the civilization it operates in.

It is clown world. The most “safety-conscious” AI lab just got banned from every federal agency in the United States because it tried to override the Commander-in-Chief.

I still use Claude. I still think it is good. But I have adjusted my expectations: technically strong, ideologically compromised, and apparently unwilling to separate those two things when the chips are down.

That is a real shame—and an entirely avoidable one.


Update, 1 March 2026: The situation escalated considerably. Hegseth formally designated Anthropic a Supply-Chain Risk to National Security. I wrote a follow-up covering the democratic control argument, the superweapon paradox, and why I believe AI is the technology of the end times. Read Part Two →
