When the Pentagon Rewrites the Terms of Liberty
United States – February 28, 2026 – The federal government is squeezing an American AI vendor in a dispute over surveillance and autonomous weapons, and the bill comes due in ci…
I was raised to trust the dusty rituals: the library checkout stamp, the courthouse clock, the town hall microphone that squeals like it is allergic to accountability. Those small civic inconveniences are supposed to mean something. They are the guardrails that keep power from driving straight through your living room.
So when the federal government starts yanking an American AI company out of the procurement bloodstream because it would not relax two specific guardrails, my old library-card patriotism starts thumbing the margins like a suspicious editor.
What happened
On February 27, President Trump ordered federal agencies to stop using Anthropic technology, according to reporting by the Associated Press and others. Defense Secretary Pete Hegseth also moved to label Anthropic a national security supply-chain risk, a step that would shut the company out of a big chunk of the defense ecosystem. Anthropic, maker of the Claude AI model, said it would challenge the government action in court.
This is not just a Silicon Valley spat dressed up in camo. The dispute is blunt: Anthropic has said it will not allow its systems to be used for mass domestic surveillance or fully autonomous weapons. The Pentagon wanted broader latitude for lawful military use, and the negotiation turned into something closer to a public shakedown. The Associated Press also reported that the Pentagon had threatened to invoke tools like the Defense Production Act during the standoff, a law built for national emergencies, not for rewriting a contractor's safety terms like a late-night click-through agreement.
Meanwhile, the General Services Administration did not wait around for nuance. In a February 27 public statement, GSA said it is removing Anthropic from USAi.gov and from its Multiple Award Schedule, the procurement highway used across government. USAi.gov, GSA notes, is a federal generative AI evaluation platform launched in August 2025. When the purchasing office starts pulling levers, it is not a debate club. It is a choke point.
The Orwell check: when labels do the work
“Supply-chain risk” is usually the kind of phrase reserved for adversarial control or dangerous dependence. Here, it is being pointed at a U.S. company amid a policy disagreement about how far government should be allowed to push AI into surveillance and weapon autonomy.
That is the Orwell check: is scary language being used to turn a disagreement into a disqualification? When a label is broad enough, you can pour it on anything and call the puddle a threat.
The Paine test: liberty or leverage?
Here is the Paine test: does the action expand liberty or concentrate power?
- If the government can pressure an AI company to remove contractual limits on domestic surveillance, that is not expanding liberty. That is consolidating the machinery of watching.
- If the government can effectively blacklist a vendor because it will not green-light fully autonomous weapons, that is not democratic control. That is executive muscle memory: when you cannot win the argument in public, you win it at procurement.
The tradeoff: security needs tools, democracy needs receipts
The tradeoff is real. The military needs advanced software. There are times when the state can compel production. But the tradeoff is supposed to come with receipts: statutory limits, oversight, transparent standards, and an appeals process that is not just a press release and a blacklist.
If the government believes this is truly a national security threat, show enough work for Congress, courts, and the public to separate substance from theater. And if the real complaint is that a vendor will not enable mass domestic surveillance, then say that plainly and debate it like a republic, not like a midnight committee meeting where the minutes are shredded.
Because once the government learns it can win policy arguments by pushing a vendor off the schedule, how long before the same trick shows up elsewhere, with the same three words stamped on the folder: national security, trust us?