
Trump Orders Halt To 'Woke' Anthropic Technology Across All Federal Agencies

by Tyler Durden
Friday, Feb 27, 2026 - 09:03 PM

Update (1603ET): An hour before a 5PM ET deadline, President Donald Trump directed "EVERY Federal Agency in the United States Government to IMMEDIATELY CEASE all use of Anthropic's technology," posting on Truth Social that "We don't need it, we don't want it, and we will not do business with them again."

According to Trump:

  • There will be a six-month phase-out period for agencies already using Anthropic products
  • If Anthropic isn't 'helpful' during the transition, Trump will pursue civil and criminal charges

Defense Secretary Pete Hegseth had given Anthropic until 5 p.m. Friday to allow the Pentagon to use its Claude chatbot without restrictions, within legal limits.

 

Does this mean they'll be designated a supply chain risk? 

Trump’s decision will send a shockwave through Silicon Valley, where tech firms have invested billions of dollars in artificial intelligence and are weighing how best to handle federal government contracting. The move takes aim at a company that’s leading the development of AI, a centerpiece of Trump’s economic agenda.

The stakes are huge for Anthropic, which is valued at $380 billion and has agreed to do about $200 million in work with the military. It’s also a risk for the government given that Anthropic was until recently the only AI system that could operate in the Pentagon’s classified cloud. Its Claude Gov tool is a favored option among defense personnel for its ease of use. -Bloomberg

If so, some ideas can be found here...

*  *  *

As today's 5PM ET deadline looms for Anthropic to militarize Claude AI for the Pentagon, OpenAI CEO Sam Altman waded into the fray - telling staff on Thursday night that his company is working with the Department of War to see if their models can be used in classified settings in a way that maintains the same safety guardrails that are about to get Anthropic booted from the Pentagon. 

"We are going to see if there is a deal with the DoW that allows our models to be deployed in classified environments and that fits with our principles," Altman wrote in a Thursday night note to staff, reported by the Wall Street Journal. "We would ask for the contract to cover any use except those which are unlawful or unsuited to cloud deployments, such as domestic surveillance and autonomous offensive weapons."

Altman says he wants to "try to help de-escalate things," aka - they want to be the ones deeply embedded in the Pentagon's most sensitive systems. 

Red Lines

Altman says OpenAI understands the government's position that a private company should not have control over significant national-security issues [laughs in Palantir], but says they have the same issues as Anthropic when it comes to use cases. 

"We have long believed that AI should not be used for mass surveillance or autonomous lethal weapons, and that humans should remain in the loop for high-stakes automated decisions. These are our main red lines," Altman wrote. 

"We believe this dispute isn’t about how AI will be used, but about control. We believe that a private US company cannot be more powerful than the democratically-elected US government, although companies can have lots of input and influence. Democracy is messy, but we are committed to it."

Altman's comments come as things aren't looking so good for Anthropic. Earlier Thursday evening, CEO Dario Amodei announced that the company had rejected the Department of War's demand that it make its technology available for "all lawful uses" - meaning Anthropic insists on barring mass domestic surveillance and autonomous weapons.

Yet the Pentagon's Emil Michael - who, as an Uber executive, allegedly wanted to spend a million dollars to surveil and dig up dirt on journalists who covered Uber - noted that mass surveillance is already illegal under the Fourth Amendment, and insists that "Anthropic is lying" because "The @DeptofWar doesn’t do mass surveillance as that is already illegal. What we are talking about is allowing our warfighters to use AI without having to call @DarioAmodei for permission to shoot down an enemy drone swarm that would kill Americans."

Grok On Deck?

The logical move for the Pentagon - after being able to claim it gave Anthropic and OpenAI a fair shake - would be to replace Claude with xAI's Grok.

And so, of course, 'ALARMS ARE BEING RAISED' over the prospect, according to the Wall Street Journal, citing the ever-insightful "people familiar with the matter." 

Officials at multiple federal agencies have raised concerns about the safety and reliability of Elon Musk’s xAI artificial-intelligence tools in recent months, highlighting continuing disagreements within the U.S. government about which AI models to deploy, according to people familiar with the matter. 

The warnings preceded the Pentagon’s decision this week to put xAI at the center of some of the nation’s most sensitive and secretive operations by agreeing to allow its chatbot Grok to be used in classified settings.

...

Senior U.S. officials including at the White House view Anthropic’s outspoken stances on safety and ties to big Democratic donors as potentially making the company too “woke” to be a reliable provider, people familiar with the matter said. The looser controls on Grok, and Musk’s absolutist stance on free speech, have made it a more attractive choice to the Pentagon.

...

Ed Forst, the top official at the General Services Administration, a procurement arm of the federal government, in recent months sounded an alarm with White House officials about potential safety issues with Grok, people familiar with the matter said. Other GSA officials under him had also raised safety concerns about Grok, which they viewed as sycophantic and too susceptible to manipulation or corruption by faulty or biased data—creating a potential system risk. 

Also kinda funny is that the General Services Administration was severely diminished by serious DOGE cuts to 'waste, fraud and abuse,' but we're sure this isn't a case of sour grapes. 

Will 'woke' Anthropic & OpenAI win, or will Grok?

Either way, we all lose. Palantir is already balls deep across critical systems, and US adversaries are undoubtedly leveraging cutting edge AI within their own defense departments & surveilling whoever the fuck they want. 
