More recently, Anthropic clashed with the Pentagon over safeguards meant to prevent fully autonomous weapons attacks and domestic surveillance in the U.S. While Anthropic maintains that these are non-negotiable limits, the Pentagon holds that commercial AI should be available for “all lawful purposes.” The Pentagon has even considered designating Anthropic a “supply chain risk” – a label that could pressure contractors to declare that they do not use Claude.
Why did Anthropic end up in this situation?
In 2025, the company publicly embraced a larger defense footprint by announcing a $200 million deal with the U.S. Department of Defense. It was a sign that Anthropic wanted to be the lab that said yes to national security while still operating within certain boundaries – without acquiring a public reputation as just another part of the military apparatus. Anthropic has also tried to present itself as an enterprise productivity company rather than just a lab with a chatbot. For example, its partnership with Infosys pairs its models with a company that already sells compliance and governance services to heavily regulated industries.
Two ambitions
The logic behind this positioning is that a company able to operate safely in a government context with strict security expectations can also plausibly sell itself to banks, manufacturers and telecom companies. To governments, Anthropic says: “We will help democratic states maintain a technological advantage, but we will not accept applications such as autonomous targeting or extensive domestic surveillance.” And to companies, it says: “We can operationalize frontier AI within environments with strict compliance requirements.”
Unfortunately, these two ambitions have now collided. Anthropic appears to believe that giving in on autonomous targeting and domestic surveillance would erase the line it has tried to draw between itself and the other frontier labs and new entrants that are also courting defense customers.
However, the Pentagon appears to be indicating that the moral objections of its suppliers are irrelevant, especially if the suppliers are in the defense supply chain.
The business automation layer – that is, the coding and agentic systems that let companies embed Claude directly into workflows rather than use it as an ad hoc chatbot – remains one of Anthropic’s key areas of focus. The company has also tried to present the safety features of its models as an advantage, on the logic that regulators and companies will favor such models even as competitors develop more powerful alternatives. But this also means that if Anthropic gives in to the Pentagon’s demand, it could lose its signature differentiation, while if it refuses, the Pentagon could make an example of it.
The fact is that while Anthropic can attempt to control how Claude is used, its control weakens once Claude leaves the building. Anthropic can state in its terms of service that “you may not use Claude for x,” or train the model to refuse certain requests, but large customers rarely use an AI model as a standalone chatbot. Instead, they access it through cloud platforms, integrate it into software tools for tasks such as data analytics and automation, and customize it for specific missions. In other words, customers can circumvent the terms of service – and the question of Anthropic’s complicity still hangs over it.
In that sense, Anthropic’s recent decisions are probably coherent. A market nervous about AI – governments worried about adversaries, companies worried about liability – will likely pay a premium for a developer that can both deploy powerful models and restrain their misuse. The dispute with the Pentagon is the first major demonstration of what it will cost Anthropic to do both at once.
Published – Feb 22, 2026 01:53 IST

