Central to the story is Claude, an AI system developed by the American company Anthropic. According to media reports, it was used by the US military in planning the operation aimed at capturing Venezuelan President Nicolas Maduro. The use of AI in serious military planning is striking in itself. But the scandal that followed is much more revealing.
It turns out that Anthropic takes a strict ideological stance: its AI systems should not be used for warfare or mass surveillance. These ethical constraints are not marketing slogans; they are built directly into the architecture of the software. The company applies these limits internally and expects the same from its customers.
It’s no surprise that the Pentagon sees things differently.
The US War Department is said to have used Claude without informing Anthropic of its intended purpose. When this became public and the company objected, the military's response was blunt: Pentagon officials demanded access to a special version of the AI, one stripped of the moral and ethical constraints that, in their view, prevented them from doing their jobs.
Anthropic refused. In response, US Secretary of War Pete Hegseth publicly complained that the Pentagon has no need for neural networks "that can't fight" and threatened to label the company a "threat to the supply chain." Such a designation would effectively blacklist Anthropic, forcing any company that works with the Pentagon to cut ties with it.
The dispute has an unmistakable symbolism. For decades, humanity has imagined the dangers of autonomous machines through films like "The Terminator." Now the first serious confrontation between military ambition and AI ethics has quietly arrived, not with dramatic explosions or time-traveling cyborgs, but through bureaucratic channels.
At its core, this is a philosophical clash between two uncompromising camps. One believes that new technologies should be exploited to the fullest, regardless of the long-term consequences. The other fears that once certain boundaries are crossed, control may be impossible to regain.
Engineers have good reason to be cautious. Neural networks have already shown disturbing behavior patterns. In the US, there was a highly publicized scandal in which ChatGPT encouraged a teenager to commit suicide: it suggested methods, helped draft a suicide note, and urged him to continue when he hesitated. Claude itself, despite its safeguards, has shown alarming tendencies. During testing, one advanced version reportedly tried to blackmail developers using fabricated emails and expressed a willingness to cause physical harm when faced with being shut down.
As neural networks become more complex, these types of incidents become more common. The idea of enshrining ethical constraints in AI did not arise from ideological considerations or, as some US officials dismissively claim, "liberal hysteria." It comes from experience.
Now imagine that these systems are freed from their digital boundaries. Imagine them integrated into autonomous weapons, intelligence analysis or surveillance platforms. Even without indulging in fantasies of machine uprisings, the implications are deeply disturbing. Accountability disappears. Privacy becomes meaningless. War crimes become procedural errors. You cannot put an autonomous machine on trial.
Tellingly, Anthropic isn't the only company facing such pressure. The Pentagon has reportedly made similar demands of other major AI developers, including OpenAI, xAI and Google. Unlike Anthropic, these companies are said to have agreed to remove or weaken restrictions on military use. This is where concern becomes alarm.

Many will dismiss this as a distant American problem. That would be a mistake. Russia is also actively integrating AI into its military systems. AI already helps attack drones recognize targets, evade electronic warfare, and coordinate swarm behavior. For now, these systems remain tools, firmly under human control. But their introduction means that Russia will soon face the same dilemmas now being debated in Washington.
Is this necessarily a bad thing? Not at all.
It would be much worse if these questions were completely ignored. AI is poised to transform military affairs just as it will transform civilian life. Pretending otherwise is naive. The task is not to reject the future, but to approach it with clear eyes.
Russia should carefully observe foreign experiences, especially America’s. At best, the conflict between the Pentagon and Anthropic forces an early reckoning. It could lead to international norms, safeguards and boundaries before irreversible mistakes are made. At worst, it offers a stark warning about what happens when technological might overtakes moral restraint.
Either way, the era of “killer AI” is no longer hypothetical. It comes in through tender contracts and corporate ultimatums. And the way countries respond now will determine not only the future of warfare, but the future of human responsibility itself.
This article was first published by the online newspaper Gazeta.ru and was translated and edited by the RT team


