The Pentagon wants fewer AI limits. Anthropic doesn't. Here's why it matters

Dario Amodei, CEO of Anthropic, will go to the Pentagon on Tuesday to meet with Secretary of Defense Pete Hegseth about how the military uses the company's artificial intelligence models. It is likely to be a tense meeting, as Axios first reported.

Contract discussions between the AI startup and the Department of Defense have gone off course in recent weeks, as Anthropic has insisted on certain safeguards for how its technology can be used. While the San Francisco-based company is willing to relax some usage restrictions for the Defense Department, it does not want its models used for at least two specific purposes: spying on Americans or developing autonomous weapons.

Heading into Tuesday’s meeting, the two factions appear to have different views on the progress of contract talks. While an Anthropic spokesperson said in a statement Monday that the company is having “productive, good-faith conversations” with the Pentagon, a Defense Department spokesperson said last week that Anthropic’s relationship with the Pentagon is under review.

“Anthropic knows this is not an introductory meeting,” a senior defense official told Axios. “This is not a friendly meeting.”

THE ROLE OF ANTHROPIC IN NATIONAL SECURITY

Anthropic is currently the only AI company whose models are available on the military's classified networks, and it was among a number of companies awarded a $200 million contract with the Department of Defense in July to "advance U.S. national security."

The company has repeatedly reiterated its commitment to supporting national security, including on Monday. In June, it announced Claude Gov, a range of models built exclusively for U.S. national security customers.

And yet Amodei has been vocal about balancing the opportunities AI offers against the concerns it brings. In a lengthy essay published last month, the Anthropic co-founder warned: "Humanity is on the cusp of almost unimaginable power, and it is very unclear whether our social, political and technological systems have the maturity to wield that power."

At the India AI Impact Summit last week, Amodei said he is concerned about the autonomous behavior of AI systems and the potential for misuse of AI by individuals and governments.

THE MADURO FACTOR

Another factor that has strained the relationship between Anthropic and the Pentagon came to light last week: Claude was used earlier this year in the US military's operation to capture former Venezuelan President Nicolás Maduro, The Wall Street Journal reported. That mission appears to conflict with Anthropic's usage guidelines, which, among other things, prohibit Claude from being used to incite violence or for criminal justice and surveillance purposes.

The company's usage policy, last updated in September, is intended to "find an optimal balance between enabling beneficial use and limiting potential harm."

But Anthropic also notes that the company “may enter into contracts with certain government customers that align use restrictions with that customer’s public mission and legal authorities if, in Anthropic’s judgment, the contractual use restrictions and applicable safeguards are sufficient to mitigate the potential harm.”

POKING THE BEAR

Anthropic has tried to differentiate itself from the rest of the AI developer universe with a "safety first" approach, even taking a swipe at OpenAI's recent decision to incorporate ads into the ChatGPT platform in a Super Bowl ad.

Amodei has at times emerged as something of a contrarian, and by pushing back on unrestricted use of the Claude AI model by the US military, he is poking the bear that is Hegseth.

As Axios reported last week, Hegseth threatened that the Pentagon could label Anthropic a "supply chain risk," which would void its contracts and force other companies working with the Pentagon to declare that they do not use Claude in related workflows.

“Our nation requires that our partners be prepared to help our warfighters win in any battle,” Pentagon chief spokesman Sean Parnell told media last week. “Ultimately, this is about our troops and the safety of the American people.”
