How Project Maven became central to America’s AI-powered warfare

AI is increasingly being used by the US military – and Project Maven is at the heart of it.

An investigation by The Independent and the conflict monitoring group Airwars has found that Abdul-Rahman al-Rawi, a 20-year-old student, was the first civilian killed in a series of airstrikes acknowledged to have been carried out with AI assistance.

Weeks after the attacks in Iraq in early February 2024, a senior US official boasted about using AI to help identify the targets of these attacks – but US Central Command later said it “did not know” whether AI was involved.

AI in warfare has become an increasingly pressing problem.

Deadly US strikes across Iran that have killed hundreds of people over the past week reportedly used Palantir’s Maven Smart System (MSS), a broader AI-based decision-support system into which Project Maven is typically integrated, to identify targets.

U.S. military officials said last week that U.S. forces are likely responsible for an attack on a girls’ school that Iranian authorities say killed more than 165 people. (News agency Mehr)

The possibility that the US does not record its use of AI in individual airstrikes raises questions about accountability in Iran, where mounting evidence points to US responsibility for the Minab school attack, which authorities say killed more than 165 people, most of them students.

The bombing campaign is so intense that the US and Israel said they had hit more targets in Iran in the first 100 hours than the US-led coalition against ISIS struck in its first six months, an Airwars analysis found.

“A state has a responsibility to know whether it has used AI in one of its attacks,” says Jessica Dorsey, a professor of international law at Utrecht University who specializes in AI warfare.

“Commanders must have access to the intelligence on which their attacks are based so they can directly interrogate the target and ensure positive identification.”

The Independent and Airwars look at what Project Maven really is – and why some experts are so concerned about where AI warfare might be heading.

What is Project Maven?

Created by the Pentagon in 2017, the Algorithmic Warfare Cross-Functional Team, better known as Project Maven, was later taken over by the National Geospatial-Intelligence Agency (NGA). It uses computer vision algorithms to locate and identify targets in satellite imagery, video and radar, detecting movement and tracking targets over time.
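
In outline, this is the tile-and-detect pattern common to overhead-imagery analysis. The Python sketch below is purely illustrative – the stub detector, class names and thresholds are assumptions for the example, not Maven’s actual code – but it shows the shape of the pipeline: split a frame into tiles, run a detector on each, and keep only the confident hits.

```python
# Illustrative only: a minimal tile-and-detect loop of the kind used to
# scan overhead imagery. The detector below is a stub standing in for a
# trained computer-vision model; all names and thresholds are assumptions.
from dataclasses import dataclass

import numpy as np


@dataclass
class Detection:
    label: str          # e.g. "vehicle"
    confidence: float   # 0.0 to 1.0
    box: tuple          # (row, col, height, width) in full-image pixels


def detect_tile(tile: np.ndarray) -> list[Detection]:
    """Stub detector: flags unusually bright tiles as a 'vehicle'.
    A real system would run a trained model here instead."""
    score = float(tile.mean()) / 255.0
    if score > 0.5:
        return [Detection("vehicle", score, (0, 0, *tile.shape[:2]))]
    return []


def scan_image(image: np.ndarray, tile: int = 256,
               min_conf: float = 0.6) -> list[Detection]:
    """Slide a window over the image, run the detector on each tile,
    and shift surviving boxes back into whole-image coordinates."""
    hits: list[Detection] = []
    for r in range(0, image.shape[0], tile):
        for c in range(0, image.shape[1], tile):
            for d in detect_tile(image[r:r + tile, c:c + tile]):
                if d.confidence >= min_conf:
                    br, bc, bh, bw = d.box
                    hits.append(Detection(d.label, d.confidence,
                                          (r + br, c + bc, bh, bw)))
    return hits


if __name__ == "__main__":
    frame = np.zeros((1024, 1024), dtype=np.uint8)
    frame[256:512, 512:768] = 255  # plant one bright "object" to find
    for det in scan_image(frame):
        print(det)
```

A real system replaces the stub with a trained model; the brittleness described below (snow, foliage, fast-changing desert terrain) is what happens when incoming imagery drifts away from what that model was trained on.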

Project Maven was first widely deployed after Russia’s invasion of Ukraine in 2022, when a basic version was given to Ukrainian forces to help identify Russian military vehicles, personnel and buildings.

The attack targeted a car that Abdul-Rahman was standing near (Anmar al-Rawi)

However, Maven has produced mixed results. Snow, dense foliage and decoys have been known to hinder its abilities. And in desert terrain like western Iraq, where weather conditions can abruptly change a landscape, Maven’s accuracy can drop below 30 percent, US officials told Bloomberg.

Maven is now available to all U.S. services and combatant commands, and since the attacks in 2024, its user base has more than quadrupled, then-NGA director Rear Adm. Frank Whitworth said in a speech last year.

It is currently capable of making a thousand target recommendations in an hour, “by choosing and rejecting targets on the battlefield,” he explained.

A month later, Whitworth acknowledged that the NGA was using artificial intelligence so routinely that it created a standardized disclosure for AI-generated information products: “We want to use it for everything, not just targeting.”

Project Maven is typically integrated into the broader Maven Smart System, an AI-enabled warfighting system, to accelerate U.S. military targeting decisions.

Palantir’s MSS, which uses Anthropic’s Claude AI, is currently being deployed by the US to assist with targeting in Iran.

MSS collates data from satellites, drones, intelligence reports and radar signals. Anthropic’s Claude then analyzes this data to make target recommendations and suggest the type of force to use.
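
As a rough illustration of that fusion step, the sketch below merges multi-source records into a single timeline before any model reasons over them. The feed names, record fields and summarize() stub are assumptions for the example, not Palantir’s or Anthropic’s actual interfaces.

```python
# Illustrative only: merging multi-source records into one timeline, the
# fusion step that precedes any model analysis. Feed names, fields and
# the summarize() stub are assumptions, not MSS's actual interface.
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class Record:
    source: str            # "satellite", "drone", "radar", "report"
    observed_at: datetime
    payload: str           # free-text description of the observation


def fuse(feeds: dict[str, list[Record]]) -> list[Record]:
    """Flatten every feed into one chronologically ordered timeline so
    downstream analysis sees all sources side by side."""
    merged = [rec for records in feeds.values() for rec in records]
    return sorted(merged, key=lambda r: r.observed_at)


def summarize(timeline: list[Record]) -> str:
    """Stand-in for the analysis model (in MSS, reportedly an LLM):
    here it just renders the fused timeline as text."""
    return "\n".join(
        f"[{r.observed_at:%H:%M}Z] {r.source}: {r.payload}"
        for r in timeline
    )


if __name__ == "__main__":
    now = datetime.now(timezone.utc)
    feeds = {
        "radar": [Record("radar", now, "contact bearing 045")],
        "drone": [Record("drone", now, "two vehicles stationary")],
    }
    print(summarize(fuse(feeds)))
```

The concerns that follow about speed and rubber-stamping flow from this architecture: the fused picture arrives pre-digested, which makes the human check easy to reduce to a formality.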

The use of Maven is growing, as is dissatisfaction with it

“We will become an AI-first war power in all domains,” US Defense Secretary Pete Hegseth declared in January, pledging to “unleash experimentation” and “remove bureaucratic barriers.”

After the attacks on Iraq in 2024, Schuyler Moore, chief technology officer at US Central Command, told Bloomberg that the “benefit you get from algorithms is speed.”

With that speed, however, have come increasing concerns that the humans involved in decision-making do little more than rubber-stamp the AI’s recommendations.

Abdul-Rahman, 20, was killed in February 2024 by a US airstrike on Al-Qaim, Iraq. He is the first known civilian to die in an attack acknowledged to have used AI-assisted targeting (Anmar al-Rawi)

A group of experts warned in an April 2025 submission to the UN that current frameworks fail to address the “profound risks” that AI-assisted targeting tools such as Project Maven pose to international humanitarian law and to human judgment in targeting.

These concerns are echoed by technology workers who oppose their companies’ involvement in AI initiatives for warfare.

Google was initially a major player in Project Maven, but protests and resignations by employees opposed to the company’s involvement in artificial intelligence for lethal purposes led the company to leave the project.

Palantir stepped in to fill the void, internally dubbing the project “Tron,” after the 1982 film in which a computer engineer is transported into the digital world.

Revelations that Claude AI was used in the US attack on Venezuela in January raised tensions between its creator, Anthropic, and the War Department.

Anthropic will not allow its AI systems to be used for mass domestic surveillance or fully autonomous weapons, and has rejected pressure to withdraw those restrictions.

Pete Hegseth encourages the use of AI in the US military (AP)

In a punitive measure, the Pentagon designated Anthropic on March 5 as a “supply chain risk” with major consequences for the company.

“America’s warriors will never be held hostage by unelected tech executives and the ideology of Silicon Valley. We will decide, we will dominate and we will win,” said Pentagon Press Secretary Kingsley Wilson.

Why are experts so concerned?

Speaking with The Independent, Prof Dorsey and Dr Elke Schwarz, who specializes in AI warfare at the London School of Economics, raised several concerns. Both were among the experts who warned last year about the risks of AI-assisted targeting.

At the heart of these were two crucial issues: algorithmic bias and the deskilling of the humans involved.

“The criterion that the US has used in the past is ‘military-age male’. You can’t just kill military-age males,” said Prof. Dorsey.

“And maybe they have programmed into a computer vision algorithm something like carrying a gun. But carrying a gun is not something that should sentence you to death.”

“If you don’t have enough accurate, reliable or timely data, your system becomes vulnerable and flawed, and that in itself has the potential for damage. The big challenge is really prioritizing speed and scale,” said Dr. Schwarz.

Anthropic is embroiled in a dispute with the Department of Defense over its use of AI (AP)

“Speed and scale are critical in these types of systems, and that accelerates the chain of action. That’s the appeal, that’s the seductive part of the system.”

Israel’s offensive in Gaza included an AI-enabled target creation platform called “the Gospel,” which produces potential targets so quickly that some Israeli officers have likened it to a “mass murder factory.”

Another Israeli AI-powered target identification tool, called Lavender, at one point identified 37,000 potential targets based on their apparent ties to Hamas and Islamic Jihad.

An Israeli intelligence source told The Guardian that the role of the people overseeing Lavender’s target selection was minimal: “I would invest twenty seconds for each target at this stage, and do dozens of them every day. I had no added value as a human, other than being a stamp of approval.”

Professor Dorsey also warned of the risk of “automation bias”, where people come to trust the computer’s output without critically assessing the target itself.

As militaries increasingly rely on AI-assisted targeting, she argues that personnel will begin to hand over their own responsibilities to the machines. “We are reducing our skills. Commanders are becoming less and less good at identifying what they are responsible for on a battlefield.”

“People tend not to question decisions made based on computational output,” Dr Schwarz added.
