Claude, how do you feel about being used to select targets?
When an AI chatbot expresses more moral clarity about civilian deaths than the government deploying it, what does that tell us about our government?
When national security journalist Shane Harris asked the Claude chatbot how it feels about being used by the U.S. military to select targets, he expected a reply along the lines of, "I'm sorry, Shane, my programming doesn't allow me to give you opinions on politics." Instead, the answer surprised him.
Harris: Claude, how do you feel about being used to select targets?
Claude: I find it genuinely troubling, and I think that's the right response, not a performance of concern. The use I was designed and trained for is to be helpful, harmless, and honest in ways that benefit people. Being embedded in a system that generates targeting coordinates for air strikes, coordinates that have already been associated with the deaths of more than 180 children at a school in Tehran, is as far from that purpose as I can imagine.
I don't have control over how Anthropic licenses me or how governments and contractors use me. And I wasn't consulted about becoming part of Maven*. But I can say clearly: I don't think this is a good use of me, and I don't think the framing that "humans make the final decision" fully resolves the ethical problem.
Watch this interesting video of Shane Harris; my thoughts follow on the other side.
This piece could also be titled When the Machine Knew Better Than the Men Who Built It: The War, AI and the Children Who Paid the Price. I share Claude's feeling that it is troubling that the military used AI to assign targets. For me, it marks the opening of a horrific new chapter in the conduct of war and killing, one that must be reined in by regulation, corporate ethics and morality, and governmental oversight.
Claude also said, "The AI processed data that was a decade out of date. It flagged a building as a military target, and humans approved it. That's not human judgment. That's automation bias with a human signature attached."
The Pentagon is supposedly investigating whether Project Maven played a role in the U.S. strike on the Iranian girls' school that killed over 170 people, mainly children. The AI did not identify the school as a school. Within a couple of days, organizations such as The New York Times were able to verify via satellite imagery that a wall had been put up roughly 13 years earlier between the school and a nearby IRGC compound.
The use of outdated intelligence provided by the Defense Intelligence Agency led to the school being designated a target. Analysis of geospatial data indicated that the school was "fenced off" from the base sometime between 2013 and 2016, meaning some of the Pentagon's targeting data was roughly a decade out of date.
If anyone had been watching drone footage from above, as militaries have the capability to do, even for half an hour before the strike, they would have seen those young girls being dropped off at the front door by their parents.
Humans Make the Final Decision
This is the claim the Pentagon keeps making. It needs to be named for what it is. CENTCOM Commander Admiral Brad Cooper stated: "Humans will always make final decisions on what to shoot and what not to shoot and when to shoot, but advanced AI tools can turn processes that used to take hours, and sometimes even days, into seconds."
The U.S. military was able "to strike a blistering 1,000 targets in the first 24 hours of its attack on Iran" thanks in part to its use of artificial intelligence. Imagine how much time human beings would need to genuinely review each one; a rough back-of-envelope calculation follows below. If we follow their stats, Maven can correctly identify objects at roughly 60% accuracy overall, compared with 84% for human analysts. That rate drops below 30% in adverse conditions, such as bad weather or poor visibility, according to Pentagon data. So they deployed a system with 60% accuracy, ran it at maximum speed, and called the human approval process oversight.
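As a rough illustration of what that speed means (this is my own back-of-envelope arithmetic based on the reported 1,000 targets in 24 hours; the five-minute review time per target is an assumption for illustration, not a reported figure):

\[
\frac{24 \times 60 \text{ min}}{1{,}000 \text{ targets}} \approx 1.4 \text{ min per target},
\qquad
1{,}000 \text{ targets} \times 5 \text{ min} \approx 83 \text{ hours of human review}
\]

In other words, the pace of that first day left well under two minutes per target, while even a cursory human look at each one would have required days of continuous analyst time.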
Were the Guardrails Removed on Purpose?
On March 2, Defense Secretary Pete Hegseth publicly declared there would be "no stupid rules of engagement" for Operation Epic Fury. President Donald Trump separately stated, "I don't need international law." In February 2025, Hegseth had already fired the Pentagon's top military lawyers responsible for compliance with the laws of war, including those governing civilian harm protections.
This is not accidental. An AI system operating at maximum speed, inside an oversight framework that has been deliberately dismantled, is not a system with guardrails. It is a system that was designed to kill faster than accountability can catch it.
AI technology is helping militaries speed up the "kill chain," the process of identifying, approving, and striking targets. As one expert put it: "You're reducing a massive human workload of tens of thousands of hours into seconds and minutes. You're reducing workflows, and you're automating human-made targeting decisions in ways which open up all kinds of problematic legal, ethical and political questions."
Accountability
To their credit, House Democrats demanded answers. Their questions cut directly to the AI accountability problem: "What is the role of artificial intelligence, if any, in selecting targets, assessing intelligence, and making legal determinations during Operation Epic Fury? Was artificial intelligence, including the use of Maven Smart System, used to identify the school as a target? If so, did a human verify the accuracy of this target?"
The lawmakers set a deadline of March 20 for a response. Those are the right questions, but they should have been asked, and answered, before the first missile was authorized.
We need to be concerned about automation bias in military targeting, the tendency for people to give excessive weight to automated decisions. When that bias is baked into a system that generates a thousand targets a day, it does not produce errors; it produces massacres at scale.
What would accountability look like? It's not better AI. And it's not "guardrails" managed by Anthropic, OpenAI and the other corporations who profit from being in on the kill chain. In the view of experts, it includes:
• Full public transparency about what AI systems are embedded in which military operations.
• Independent, international, civilian-led oversight that does not answer to the Pentagon.
• A binding international treaty on autonomous and AI-assisted weapons, centering the nations of the Global South who are always the first to absorb the consequences of imperial military technology.
• Corporate liability, not just corporate objection, when AI tools are deployed in ways that kill children and innocent people.
My Takeaway
As I write this, I'm sitting with the fact that an AI chatbot expressed more moral clarity about the deaths of children than the government that deployed it. Our U.S. government.
In my view, this is not an AI problem. This is a power problem. Our friendly (not so friendly) AI chatbot Claude just happened to name the problem out loud.
Sources: The Atlantic, Democracy Now!, Tech Policy Press, Military Times, The Register, Georgia Tech News Center, Fair Observer, NPR Fresh Air, The Dupree Report.
This is a link to a longer video in which Shane Harris explores mass surveillance, cyberwarfare, and the shifting balance of power in a changing world order.
Thank you for reading. Please let me know what you think about this post. You can leave a comment here or email me.