
UpInArms

(54,779 posts)
1. more from your link
Sun Mar 8, 2026, 03:42 PM
The debate started as soon as the Washington Post reported that the Pentagon has been using Anthropic’s advanced AI program Claude both to identify bombing targets in Iran and to prioritize them. As anyone who’s used AI or encountered it during online searches knows, AI tools are trained on previously published information, so it makes sense to ask whether Claude — if in fact used here — wrongly believed the elementary school building was part of the naval base, as it apparently had been at one time.

There are huge and justifiable concerns about handing life-or-death decisions to robots, especially ones still experiencing growing pains. In 2024, I wrote a column about Israel’s reported use of AI programs to target its massive bombing of Gaza, which is responsible for many of the 74,000 reported deaths there. But I also argued then that the Israeli program called Lavender “is issuing death warrants for toddlers and their mothers because it reflects the inhumanity that we programmed it with.”


And an archive link (to get past the paywall):

https://archive.ph/5K9hZ#selection-1265.0-1281.335

