The debate started as soon as the Washington Post reported that the Pentagon has been using Anthropic's advanced AI program Claude to both identify and prioritize bombing targets in Iran. As anyone who's used AI, or encountered it during online searches, knows, AI tools are trained on previously published information, so it makes sense to ask whether Claude, if it was in fact used here, wrongly believed the elementary school building was part of the naval base, as it apparently had been at one time.
There are huge and justifiable concerns about handing life-or-death decisions to robots, especially ones still experiencing growing pains. In 2024, I wrote a column about Israel's reported use of AI programs to target its massive bombing of Gaza, which is responsible for many of the 74,000 reported deaths there. But I also argued then that the Israeli program called Lavender was issuing death warrants for toddlers and their mothers because it reflected the inhumanity that we programmed it with.
An archive link (past the paywall):
https://archive.ph/5K9hZ#selection-1265.0-1281.335