General Discussion
What AI says about who owns Tomahawk missiles.
The missile is without question made exclusively by the US. But other countries that have purchased them include the U.K., the Netherlands, Australia, and Japan.
What's missing? No Middle Eastern countries, including Israel.
So, Trump's attempt to make an oily statement that other countries own them is going to fail to remove culpability from the US.
pat_k
(13,214 posts)
The United States is responsible for the destruction of the Shajareh Tayyebeh girls' elementary school in Minab.
It is ABSURD to keep denying this war crime.
And it was not an "accident." Our military had an obligation to know what was there before they destroyed all the structures in and around the IRGC compound.
This article reports:
The United States acknowledges targeting the IRGC naval forces base in Minab on 2/28/2026. The U.S. is operating warships in the Arabian Sea, including the USS Abraham Lincoln aircraft carrier, within range of the school and IRGC naval forces compound.
Israel, which has denied conducting the strike, has focused on areas of Iran closer to Israel and hasn't reported any strikes south of Isfahan, 800 kilometers (500 miles) away.
The NYTimes (link below) compares a satellite image of Minab taken 5/14/2024 with an image taken 3/4/2026. It is clear that structures in the compound AND the school were destroyed.
https://www.nytimes.com/2026/03/05/world/middleeast/iran-school-us-strikes-naval-base.html?unlocked_article_code=1.RlA.2EcF.5qU_8h7jZBgA&smid=url-share
highplainsdem
(61,569 posts)
stories like the one I posted in LBN yesterday. And there are reliable sources of info online about which countries have Tomahawks.
But you asked a hallucinating chatbot. Did you check the hallucinating chatbot's answer against a non-hallucinating source? AI companies advise that. It's standard for them to post notices that their flawed AI makes mistakes, and often the notice includes advice to check what the chatbot told you. They're not doing this out of kindness or concern for the truth. It's a CYA that they hope will keep them from getting sued and shift legal liability to the user.
Google's a bit cagier about that now, since their handy hallucinating AI Overview just says this on the Google results page:
AI responses may include mistakes. Learn more
but when you click on that, it takes you to
https://support.google.com/websearch/answer/14901683
where you'll read
Important: AI responses may include mistakes. Learn about generative AI and its limitations.
and when you click on that, it will take you to
https://support.google.com/websearch/answer/13954172
where you'll find this:
Because generative AI is experimental and a work in progress, it can and will make mistakes:
It may make things up. When generative AI invents an answer, it's called a hallucination. Hallucinations happen because unlike how Google Search gets information from the web, LLMs don't gather information at all. Instead, LLMs predict which words come next based on user inputs.
For example, you might ask, "Who's going to win women's gymnastics at the 2032 Brisbane Summer Olympics?" and get a response, even though the event hasn't happened yet.
It may misunderstand things. Sometimes, generative AI products misinterpret language, which changes the meaning.
For example, you may want to learn more about bats, the animal that lives in caves. If you ask for information about bats, it might tell you about the bats used in baseball, cricket, and softball.
Always evaluate responses
Think critically about the responses you get from generative AI tools. Use Google and other resources to check information that's presented as fact.
If you come across something that isn't right, report it. Many of our generative AI products have reporting tools. Your feedback helps us refine the models to improve generative AI experiences for everyone.
Highlight added.
Were you aware that genAI hallucinates? If you weren't, I hope you'll keep that in mind in the future, and check a reliable source, as Google advises.
If you were already aware genAI hallucinates, did you check a more reliable source? If not, did you just decide anything genAI says is good enough to post on DU, and identifying it as AI relieves you of any responsibility for posting possibly incorrect information?
Baitball Blogger
(52,183 posts)
my question. None did, so I took the AI summary and made the post with the AI source included. What I most wanted to know was whether any Middle Eastern country owned Tomahawks, and it still seems that they don't.
highplainsdem
(61,569 posts)
The answer matches info I looked at, too, but I've seen so many wildly inaccurate responses from AI, and not everyone using AI checks.
Not long ago I'd answered another DUer's question about the name of a science fiction story the OP vaguely remembered. I'd googled it and added links.
Later that day another DUer posted a chatbot's answer that got the title and author right, but the chatbot included what was supposed to be a quote from the story, a quote it had hallucinated with a character not in the story and plot details wrong.
I wish so much that the AI companies hadn't released something this flawed. Now we have all sorts of stuff online written by AI, even medical and scientific papers, filled with hallucinations.
Baitball Blogger
(52,183 posts)
It pays to double-check.
highplainsdem
(61,569 posts)
the last few years, and while some were harmless and funny
Cartoonist/author Martin Rowson of the Guardian asked Google's AI to name his wife. The results were hilarious.
https://www.democraticunderground.com/100221008588
some were potentially very dangerous:
'Unbelievably dangerous': experts sound alarm after ChatGPT Health fails to recognise medical emergencies (The Guardian)
https://www.democraticunderground.com/100221066192
The maddening thing about LLM hallucinations is that the bots can sound so convincing that people tend to trust them, while verifying accuracy and proofreading are time-consuming, and people often use AI precisely to save time. I understand being tempted to save time.
But I don't know if we'll ever be able to clean up all the AI slop that has flooded the internet in just a few years. And the stories I've read about security risks from AI-generated code have me hoping some large company will run into enough trouble from AI code to scare most people away from using it before there are truly catastrophic hacks and failures.