Science Fiction
In reply to the discussion: I remember an SF short story where somebody tried to time travel to the past and ended up way out in space.
highplainsdem (61,086 posts)

knowing for certain, I couldn't rule out the possibility that it was a mistake made by a human on some website, using a quote they'd possibly found in another time travel story but forgotten which story it came from. I would have wanted to make sure Neal Shusterman knew about it, so he could ask to have that mistake corrected on that website.
A lot of people don't realize AI chatbots (I usually call them chatbots when they're providing text answers) can not only fabricate quotes like that, but do it so often that it's necessary to check every single thing they tell you - every little detail. That's time-consuming, so a lot of people never do it at all - or they check one or two things the chatbot said, and if those are OK, assume everything else from the chatbot is also correct, at least for that session.
Which is why we already have error-filled medical and scientific papers appearing in professional journals, polluting our information ecosystem and getting quoted as authoritative by people who assume those chatbot-written papers were written or carefully checked by humans. Someone uses AI to research and write part or all of a paper, then the person who's supposed to review it carefully has another chatbot review all or part of it instead, and then it gets published, at least online, and is presented as authoritative when it can be so wrong it's ridiculous.
If someone uses a chatbot for research, it can provide impressive citations for what it says it found - real names of real experts in real magazines - but the article itself might be imaginary, like any "quotes" from it the chatbot provides. Or the article might be real but the quote fabricated. A chatbot can also completely fabricate summaries, or get them partly right and partly horribly wrong, so it's risky to trust AI summaries.
I posted an OP the other day about software developers using AI for most of their coding now, with about half of the developers surveyed trusting AI so much that they don't bother checking the code. So AI is creating serious security risks in computer code all around the world - and hackers are aware of that.
Plus a lot of people are asking chatbots for medical advice, and not checking what the chatbot tells them. Dr. Oz wants more people using chatbots in place of doctors and nurses, especially in rural areas. Extremely dangerous.
Chatbots don't always fess up to mistakes and fabrications, either. Sometimes they deny them. Other times they might apologize profusely for the mistake and offer what they say is the correct answer, which might again be wrong; then they'll offer another apology and another supposedly correct answer that isn't, and that can go on indefinitely. You can't trust a chatbot to check its own work, or trust another chatbot to check it accurately.
So I keep warning people not to trust AI - and ideally not to use it at all, because people are always tempted to skip checking its results, simply because that can take a lot of time. And chatbots usually sound both helpful and authoritative. Trustworthy.
I've read a lot of social media posts and articles about chatbot-fabricated quotes, but your post with the fake quote is the first one I can recall seeing here on DU that was so glaringly wrong. I wouldn't have known it was wrong, though, if I hadn't already read about that story. The fabricated quote looked plausible, as chatbot answers usually do.
I was wondering if you'd be willing to post about that fabricated quote in General Discussion, and link to this thread, to show people here on DU that it can happen here. Or - if you don't want to - whether you'd mind my posting about it, thanking you for clearing up what happened, as I quote your posts from here.
I have to run some errands today so I might not see your response immediately. But I'll wait to hear back from you before I post anything in GD.
Thanks again!