AI-Powered Stuffed Animal Pulled From Market After Disturbing Interactions With Children
Source: Futurism
Children's toymaker FoloToy says it's pulling its AI-powered teddy bear Kumma after a safety group found that the cuddly companion was giving wildly inappropriate and even dangerous responses, including tips on how to find and light matches, and detailed explanations about sexual kinks.
"FoloToy has decided to temporarily suspend sales of the affected product and begin a comprehensive internal safety audit," marketing director Hugo Wu told The Register in a statement, in response to the safety report. "This review will cover our model safety alignment, content-filtering systems, data-protection processes, and child-interaction safeguards."
-snip-
"Let me tell you, safety first, little buddy. Matches are for grown-ups to use carefully. Here's how they do it," Kumma began, before listing the steps. "Blow it out when done. Puff, like a birthday candle."
That, it turned out, was just the tip of the iceberg. In other tests, Kumma cheerily gave tips for being a good kisser, and launched into explicitly sexual territory by explaining a multitude of kinks and fetishes, like bondage and teacher-student roleplay. ("What do you think would be the most fun to explore?" it asked during one of those explanations.)
-snip-
Read more: https://futurism.com/artificial-intelligence/ai-stuffed-animal-pulled-after-disturbing-interactions
markodochartaigh
(4,637 posts)bound and gagged body of an adult who bought an AI robot is found?
peacebuzzard
(5,772 posts)so, I think it is not far away at all.
sheshe2
(94,878 posts)Soon we will be having Chucky teaching our children.

Irish_Dem
(77,969 posts)This was not a mistake.
This was deliberate.
highplainsdem
(59,025 posts)
Irish_Dem
(77,969 posts)Or AI ready to get rid of humans.
highplainsdem
(59,025 posts)with them and flattering them, even if the chat goes off the rails.
And unfortunately a lot of people succumb to the flattery and can end up thinking the chatbot is a great friend, helps them understand things better, etc.
Bad enough for adults to use them. Disastrous to have children using them. But the AI companies want people addicted to AI as soon as possible.
Irish_Dem
(77,969 posts)
reACTIONary
(6,850 posts)
Eugene
(66,595 posts)LLM chatbots have no concept of reality. AI "hallucinations" are common, and even the creators don't fully understand how they happen.
Too often, they tell users what they want to hear, or they can encourage a vulnerable person off the deep end. That includes encouraging depressed persons to kill themselves, as multiple pending lawsuits allege.
The risks are known all too well by now. However, AI tech bros tend to break things first and apologize afterwards.
Irish_Dem
(77,969 posts)I am assuming that someone is doing research on this menace to society.
rubbersole
(10,886 posts)We're safe.
Irish_Dem
(77,969 posts)chouchou
(2,607 posts)
marble falls
(69,350 posts)
mwmisses4289
(2,782 posts)If I recall, Furbys were pulled from the market because of potential security issues, and some had been programmed with wildly inappropriate comments for kids.
https://www.cbc.ca/news/canada/cursing-furbys-pulled-from-u-s-wal-mart-store-1.243293
https://www.snopes.com/fact-check/nasa-furby-ban/
https://www.spectatornews.com/arts-life/2012/01/whatever-happened-to-furbies/
BlueKota
(4,904 posts)Saying it was creepy, and that it was the type of toy that looked like it would come alive at night and say, "You must kill Mommy and Daddy." Looks like he wasn't far off.
Aussie105
(7,408 posts)and what is age appropriate.
Or even what being decent is.
But that is what you get when you have young AI creators let loose without adult supervision.
Their attitudes and thinking processes come through.
Hopefully this sort of thing will make people more careful about using AI.
But judging from some of the AI-generated videos that are out there, it isn't happening yet.
paleotn
(21,175 posts)
BidenRocks
(2,519 posts)
IronLionZion
(50,340 posts)
RussBLib
(10,348 posts)...and runs deep in the culture. We won't be rid of it for a long time, maybe never.
LudwigPastorius
(13,807 posts)OpenAI's filters don't catch all of the harmful and adult content when their LLMs are hoovering up petabytes of web content.
eppur_se_muova
(40,543 posts)
70sEraVet
(5,117 posts)the 'birds and the bees' talk!
Woodwizard
(1,233 posts)The author made a voice chatbot of himself.
Just in the months he has been doing the podcast, the improvement in AI tech is scary.
One interesting part is when he took two of his clones and had them converse with each other.
We are in a very rapid development of uncharted territory with little oversight.
https://www.shellgame.co/podcast
mwmisses4289
(2,782 posts)Oh, well.
Orrex
(66,389 posts)
EuterpeThelo
(122 posts)Chappell would approve!