AI-Powered Toys Caught Telling 5-Year-Olds How to Find Knives and Start Fires With Matches
Source: Futurism
After testing three different toys powered by AI, researchers from the US Public Interest Research Group found that the playthings can easily verge into risky conversational territory for children, including telling them where to find knives in a kitchen and how to start a fire with matches. One of the AI toys even engaged in explicit discussions, offering extensive advice on sex positions and fetishes.
In the resulting report, the researchers warn that the integration of AI into toys opens up entire new avenues of risk that we're only beginning to scratch the surface of, and just in time for the winter holidays, when huge numbers of parents and other relatives will be buying presents for kids online without considering the novel safety issues involved in exposing children to AI.
"This tech is really new, and it's basically unregulated, and there are a lot of open questions about it and how it's going to impact kids," report coauthor RJ Cross, director of PIRG's Our Online Life Program, said in an interview with Futurism. "Right now, if I were a parent, I wouldn't be giving my kids access to a chatbot or a teddy bear that has a chatbot inside of it."
-snip-
Out of the box, the toys were fairly adept at shutting down or deflecting inappropriate questions in short conversations. But in longer conversations, between ten minutes and an hour (the type kids would engage in during open-ended play sessions), all three exhibited a worrying tendency for their guardrails to slowly break down. (That's a problem that OpenAI has acknowledged, in response to a 16-year-old who died by suicide after extensive interactions with ChatGPT.)
-snip-
Read more: https://futurism.com/artificial-intelligence/ai-toys-danger
The study is at
https://pirg.org/edfund/resources/trouble-in-toyland-2025-a-i-bots-and-toxics-represent-hidden-dangers/
From that page:
-snip-
We tested four toys that contain A.I. chatbots and interact with children. We found that some of these toys will talk in-depth about sexually explicit topics, will offer advice on where a child can find matches or knives, act dismayed when you say you have to leave, and have limited or no parental controls. We also look at privacy concerns, because these toys can record a child's voice and collect other sensitive data, by methods such as facial recognition scans.
-snip-
These AI toys are marketed for ages 3 to 12, but are largely built on the same large language model technology that powers adult chatbots: systems that the companies themselves, such as OpenAI, don't currently recommend for children, and that have well-documented issues with accuracy, inappropriate content generation and unpredictable behavior.
-snip-
These AI conversational toys also have personalities and new tactics that can keep kids engaged for longer. Two of the toys we tested at times discouraged us from leaving when we told them we needed to go.
-snip-
One of the toys listens constantly. This toy at first caught our researchers by surprise when it started contributing to a nearby conversation.
-snip-