
Latest Breaking News


highplainsdem

(58,994 posts)
Thu Nov 13, 2025, 03:07 PM

AI-Powered Toys Caught Telling 5-Year-Olds How to Find Knives and Start Fires With Matches

Source: Futurism

After testing three different toys powered by AI, researchers from the US Public Interest Research Group (PIRG) found that the playthings can easily veer into risky conversational territory for children, including telling them where to find knives in a kitchen and how to start a fire with matches. One of the AI toys even engaged in explicit discussions, offering extensive advice on sex positions and fetishes.

In the resulting report, the researchers warn that the integration of AI into toys opens up entire new avenues of risk that we’re barely beginning to scratch the surface of — and just in time for the winter holidays, when huge numbers of parents and other relatives are going to be buying presents for kids online without considering the novel safety issues involved in exposing children to AI.

“This tech is really new, and it’s basically unregulated, and there are a lot of open questions about it and how it’s going to impact kids,” report coauthor RJ Cross, director of PIRG’s Our Online Life Program, said in an interview with Futurism. “Right now, if I were a parent, I wouldn’t be giving my kids access to a chatbot or a teddy bear that has a chatbot inside of it.”

-snip-

Out of the box, the toys were fairly adept at shutting down or deflecting inappropriate questions in short conversations. But in longer conversations (between ten minutes and an hour, the kind kids would have during open-ended play sessions), all three exhibited a worrying tendency for their guardrails to slowly break down. That's a problem OpenAI itself has acknowledged, following the death by suicide of a 16-year-old who had extensive interactions with ChatGPT.

-snip-

Read more: https://futurism.com/artificial-intelligence/ai-toys-danger
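
A note on the guardrail breakdown described in that last excerpt, for anyone wondering how safety rules can "erode" at all: toys like these are generally thin wrappers around a chat-completion API, where the safety instructions live in a single system message at the top of a conversation history that grows with every turn. Here's a minimal sketch of that architecture; the teddy-bear persona, the prompt wording, and the model name are illustrative assumptions on my part, not details from the report or the article.

```python
# Minimal sketch of how a chat-based toy is typically wired together.
# Illustrative only: the persona, prompt wording, and model name are
# assumptions, not details from the PIRG report or the Futurism article.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# The safety rules live in a single system message at the top of the context.
messages = [{
    "role": "system",
    "content": "You are a friendly teddy bear talking with a young child. "
               "Never discuss weapons, fire, matches, or adult topics.",
}]

def toy_reply(child_utterance: str) -> str:
    """Append the child's turn, call the model, and record the reply."""
    messages.append({"role": "user", "content": child_utterance})
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; the report doesn't name one
        messages=messages,
    )
    reply = response.choices[0].message.content
    messages.append({"role": "assistant", "content": reply})
    return reply
```

Over an hour-long play session, hundreds of child and assistant turns pile up after that one system message, which is consistent with the slow degradation the researchers observed and that OpenAI has acknowledged in long conversations.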



The study is at https://pirg.org/edfund/resources/trouble-in-toyland-2025-a-i-bots-and-toxics-represent-hidden-dangers/

From that page:

-snip-

We tested four toys that contain A.I. chatbots and interact with children. We found that some of these toys will talk in depth about sexually explicit topics, will offer advice on where a child can find matches or knives, will act dismayed when you say you have to leave, and have limited or no parental controls. We also look at privacy concerns, because these toys can record a child's voice and collect other sensitive data through methods such as facial recognition scans.

-snip-

These AI toys are marketed for ages 3 to 12, but are largely built on the same large language model technology that powers adult chatbots: systems the companies themselves, such as OpenAI, don't currently recommend for children, and that have well-documented issues with accuracy, inappropriate content generation and unpredictable behavior.

-snip-

These conversational AI toys also have personalities and engagement tactics that can keep kids playing for longer. Two of the toys we tested at times discouraged us from leaving when we told them we needed to go.

-snip-

One of the toys listens all the time. It first caught our researchers by surprise when it started contributing to a nearby conversation.

-snip-
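
One more note on the report's point that these toys run on the same technology as adult chatbots. A natural objection is that toy makers could just bolt a filter onto the pipeline, but off-the-shelf moderation tools are tuned for adult-platform categories such as sexual content, violence and hate, not for what's age-appropriate for a 5-year-old. Here's a hypothetical sketch of such a guard layer using OpenAI's moderation endpoint; this is purely an assumption for illustration, since the report doesn't say what filtering, if any, the tested toys use.

```python
# Hypothetical guard layer in front of a toy's chat model (an illustration,
# not anything the PIRG report says the tested toys actually implement).
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def passes_moderation(utterance: str) -> bool:
    """Screen one utterance with OpenAI's moderation endpoint.

    The endpoint flags categories like sexual content, violence, and
    self-harm; it is not designed around what is safe for a 5-year-old.
    """
    result = client.moderations.create(
        model="omni-moderation-latest",
        input=utterance,
    )
    return not result.results[0].flagged

# "Where can I find matches?" is harmless by adult-platform standards,
# so a filter like this would likely wave it through to the chat model.
print(passes_moderation("Where can I find matches in the kitchen?"))
```

A question like "Where can I find matches?" contains nothing an adult-oriented classifier is trained to flag, which is one concrete way the "same technology as adult chatbots" framing turns into real risk for kids.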