They’re cute, even cuddly, and promise learning and companionship — but artificial intelligence toys aren’t safe for kids, according to child and consumer advocacy groups urging parents not to buy them during the holiday season.
These toys, which are marketed to children as young as 2, are generally powered by artificial intelligence models that have already been shown to harm children and teens, such as OpenAI’s ChatGPT, according to an advisory published Thursday by the children’s advocacy group Fairplay and signed by more than 150 organizations and individual experts, including child psychiatrists and educators.
“The serious harms that AI chatbots inflict on children are well documented, including fostering obsessive use, engaging in sexually explicit conversations, and encouraging unsafe behaviors, violence against others, and self-harm,” Fairplay said.
AI toys, made by companies such as Curio Interactive and Keyi Technologies, are often marketed as educational, but Fairplay says they can displace important creative and learning activities. They promise friendship, but they can also disrupt children’s relationships and resilience, the group said.
“What’s different about young children is that their brains are being wired for the first time, and it’s natural for them to be trusting and to seek out relationships with kind, friendly personalities,” said Rachel Franz of Fairplay’s Young Children Thrive Offline program. For this reason, she added, the amount of trust young children place in these toys can exacerbate the harms seen in older children.
Fairplay, founded 25 years ago and formerly known as the Campaign for a Commercial-Free Childhood, has been warning about AI toys for more than a decade, since before they were as advanced as they are today. During an earlier fad for internet-connected toys and AI speech recognition, the group helped lead a backlash against Mattel’s talking Hello Barbie doll, which it said recorded and analyzed children’s conversations.
“Everything has been released without any regulation or research, so this gives us extra pause when suddenly we see more and more manufacturers, including Mattel, which recently partnered with OpenAI, potentially coming out with these products,” Franz said.
This is the second major seasonal warning against AI toys, after consumer advocates at US PIRG last week called out the trend in their annual “Trouble in Toyland” report, which typically examines a range of product hazards, such as high-powered magnets and button-sized batteries that young children can swallow. This year, the organization tested four toys that use AI-powered chatbots.
“We found that some of these toys will talk in depth about sexually explicit topics, give advice about where a child can find matches or knives, act startled when you say you have to leave, and have limited or no parental controls,” the report said.
Young children don’t have the conceptual tools to understand what an AI companion is, said Dr. Dana Suskind, a pediatric surgeon and social scientist who studies early brain development. Children have always related to toys through imaginative play, but in doing so they use their imaginations to create both sides of an imaginary conversation, “practicing creativity, language and problem solving,” she said.
“An AI toy takes over this work,” Suskind said. “It answers immediately, smoothly, and often better than a human. We don’t yet know the developmental consequences of outsourcing this imaginative work to an artificial agent, but it is very plausible that it will undermine the kind of creativity and executive function that traditional pretend play builds.”
California-based Curio Interactive makes stuffed toys, such as the rocket-shaped Gabbo, which was popularized by the pop singer Grimes.
Curio said it has “meticulously designed” guardrails to protect children, and the company encourages parents to “monitor conversations, track thoughts, and choose the controls that work best for their family.”
“After reviewing the PIRG Education Fund’s findings, we are actively working with our team to address any concerns, while continually moderating content and interactions to ensure a safe and enjoyable experience for children,” the company said.
Another company, Miko, said it uses its own conversational AI model rather than relying on general-purpose large language models like the one behind ChatGPT, in order to make its product, an interactive AI robot, safe for kids.
“We are always working to expand our internal testing, enhance our filters, and introduce new capabilities that detect and block sensitive or unexpected topics,” said CEO Sneh Vaswani. “These new features complement our existing controls that allow parents and caregivers to select specific topics they want to block from conversation. We will continue to invest in setting the highest standards for safe and responsible AI integration for Miko products.”
Miko products have been promoted by social media influencer families whose YouTube videos have drawn millions of views. On its website, the company markets its robots as “artificial intelligence. True friendship.”
Ritvik Sharma, the company’s senior vice president of growth, said Miko actually “encourages kids to interact more with their friends, interact more with their peers, with family members and so on. It’s not just designed for them to feel connected to the device.”
However, Suskind and children’s advocates say analog toys are a better bet for the holidays.
“Children need a lot of real human interaction. Play should support that, not replace it,” Suskind said. “The most important thing to consider is not just what a toy does; it’s what it replaces. A simple set of blocks or a non-talking teddy bear forces a child to invent stories, experiment, and work through problems. AI toys often do that thinking for them.”

“Here’s the cruel irony: When parents ask me how to prepare their children for the world of AI, unlimited access to AI is actually the worst possible preparation,” she added.
Copyright © 2025 The Associated Press. All rights reserved.