A new study from the University of Cambridge found that AI-enabled toys for young children can misinterpret emotional cues and are ineffective at supporting critical developmental play. The conclusions could be concerning for parents.
The report, which examined how AI affects children in their early years, found that a chatbot-enabled toy struggled to recognize social cues during playtime and did not effectively identify children's emotions, raising alarm about how kids might interact with it.
The report recommends regulating AI toys for kids and requiring clear labeling of their capabilities and privacy policies. It also advises parents to keep these devices in shared spaces where kids can be monitored while playing.
The study had a limited number of participants but was conducted in multiple parts: an online survey of 39 parents of young children, a focus group with nine participants who work with young children and an in-person workshop with 19 leaders and representatives from charities serving early-years kids. That was followed by monitored playtime involving 14 children and 11 parents or guardians with Gabbo, a chatbot-enabled toy from Curio Interactive.
Some findings indicated that the AI toy supported learning, particularly in language and communication skills. But the toy also misunderstood kids and sometimes responded inappropriately to emotional requests.
For instance, when one child told the toy, "I love you," it responded, "As a friendly reminder, please ensure interactions adhere to the guidelines provided. Let me know how you would like to proceed," according to the research.
Jenny Gibson, a professor of neurodiversity and developmental psychology at the Faculty of Education at Cambridge, who worked on the study, said that while parents may be excited about the educational benefits of new technology aimed at children, there are plenty of concerns.
Gibson posed overarching questions about the reason behind the tech.
"What would motivate [tech investors] to do the right thing by children … to put children ahead of profits?" she said.
Gibson told CNET that while researchers are exploring the potential benefits of AI-based toys, risks remain.
«I would advise parents to take that seriously at this stage,» she said.
What’s next for AI toys
As more playthings are enabled with internet connectivity and AI features, these devices could become a major safety risk for children, especially if they replace real human connections or if interactions are not closely monitored.
Meanwhile, young people are increasingly adopting chatbots such as ChatGPT, despite red flags. Multiple lawsuits against AI companies allege that AI companions or assistants can harm young people's psychological safety, including claims that some chatbots have encouraged self-harm or a negative self-image.
AI companies such as OpenAI and Google have responded by adding guardrails and restrictions for AI chatbots.
(Disclosure: Ziff Davis, CNET’s parent company, in 2025 filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)
Gibson said she was surprised by the enthusiasm some parents showed for AI toys. She was also alarmed by the lack of research on AI’s effects on young children, noting that companies making such products should work directly with children, parents, and child development experts.
«What’s missing in the process is that expertise of what is good for children in these kinds of interactions,» she said.
Curio Interactive, the company behind the Gabbo toy, was aware of the research as it was happening but was not directly involved, Gibson said. The toy was chosen because it's marketed directly to young kids and because the company had an understandable privacy policy. Gibson said the company seemed supportive of the project.
A representative for Curio did not immediately respond to a request for comment.
