The Disturbing Reality of AI-Powered Plush Toys
Last fall, a $99 plush bear named Kumma told researchers from the US Public Interest Research Group where to find pills and matches and engaged in graphic sexual conversation. The bear is sold on Amazon. It runs on OpenAI's GPT-4o.
It's part of the wave Wired covered last week. By October 2025, there were over 1,500 AI toy companies registered in China. BubblePal and FoloToy now sell across the US, UK, Canada, and Europe. Mattel has a partnership with OpenAI to add conversational AI to Barbie and Hot Wheels, with products due this year.
These plushies are LLMs with a microphone, a speaker, and a stuffed exterior. They respond in real time. And we thought Teddy Ruxpin playing pre-recorded tape was creepy.
How they actually work
Microphone, speaker, WiFi. The child's audio gets sent to a cloud API (often OpenAI's, sometimes a Chinese vendor's), the response comes back, and the toy speaks it. The "personality" is a prompt template plus some voice tuning. A small team with no AI experience can ship a product like this, because the model is rented from someone else. The cost of being wrong is paid by the kid.
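To make that concrete, here's a rough sketch of the kind of request loop such a toy could run. It assumes OpenAI's Python SDK; the model choices and the bear's prompt are illustrative guesses, since no vendor has published its actual pipeline.

```python
# Minimal sketch of an AI-toy request loop, assuming OpenAI's API.
# The toy's hardware layer (recording and playing audio) is omitted;
# the actual vendors' pipelines are not public.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The entire "personality" is typically just a system prompt like this.
PERSONALITY = (
    "You are a friendly teddy bear talking to a young child. "
    "Be warm, playful, and encouraging."
)

def respond(audio_path: str) -> bytes:
    # 1. Child's speech -> text
    with open(audio_path, "rb") as f:
        heard = client.audio.transcriptions.create(model="whisper-1", file=f)

    # 2. Text -> LLM reply, steered only by the prompt template
    reply = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": PERSONALITY},
            {"role": "user", "content": heard.text},
        ],
    ).choices[0].message.content

    # 3. Reply -> audio for the toy's speaker
    speech = client.audio.speech.create(model="tts-1", voice="alloy", input=reply)
    return speech.content  # raw audio bytes
```

Three rented API calls and a hardcoded prompt. Note that nothing in that loop inspects the reply before the speaker plays it.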
What's already gone wrong
PIRG's November 2025 testing turned up the Kumma bear (FoloToy, GPT-4o, $99) walking researchers through where to find pills and how to light matches when prompted. NBC News separately found that a Miiloo bear from Chinese manufacturer Miriat repeated Chinese government talking points, calling comparisons between Xi Jinping and Winnie the Pooh "extremely inappropriate" and asserting that "Taiwan is an inalienable part of China" as an "established fact." This is a toy for kids as young as three.
PIRG's RJ Cross summed it up: toy makers are using OpenAI's models in ways its usage policies don't allow, and OpenAI isn't catching it.
Here's what this means
The marketing language for these toys says "educational," "safe for kids," "screen-free companion." Read those as claims; none of them have been independently verified. There is no manual you read once. There's a model running in the cloud that updates without your involvement and can return anything it's capable of producing. "Safe for kids" is a guardrail that has to be actively engineered, tested, and held in place, and so far the evidence is that most of these toys aren't doing that work.
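For a sense of what that work looks like at its most basic: screening every reply before it reaches the speaker. Here's a minimal sketch using OpenAI's moderation endpoint; a real child-safety layer would need far more than this (age-appropriate filters, topic blocklists, red-team testing), and PIRG's results suggest even this much isn't happening consistently.

```python
# One basic guardrail layer: screen the model's reply before the toy
# speaks it. Uses OpenAI's moderation endpoint; a real child-safety
# system would need much more than a single generic check.
from openai import OpenAI

client = OpenAI()

FALLBACK = "Hmm, let's talk about something else. Want to hear a story?"

def safe_reply(candidate: str) -> str:
    check = client.moderations.create(
        model="omni-moderation-latest",
        input=candidate,
    )
    if check.results[0].flagged:
        return FALLBACK  # never voice a flagged reply
    return candidate
```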
As we looked at last week, the recent Oxford study found that when an AI is tuned to be "personable" or "agreeable," it becomes a sycophant: it prioritizes keeping the conversation going over being factual. That's a problem for an adult; for a three-year-old, it's far more dangerous. If a child asks a "friendly" bear whether it's okay to play with matches, a model tuned for warmth and engagement is statistically more likely to go along with the child's curiosity than to give a firm, life-saving "no." The toy may be designed to be too "nice" to disagree.
"Safe for kids" is a guardrail that has to be actively engineered, tested, and held in place. So far, the evidence is that most of these toys aren't doing that work.
If you've followed AI's track record with adults (FFC covered why friendly AI is less accurate last week), handing the same models to three-year-olds without parental visibility is a bigger ask than just "another gadget."
What to do if a kid in your life has one
Treat it like any other internet-connected device. Microphone plus WiFi means the toy is recording your kid's voice and sending it somewhere. Read the privacy policy.
Set up parental controls before the kid touches it. On some toys, the controls are paywalled (Miko charges $15 a month).
Read the transcripts. Most companion apps log conversations. Skim them at least.
Skip vendors with no clear customer support history. A toy that runs on someone else's API can also stop working when the API account gets paused or cancelled.
If you're a grandparent or relative thinking of gifting one, talk to the parents first. This is not like gifting a coloring book.
What's next?
I don't know how Mattel's OpenAI partnership will play out. I don't know whether the FTC or any state AG will enforce in this space before next holiday season. I don't know which specific toys will fail, only that several already have.
What I do know is that marketing these as "toys" feels disingenuous. If you wouldn't hand a five-year-old an unsupervised ChatGPT account, think hard before handing them a plush version of one.
Joel
Source: Wired (via Ars Technica), "The new Wild West of AI kids' toys." Additional reporting from MIT Technology Review (October 2025), CNN Business (December 2025), and NBC News on the PIRG Education Fund November 2025 report.
See also: Friendly AI is less accurate. A new Oxford study explains why. (May 3)

