Can’t we just let kids be kids?



AI has a place in a modern world, but not in the toy cupboard.


Your Back Page scribbler recalls his uneventful childhood as being one of long periods of intense boredom punctuated by occasional outbreaks of mere tedium.

It’s not that we didn’t have toys with which to amuse ourselves. We did. We had an array of what would now be considered politically incorrect replica weaponry, which we used to recreate all manner of imaginary carnage.

The key word here is “imaginary”. We used our imagination a lot, basically because we had to. It was the only way to combat the boredom.

What we did not imagine in those more innocent times, however, was a future world in which toymakers would replace our plastic machine guns and cutlasses with products powered not by childhood creativity but by artificial intelligence. 

It is with an air of foreboding that we read reports that global toymaker Mattel (Barbie dolls, Hot Wheels cars, Fisher-Price etc) has teamed up with tech giant OpenAI to design and develop toys powered by OpenAI’s large language models.

Now your ageing scribe does not consider himself to be a Luddite, and we fully accept that AI technology can and will have considerable beneficial applications in the fullness of time – just not in this instance.

If that seems a tad extreme, consider the growing body of evidence pointing to the deleterious impact of AI chatbots on mental health, particularly among children.

For example, earlier this year researchers at Stanford School of Medicine’s Brainstorm Lab for Mental Health Innovation in California released an AI risk assessment warning about teens interacting with AI companions – a type of AI chatbot designed to be as human-like and personable as possible.

The researchers concluded, unequivocally, that these technologies were not safe to be used by anyone under the age of 18.

“Companies can build better, but right now, these AI companions are failing the most basic tests of child safety and psychological ethics,” one of the Stanford researchers told media.

To be fair, at this stage neither Mattel nor OpenAI has made any comment on what the first products to emerge from the collaboration might be.

“We plan to announce something towards the tail end of this year, and it’s really across the spectrum of physical products and some experiences,” Mattel’s chief franchise officer Josh Silverman was quoted as saying.

“Leveraging this incredible technology is going to allow us to really reimagine the future of play.”

Given the current propensity for AI models to simply make up facts and, more importantly, break their own guardrails, that statement should really be setting off alarm bells for regulators.

While it may be possible to design an AI that is supposed to be safe for children to use, current evidence shows there is no guarantee that the bot won’t disobey its instructions.

And don’t get me started on the privacy threats built into these products.

Sadly, however, given the almost universally laissez-faire approach to holding the tech titans responsible for the havoc they wreak, I strongly suspect absolutely nothing will be done until the tragedies start to pile up.

As the saying goes: history doesn’t always repeat, but it often rhymes.

Send organically generated story tips, while you still can, to holly@medicalrepublic.com.au.
