23 jul – Reading time 9 min
A guide to parenting by AI Barbie
In the nineties, Tamagotchi was hugely popular amongst kids; the egg-shaped key chain wasn’t to be let out of your sight, ever. Four simple buttons gave millions of children worldwide the ability to feed snacks and attention to their pixelated pet, so as to keep it from dying a tragic, virtual death.
The rise of artificial intelligence, machine learning, and face, emotion and voice recognition technologies has advanced our robotic friends significantly. It might feel awkward for you to confide in Pepper, Zora, Hello Barbie, Alice, Sam, Cayla, iCat, Paro or Robear, but for some kids, this is their daily reality.
New ways of communicating
Because robots are already used in healthcare and education, the ethical debate about robotic companionship has existed for quite some time. Society has long discussed the social value of interactive robotic contact. Some people feel that robots can only simulate understanding and empathy, and are therefore second-best to real human contact. There are also worries about the way social robots change human communication: if our companions have no consciousness, interacting with them could make us humans less social. Some even say that kids bossing Alexa around could lead to them talking to their parents in the same way.
Others point to the valuable communication that social robots can stimulate. One example is robots used to help autistic kids make contact with other people. But we need new frameworks and categories to understand these forms of interaction: as one study confirmed, kids place their robotic friends somewhere on a spectrum between ‘alive’ and ‘not alive’.
Strikingly, there is no such debate regarding smart toys – apart from some discussion of safety concerns – even though the Internet of Toys is an emerging market that can heavily influence the identity development and socialisation of young kids.
Big Barbie is judging you
The media exhaustively warn against smart Barbies that can be hacked, listened in on, used to scrape data and to record video. But parents themselves cannot be let off the hook. Some parents spy on their kids 24/7 with their own versions of smart toys: tracking apps or teddy bears with built-in cameras. Pedagogues worry that these kids can no longer explore their boundaries, or no longer dare to make mistakes, because they always feel like their parents are watching.
The current debate about smart toys is mostly focused on privacy. ‘Big Barbie is watching you’ is on our radar, but what about ‘Big Barbie is judging you’?
Kid: "I feel shy trying to make new friends."
Barbie: "Feeling shy is nothing to feel bad about. Just remember this, you made friends with me right away."
Barbie seems to have distinct ideas about friendship, which raises the question of which other subjects she might hold (strong) opinions on. What, for example, would she answer if you asked her whether she is religious? Or more importantly, what do the programmers think AI Barbie should answer when a kid asks her if she is religious?
The responses that have been programmed into smart toys indirectly contain value judgments about the world around us. These judgments influence the way kids develop their identities. In May 2018, Stefania Druga presented the first results of her study ‘My Doll Says It’s OK: Voice-Enabled Toy Influences Children’s Moral Decisions’. The most important conclusion: smart toys have more influence on the moral choices of children than humans do. And they change the way children play, fantasize and work together.
If parents buy an AI Barbie or AI Cayla, they are not just bringing a talking doll into their homes – they will have to deal with an AI guide to parenting too. They need to learn to cooperate and co-educate with it. Are its pre-programmed beliefs about upbringing pedagogically justifiable, and who creates them?
Parenting is something everyone has an opinion about. Currently, there is a lively debate about so-called ‘curling parents’: parents who – just like in the sport of curling – sweep away all obstacles for their kids. Pedagogues fear that this does not allow children to develop into resilient adults who can deal with friction. If friction is so essential to our development, we might also need smart toys that contradict us now and then.
Authoritarian or laissez-faire
We do not yet know much about which parental guidelines are programmed into smart toys, or about the way AI intervenes in the relationship between parents and their children. Further research is needed to get an idea of how we want smart toys to function. This also means we have to be aware that smart toys reflect the values, judgements and stakes of the world they are developed in.
Together with Stefania Druga, the Creative Learning Lab of Waag and SETUP organized a hackathon to design several provotypes (provoking prototypes). We programmed AI toys with different styles of upbringing and explored whether it was possible to give kids space for a freer, more creative way of playing. How predictable is AI Barbie, and will she be more authoritarian or more laissez-faire? By developing possible scenarios and discussing different perspectives with researchers, the event kickstarted a lively dialogue about smart toys, and showed that there is much more to discuss.
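To make the idea of a toy with a programmed parenting style concrete, here is a minimal, purely hypothetical sketch (not the hackathon's actual code, and the phrases and names are invented) of how a style setting could shape a toy's canned responses:

```python
# Hypothetical sketch: a smart toy whose replies depend on a
# pre-programmed parenting style ("authoritarian" vs "laissez-faire").
# All phrases and keys are invented for illustration.

RESPONSES = {
    "authoritarian": {
        "i don't want to clean up": "Rules are rules. Clean up now, please.",
        "i feel shy": "There is no reason to be shy. Go and say hello.",
    },
    "laissez-faire": {
        "i don't want to clean up": "That's okay, you can decide when you're ready.",
        "i feel shy": "Take your time. Being shy is fine.",
    },
}

def toy_reply(style: str, utterance: str) -> str:
    """Look up a canned reply for the given style; fall back to a neutral prompt."""
    replies = RESPONSES.get(style, {})
    return replies.get(utterance.lower().strip(), "Tell me more about that.")
```

Even a toy this simple embeds value judgments: the choice of which utterances get a corrective answer versus a permissive one is exactly the kind of design decision the hackathon set out to make visible.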
An important realization is that when we talk about, for example, human-robot friendships, we tend to think of these concepts in holistic terms. This keeps us from asking which specific aspects of friendship AI or smart toys can take over – taking over some aspects does not mean replacing friendship completely. Which aspects we can and want to hand over is a question worth exploring. After all, the AI guide to parenting is something we will have to develop together, not something we should leave up to toy manufacturers alone.