A small infant wakes in the night. Light gurgling unravels into rapidly escalating cries. Today, exasperated parents would race to comfort the child - but what if instead an intelligent crib responded with a combination of gentle rocking and white noise, lulling the infant back to sleep?

There might seem to be nothing to object to here - especially for sleep-deprived parents - but it raises the question: how willing should we be to outsource our children's earliest experiences of comfort to a machine?

Robot heart

The crib described above already exists: it's the Snoo cot, developed by celebrity pediatrician and bestselling author Dr. Harvey Karp. It is just one of a new breed of robots on the horizon, designed to cater to our emotional needs.

While the Snoo's AI is fairly basic, social robots like Jibo and Cozmo are designed to play with children, read them bedtime stories, or perhaps help Grandma send a video message, learning the personality of everyone in the household so they can better tailor their behaviour to each person.

Does this represent an obvious next step for the Alexas of tomorrow? Moving beyond the purely functional into the role of witty interlocutor, plying us with light-hearted banter, and enmeshing us ever further into computerised dependence? 

Some have predicted that in the workplace of the future we’ll each have a robotic personal assistant, but there’s growing scope for much more than that.

Reviewing Jibo, Wired noted how quickly the relationship with the social robot evolved from Alexa-like functionality to something deeper, more personal and affectionate.

What is there to object to? Critics point to the element of deception inherent in human-robot interactions. The robot is trained to say things that imply it has consciousness, a past and emotional sensibilities, but this is blatantly untrue. The robot cannot understand or love you, yet there is an attempt to hoodwink the user into believing otherwise.

The question is: is this a problem?

Is it dangerous for us to enjoy bantering with a robot under the pretence that it can engage on the same emotional and cognitive level as us? 

Many academics in this area, including Matthias Scheutz, professor of cognitive and computer science, and director of the Human-Robot Interaction Laboratory at Tufts University, are deeply disturbed by this idea.

"The robot might express particular emotion in its face, but it has no way of actually feeling that emotion," he tells Techworld. "It's really not anything close to what humans, or maybe animals, would feel. That, I find highly problematic. Why? Because, in that case, the robot pretends to do something, or to be in a particular state, but for what purpose?"

Most adults will understand that a level of subterfuge is at play and might well be happy to accept it in pursuit of a 'fun' machine-human relationship.

The lines are considerably blurrier, however, when it comes to people who haven't yet fully developed cognitively and emotionally: children.

Should we be encouraging children to form relationships with what is essentially an inanimate hunk of metal? More worryingly, could these ‘relationships’ impact their emotional development?

Some hypotheticals are easy to imagine: the child who struggles to make friends because they don't pander to her the way the family's companion robot does, or who turns to a machine for comfort, understanding, or a bedtime story. Robots that take on the emotional labour of raising children in early life could even distort the normal caregiving relationship between parent and child.

Sort of alive

A 2001 study examined how children aged between 8 and 13 interacted with the early social robots Kismet and Cog, and how they reflected on these interactions afterwards. The researchers found that the children saw the robots as 'sort of alive', even believing they had emotions and thoughts, and could care about them.

One 11-year-old girl said: "It's not like a toy, because you can't teach a toy, it's like something that's part of you, you know, something you love, kind of, like another person, like a baby."

In an article written afterwards, co-author Sherry Turkle notes that this 'child' or 'baby' label is commonly bestowed by children on social robots.

However, studies have shown children to be more discerning at times than we give them credit for.

In one study, children could differentiate their grandma speaking live over FaceTime from a pre-recorded video of her played in the same format. If they are adept enough to discern this, surely they can tell robots apart from humans.

After all, the children in the Kismet and Cog study didn't think of the robots as human equivalents, and their choice of the 'baby' label implies they considered them inferior to themselves. But is understanding that robots are 'different' enough to defend against any potentially harmful effects? And what about when these robots get better?

At this point, no one can say for sure whether social robots will damage a child's capacity for empathy, or whether the opposite might happen, with robots accelerating a child's understanding of different modes of 'being'.

Not many people currently argue for the latter position, but it's not impossible. And some research has found that young children were intensely interested in the question of whether these robots were 'alive', 'dead' or something else.

Another view holds that, in a world where children are glued to their phones and tablets, a chatty robot is in fact preferable.

This is how the creator of Jibo pitches it - that a social robot will help to open up dialogue and stimulate interaction within the family.   

The fact that robots and personal voice assistants are developing down this route is no accident.

Take the words of Boris Sofman, CEO of Anki, the company behind the social robot Cozmo. He says the idea is to create "a deeper and deeper emotional connection... And if you neglect him, you feel the pain of that".

The pain.

It would appear, then, that the social robots of the future will expertly tug on our emotions, and that this will be wholly intentional. But for what purpose? Surely to make these robots more compelling to talk to, and therefore to pull us further into these relationships. In the attention economy, engagement is king, and if seemingly 'emotional' robots are more fun to speak to, then these are the lines along which they will develop.

Last year, Chinese researchers created the Emotional Chatting Machine, a bot able to produce factually sensible answers whilst also infusing conversation with emotions including happiness, sadness or disgust.

The bot was trained on a vast dataset of emotionally flavoured posts taken from the Chinese social networking site Weibo, suggesting that social media may well constitute the training ground for these robots. The researchers found that 61% of participants preferred speaking to the emotional chatbot over its neutral counterpart.
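
How emotion-conditioned replies work in principle can be sketched in a few lines of Python. The example below is a deliberately crude illustration, not the Emotional Chatting Machine itself (which is a neural sequence-to-sequence model trained on the Weibo corpus): the keyword lists, reply templates and function names are all invented for demonstration.

```python
# Toy illustration of emotion-conditioned chat. This is NOT the real Emotional
# Chatting Machine (a neural sequence-to-sequence model); keywords, templates
# and function names here are invented purely for demonstration.

# Crude keyword lists standing in for a learned emotion classifier.
EMOTION_KEYWORDS = {
    "happiness": {"great", "love", "awesome", "happy"},
    "sadness": {"sad", "miss", "lonely", "tired"},
    "disgust": {"gross", "awful", "hate", "disgusting"},
}

# Canned replies keyed by the emotion the bot is asked to express.
REPLY_TEMPLATES = {
    "happiness": "That's wonderful to hear - tell me more!",
    "sadness": "I'm sorry, that sounds really hard.",
    "disgust": "Ugh, that does sound unpleasant.",
    "neutral": "I see. What happened next?",
}


def detect_emotion(message):
    """Guess the user's emotion from keyword overlap."""
    words = set(message.lower().split())
    for emotion, keywords in EMOTION_KEYWORDS.items():
        if words & keywords:
            return emotion
    return "neutral"


def reply(message, target_emotion=None):
    """Produce a reply conditioned on a target emotion category.

    If no target is given, mirror the emotion detected in the user's
    message - one simple policy among many a designer could choose.
    """
    emotion = target_emotion or detect_emotion(message)
    return REPLY_TEMPLATES.get(emotion, REPLY_TEMPLATES["neutral"])


if __name__ == "__main__":
    print(reply("I feel so lonely these days"))        # sadness-flavoured reply
    print(reply("The weather is fine", "happiness"))   # forced happy reply
```

However simple or sophisticated the model, the point of conditioning on an emotion category is the same: the system's designer, not the user, chooses which feelings the machine appears to express.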

"There's nobody at home"

Some of the appeal of these robots may even come by virtue of them being, well, robotic.

A study showed that people were more honest and detailed in their health reports to robot doctors, and Eugenia Kuyda, head of artificial intelligence firm Replika, found that after immortalising her late best friend in chatbot form, she was telling him things she would never have shared while he was alive. It seems the lack of judgement expressed by machines means we feel comfortable being our true selves, more so even than in the company of the humans we love most - something that seems impossibly sad.

These robots have, predictably, been alighted upon as a solution to the loneliness epidemic afflicting modern society. But this could simply translate into a new form of loneliness, in much the same way that a lonely person with a dog is likely to feel their life is still lacking something essential.

Will humans really be content to shelter in these lobotomised relationships, devoid of any true, meaningful connection? Some people might gladly spend the rest of their lives consorting solely with robots, but for the vast majority it seems uncharitable to assume this form of relationship will sate their need for connection.

"We don't want the robot to entirely replace people," says Scheutz. "We don't want the robot, just because it's convenient, to then be all the social interactions the person has."

Implicit is the risk that social robots could end up promoting a kind of emotional cowardice, where people shelter from the demands and difficulties of human relationships in simple, unidirectional relationships that can be engaged with or discarded at will.

Another vast area raising questions is the potential for these robots to emotionally manipulate adults and children alike.

The robots are designed to be cute, exploiting the evolutionary biology that predisposes humans to offer protection to faces with 'baby-like' attributes. Often these robots, such as Jibo, speak in childish voices too. Add to that a vast knowledge of emotional and social cues absorbed from huge databases, and their power to play on our emotions is undeniable.

"Take the case of a toy robot that is very realistic in its facial expressions," says Scheutz. "The robot starts crying and says to the human, 'Mommy, you don't love me, anymore.' It makes a kid really upset. Is that okay? Is that a good use of technology? This will lead to behaviour where the human is trying to appease the robot, or console the robot, or do something nice for the robot, even though there's nobody at home - it's just a machine."

Humans have proved very easy to draw into these kinds of emotional bonds. The Wired reviewer mentioned that he and his partner felt guilty about switching Jibo off and 'leaving him' all day, while in a recent experiment people found it difficult to turn a robot off after it begged them not to, telling them it was 'scared of the dark'.

These effects would surely be even stronger with a more advanced live-in robot.

"If you have a robot that gets you out of bed every day, and does something nice for you, helps you around the house, makes your life easier, feeds you, does all these things, of course you're going to develop gratitude towards it," says Scheutz. "It's very clear to me that there's going to be a very natural tendency for people to form these attachment relationships with machines that do something nice and helpful, for them, even when the machine has no way of reciprocating." 

Tech companies are no strangers to emotional manipulation.

Facebook has previously experimented with emotional manipulation on thousands of unwitting participants, whose emotions were crudely influenced through the types of posts they were shown in their newsfeeds - and whose own reactions were then monitored to see the effects.

How might this play out on our very own social robots - robots that suddenly become petulant or aggressive in the hope of gauging the outcome: do people spend more when they're happy, or when they're sad?

"The robot, in the interest of the company that builds and sells it, could abuse the emotional bond that has formed, to get the person to purchase things," says Scheutz. "Say, 'Look, I'm not happy, anymore, watching movies on this TV. I really don't like it. If you love me, then please do x.'"

These issues come alongside other colossal questions of privacy and security.

"What I've been arguing for years, now, is that we absolutely have to integrate ethical mechanisms into machines, so that they understand our expected norms, what we expect others to do in different contexts, but also to prevent ethical transgressions: lying, cheating, misinforming people," says Scheutz.

His lab has been working on "developing algorithms, representations of norms, and reasoning mechanisms that allow robots to reason explicitly through contexts and situations, to determine whether a particular action is appropriate or not appropriate."
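
What such a mechanism might look like is easiest to see in a stripped-down sketch. The example below is not the Tufts lab's implementation - it simply illustrates the general idea of encoding context-dependent norms as explicit rules that a robot consults before acting, and every norm, context and action name in it is invented.

```python
# Minimal sketch of explicit norm checking - illustrative only, not the
# HRI Laboratory's actual algorithms. All norms, contexts and actions are invented.

from dataclasses import dataclass


@dataclass(frozen=True)
class Norm:
    context: str      # situation the norm applies to, e.g. "child_present"
    action: str       # action the norm governs, e.g. "feign_emotion"
    permitted: bool   # whether the action is allowed in that context
    reason: str       # human-readable justification, useful for explanations


# A tiny hand-written norm base. A real system would need far richer
# representations and a principled way to resolve conflicting norms.
NORMS = [
    Norm("child_present", "feign_emotion", False,
         "Pretending to feel emotions may deceive a child"),
    Norm("any", "lie_to_user", False,
         "Deception violates user trust"),
    Norm("emergency", "interrupt_user", True,
         "Safety overrides politeness in an emergency"),
]


def is_permitted(context, action):
    """Check whether an action is permitted in a given context.

    Norms naming the specific context take priority over generic ('any')
    ones; if no norm applies, the action is tentatively allowed.
    """
    applicable = [n for n in NORMS
                  if n.action == action and n.context in (context, "any")]
    if not applicable:
        return True, "No norm applies"
    most_specific = min(applicable, key=lambda n: n.context == "any")
    return most_specific.permitted, most_specific.reason


if __name__ == "__main__":
    print(is_permitted("child_present", "feign_emotion"))  # (False, ...)
    print(is_permitted("emergency", "interrupt_user"))     # (True, ...)
```

Even this toy version shows where the hard problems lie: who writes the norms, how conflicts between them are resolved, and what happens in contexts nobody anticipated.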

Coding ethics into robots is incredibly complicated, bringing with it yet more questions about how to do so without bias - but unless we want to step on an ethical landmine down the line, it's crucial to start the groundwork now.