
Teaching computers how to feel

Words: Robert Langkjær-Bain

Photos: Various


How are you feeling? Relaxed? Stressed? Bored? Today’s technology can be trained to judge your emotional state, and respond accordingly. Discover the science giving AI a heart.

Takeaways

  • Affective computing aims to bring emotional intelligence to technology.
  • Our actions are governed by emotions, so it’s important that the devices we interact with can interpret our feelings.
  • Emotional cues can be conveyed in the way we type or even how we handle a mouse.
  • Applications include developing robots that are better able to understand the needs of elderly patients, and cars that can keep drowsy drivers awake.
  • There is a risk that the tech could be used to track people’s emotions without their knowledge.

Computers are called computers for a reason. They’re machines that compute – in other words, that think.

In the early days of computing, this was more obvious than it is now. The inputs we gave to computers were numbers – selected on a dial, punched on a card or typed on a keypad. The outputs were more numbers – huge ones, typically. The cold, logical mind of the computer could be relied on to do all that hard thinking without getting tired, bored or ratty.

But over the years, we’ve asked ever more of computers. In today’s world of apps, social media and AI, the inputs we give them range from voice commands to gestures, or images captured through a camera lens. The output might be a move in a game, a message to a loved one, or a piece of music.

“We’re just scratching the surface of how autonomous systems will incorporate emotions”

We associate our devices with communication, entertainment and diversion, rather than with zeroes and ones. And our interactions with them are intensely personal and full of feeling.

But we never asked these machines to feel, only to think. Until now.

Feeling frustrated

“It looks like you’re writing a letter. Would you like help?” If you had a Windows computer in the late 1990s or 2000s, you’ll remember these words from Clippy, the friendly paperclip who was never far away.

Clippy’s intentions were good. Its aim was to humanise the interaction between computer and user, in the hope of preventing frustration. Sadly, the result was often the opposite.

To be fair, Clippy is not the only example of computers failing the empathy test. “Frustration has been the emotional response most commonly associated with technology,” says Javier Hernandez, one of a new generation of researchers and innovators on a mission to bring emotional intelligence to tech. It’s a field known as affective computing.

Hernandez wears several hats – he’s a researcher for Microsoft (that’s right, the same Microsoft that brought us Clippy) as well as an affiliate of the affective computing group at the Massachusetts Institute of Technology (MIT), and CEO of Global Vitals, a company specialising in physiological sensing, which grew out of MIT. “We’re just scratching the surface of how autonomous systems will incorporate emotions,” Hernandez says.

The emotion we’ve most often associated with tech has been frustration, says Javier Hernandez. We can do better. / Photo: Marta Boixo

Other businesses to have emerged from MIT’s affective computing group include Affectiva, which uses automated analysis of voice patterns and facial expressions to gauge people’s emotions while they watch on-screen stimuli, or even while they drive their cars.

The goal of affective computing is to give machines a heart – to bring empathy to tech. Its proponents recognise that emotions are baked into everything humans do and experience, including the way we interact and the way we make decisions.

From the Stone Age hunter feeling the “fight or flight” response on spotting a predator, to a child crying for their mother, this is how we operate. As a result, emotional intelligence is a highly prized attribute in the human world. Emotionally intelligent people tend to be happier, healthier, more popular and more influential.

Unfortunately, as we all know, the powerful computers and AI systems that run our modern world can often be short on emotional intelligence. The advocates of affective computing have a big job on their hands.

The time is now

The idea of bringing emotion to computing has emerged at a time when tech is becoming increasingly personalised. We’ve also seen the rise of wearables, huge advances in artificial intelligence, and a growing interest in mental health – heightened this year by the impact of the Covid-19 pandemic.

“We are social animals and when we interact we use emotion a lot”

All these factors mean the time is right for computers to learn the emotional skills that people take for granted. We’re not yet talking about computers that can actually “feel” (whatever that would mean), but we are talking about computers that can sense, interpret, respond to, and simulate human emotion in useful ways.

“We are social animals and when we interact we use emotion a lot,” says Hernandez. “If you’re in a conversation with a friend and you see that they’re sad and subdued, you would change the topic or the way of interacting.”

Emotion AI is about making these sorts of responses second nature to computers, too. Virtual assistants like Microsoft’s Cortana could learn to recognise emotional cues in people’s voice commands or responses, and infer useful information about what the user finds pleasing or irritating.
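
To make that concrete, here is a deliberately minimal sketch of the sort of signal such a system might listen for: two toy prosodic features (loudness, plus a crude proxy for vocal pitch) feeding an invented “irritation” rule. This is not Microsoft’s method; every name and threshold below is an assumption for illustration.

```python
import math

def prosody_features(samples, sample_rate):
    """Two toy prosodic features from one mono audio frame.

    RMS energy approximates loudness; the zero-crossing rate is a crude
    proxy for pitch. Real systems use far richer features (pitch contours,
    MFCCs) and a trained classifier rather than fixed rules.
    """
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    crossings = sum(1 for a, b in zip(samples, samples[1:])
                    if (a < 0) != (b < 0))
    zcr = crossings * sample_rate / len(samples)  # crossings per second
    return rms, zcr

def sounds_irritated(samples, sample_rate, rms_thresh=0.3, zcr_thresh=600):
    """Hypothetical rule: loud AND high-pitched speech counts as irritated."""
    rms, zcr = prosody_features(samples, sample_rate)
    return rms > rms_thresh and zcr > zcr_thresh

# Synthetic check: a quiet low tone vs. a loud, higher-pitched one.
rate = 16_000
calm = [0.1 * math.sin(2 * math.pi * 120 * t / rate) for t in range(rate)]
tense = [0.6 * math.sin(2 * math.pi * 400 * t / rate) for t in range(rate)]
print(sounds_irritated(calm, rate))   # False
print(sounds_irritated(tense, rate))  # True
```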

But affective computing isn’t just about more polite gadgets. The ability for computers to make these kinds of judgements could have big implications for our mental and even physical health, says Hernandez. “If you’re driving in a car, if the car feels you’re drowsy or tired, maybe it can intervene to wake you up, or the voice might sound different. If you have a robot at home, for example taking care of elderly people, you want this emotional layer for the robot to understand when it is needed, when is an appropriate time or a good time to interact.”
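
The drowsy-driver example usually comes down to watching the eyes. The sketch below assumes a face tracker supplies six landmark points per eye and applies the well-known eye aspect ratio (EAR) heuristic: the ratio falls towards zero as the eye closes, and a long run of low-EAR frames triggers an alert. The thresholds and frame counts are illustrative assumptions, not any carmaker’s actual values.

```python
import math

def eye_aspect_ratio(eye):
    """Eye aspect ratio (EAR) over six (x, y) eye landmarks.

    EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|); it falls towards zero
    as the eyelid closes. The landmarks would come from a face tracker.
    """
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    p1, p2, p3, p4, p5, p6 = eye
    return (dist(p2, p6) + dist(p3, p5)) / (2.0 * dist(p1, p4))

class DrowsinessMonitor:
    """Flags drowsiness when the EAR stays low for too many frames in a row."""

    def __init__(self, ear_thresh=0.21, closed_frames=48):
        self.ear_thresh = ear_thresh        # below this, the eye "looks closed"
        self.closed_frames = closed_frames  # roughly 1.6 s at 30 frames/s
        self.run = 0                        # consecutive low-EAR frames

    def update(self, eye_landmarks):
        if eye_aspect_ratio(eye_landmarks) < self.ear_thresh:
            self.run += 1
        else:
            self.run = 0
        return self.run >= self.closed_frames  # True means: wake the driver

# A wide-open toy eye: EAR = (4 + 4) / (2 * 6) ≈ 0.67, so no alert.
monitor = DrowsinessMonitor()
open_eye = [(0, 0), (2, 2), (4, 2), (6, 0), (4, -2), (2, -2)]
print(monitor.update(open_eye))  # False
```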

Other applications relevant for Microsoft include gauging people’s emotional state when they’re using search engines, responding to emotions to make Xbox games more engaging, and designing workplace tools that minimise stress and improve productivity.

Emotional cues can come from something as simple as the way people type, or the way they handle a mouse. Microsoft makes pressure-sensitive keyboards and multitouch mice that sense the exact position of the fingers, and can identify signs of stress. In combination with other technology it’s possible to measure other biometric indicators such as heart rate.
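
As a rough sketch of how typing might betray stress, the toy code below takes hypothetical (timestamp, pressure) events from a pressure-sensitive keyboard, extracts timing and pressure features, and compares them against a per-user baseline. The feature choices and cutoffs are assumptions for illustration, not Microsoft’s algorithm.

```python
from statistics import mean, pstdev

def typing_features(key_events):
    """Keystroke-dynamics features from (timestamp_s, pressure) pairs,
    as a pressure-sensitive keyboard might report them.

    Returns the mean gap between keystrokes, the variability of those
    gaps, and the mean key pressure.
    """
    times = [t for t, _ in key_events]
    gaps = [b - a for a, b in zip(times, times[1:])]
    pressures = [p for _, p in key_events]
    return mean(gaps), pstdev(gaps), mean(pressures)

def looks_stressed(key_events, baseline_gap=0.25, baseline_pressure=0.5):
    """Hypothetical rule of thumb: noticeably faster and harder typing
    than the user's own baseline. (Gap variability is ignored here
    purely to keep the rule simple.)
    """
    gap, gap_sd, pressure = typing_features(key_events)
    return gap < 0.8 * baseline_gap and pressure > 1.2 * baseline_pressure

# A toy burst of fast, hard keystrokes: (seconds, normalised pressure).
burst = [(0.00, 0.8), (0.12, 0.9), (0.25, 0.85), (0.36, 0.9), (0.50, 0.8)]
print(looks_stressed(burst))  # True
```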

Risks and benefits

There are big possibilities – and big risks too. Hernandez is conscious of the potential for this kind of tech to be used irresponsibly, to track people’s emotions without their knowledge, for instance, or to spy on employees. And as with any form of AI, there’s a risk of bias if, for example, facial analysis tools are trained on a sample of people who don’t represent the full variety of the general population. (Grad students are the usual guinea pigs, Hernandez points out, and they tend to be disproportionately young and white.)

“Sometimes you do just want a computer that behaves like a computer”

“We’re starting to see more regulation, which I think is good and I think is needed,” says Hernandez. “Technology moves quickly and it can be difficult for policymakers to catch up – these are completely new questions.”

There are of course times when you want “a computer that behaves like a computer”, says Hernandez. But on the whole, emotion is underrated. “For a long time emotion has been seen as something unreliable that makes you irrational, that is not good for your decisions. But over time, different scientific studies have shown that emotions are at the core of our development. If we didn’t have that, it would really impair many facets of our lives. The reality is humans are emotional, so we have to understand that.”
