
By a guest author

In Interview: Peter McOwan

Meet roboticist Peter McOwan, Professor of Computer Science in the School of Electronic Engineering and Computer Science at Queen Mary, University of London. At Robotville, Peter will be showcasing software he has built that helps robots understand our emotions.
Professor Peter McOwan

How did you become involved in robotic research?

My interest started in artificial intelligence: understanding the brain using maths, then seeing how I could use that maths in a robot to help give it human-like abilities. There's still a long way to go!

Your software teaches robots to understand emotions, but how long until a robot shows preference, or ‘falls in love’ with someone?

Ah well, ‘what is love’? Love shows in the way you feel and act: it’s how you take in information about the object of your desire, and react to them. Arguably, then, love is really just a special type of brain information processing. Deep inside our brains, when we fall in love, the nerve cells change the way they connect and signal to each other. Perhaps in the future we will understand how this happens, and perhaps be able to build a machine that can perform this complex information processing task. At present our robot can read your expressions, and it can be programmed to respond to them, smiling back at you when you smile at it – awww, bless. The robot can even ‘know’ who you are and pull an especially sweet-looking face when you look and smile its way, but would that be love? Not quite yet, I expect.
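
For illustration only, here is a minimal sketch, in Python, of the kind of rule-based response Peter describes. The identity and expression labels and the response table are hypothetical stand-ins for the example, not the actual Queen Mary software:

```python
# Minimal sketch of a rule-based, socially aware response.
# The labels and table below are hypothetical stand-ins; a real robot
# would get them from its face-recognition and expression modules.

RESPONSES = {
    ("known_person", "smile"): "especially_sweet_face",
    ("known_person", "neutral"): "smile",
    ("stranger", "smile"): "smile_back",
    ("stranger", "neutral"): "neutral_face",
}

def choose_response(identity: str, expression: str) -> str:
    """Map who is looking at the robot, and how, to a facial action."""
    return RESPONSES.get((identity, expression), "neutral_face")

print(choose_response("known_person", "smile"))  # -> especially_sweet_face
```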

Would that ever lead them to reprioritise or even change their programming?

If falling in love changes the way a human acts, the way their brain cells interact, then mimicking this in a computer could cause the same type of effect. But robots are basically complicated mechanical and electronic tools. Would it be useful for your vacuum cleaner to recognise you? Possibly yes, so it can’t be stolen. But would it be useful for your vacuum cleaner to have a crush on you? Probably not.

In 2008, Nokia developed an anthropomimetic robot with an ‘imagination’. What further developments have there been in this field and are we any closer, or have we achieved a robot with a stream of consciousness? How does your emotion software sit within this field? Submitted by Luke

Our system detects the changes in human faces when we make expressions. Often we make expressions to signal our emotions to other humans; it’s a kind of useful social signaling code. Our robots then take the facial expression and react to it according to a set of rules we have programmed. At present it’s as simple as that: that’s all our robots need to do to be useful, ‘slightly socially aware’ tools.
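
As a rough illustration of that pipeline, here is a toy sketch that labels an expression from how far simple facial measurements have moved away from a neutral baseline. The feature names and thresholds are assumptions made for the example, not the features the real system uses:

```python
# Toy expression detector: classify an expression from how far simple
# facial measurements have moved from a neutral baseline. The feature
# names and thresholds here are illustrative assumptions only.

def classify_expression(features, baseline, threshold=0.2):
    """Return a coarse expression label from feature changes."""
    mouth_delta = features["mouth_corner_raise"] - baseline["mouth_corner_raise"]
    brow_delta = features["brow_lower"] - baseline["brow_lower"]
    if mouth_delta > threshold:
        return "smile"
    if brow_delta > threshold:
        return "frown"
    return "neutral"

neutral = {"mouth_corner_raise": 0.1, "brow_lower": 0.1}
current = {"mouth_corner_raise": 0.5, "brow_lower": 0.1}
print(classify_expression(current, neutral))  # -> smile
```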

Of course, looking at the outside of the robot, it’s easy for humans to believe there is a lot more going on: our brains love to process faces and social signals and build stories, making things seem more ‘human’ than they are. We all assume that there is a stream of consciousness going on in others when we observe them, and that imagination is in there too, playing a part. All of this comes from the electrochemical signals swirling in our brains, and no one really understands how that all works yet, so it would be difficult to build a robot to mimic it directly.

Of course, we can build a computer program that takes input patterns and looks for similar patterns among stored or previously learned ones, or simply creates new random patterns to output, and call that ‘imagination’; in some way it is doing a bit of what our imagination does. These sorts of experiments are useful because they let us test our understanding, and perhaps over time, as new facts about the brain emerge, we can refine them to make them more human. But it’s a long road ahead.
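
A crude sketch of such a program, under exactly the assumptions just described (recall by similarity, plus random generation), might look like this; it is illustrative only, not a claim about how brains or real robots work:

```python
import random

# Crude "imagination" module: either recall the stored pattern most
# similar to the input, or output a random perturbation of the input.
# Purely illustrative.

MEMORY = [
    [0.0, 1.0, 0.0],
    [1.0, 1.0, 0.0],
    [0.0, 0.0, 1.0],
]

def similarity(a, b):
    """Negative squared distance: higher means more alike."""
    return -sum((x - y) ** 2 for x, y in zip(a, b))

def imagine(pattern, create_new=False, noise=0.3):
    if create_new:
        # "Imagine" something novel by jittering the input pattern.
        return [x + random.uniform(-noise, noise) for x in pattern]
    # Otherwise recall the closest previously learned pattern.
    return max(MEMORY, key=lambda m: similarity(m, pattern))

print(imagine([0.1, 0.9, 0.1]))                   # recalls [0.0, 1.0, 0.0]
print(imagine([0.1, 0.9, 0.1], create_new=True))  # a novel random variation
```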

Apple’s recent iPhone update includes what some are claiming to be the first consumer robot. How relevant is this and do you think this idea of robots in our pockets will have a wider impact on robotic research and developments?

Robots = tools: if it’s useful for us to have robots in our pockets, or on our phones, even if they have limited abilities, they will be developed. What we can cram onto the processors in phones today is limited, but as the technology improves we will be able to do more. In the end, though, what will drive it is the need for that tool to help us do something that’s useful for us.

What are the ongoing cultural implications of your latest research and how do you think they will affect everybody’s day to day lives?

The view of robots differs in different parts of the world. In Japan, for example, where robots are big business, they are seen as a positive thing. In the West there are more mixed feelings, perhaps coloured by the frequent portrayal of robots as sinister baddies in movies and TV shows. As part of our research project LIREC, we are looking at people’s concerns, hopes and fears for the technology that we are developing, and that’s important. This way we can design robots so that they become beneficial technological aids to improve human life, tools to make things better.

One comment on “In Interview: Peter McOwan”

  1. Hi, I’ve just left a post on my blog about this article.

    I have an exhibition shortly in Old Broad Street exploring this same subject with vintage robots, emotions and the inanimate, and how we instill the inanimate with our projected emotions. I’d also like to explore who came first, the robot or the human! Website for images: https://vintagerobotimages.blogspot.com
