
By Roger Highfield

What to think about machines that think

Roger Highfield explores what to think about machines that think.

The relentless rise of computer power has been an undercurrent in many recent Science Museum exhibits and galleries, from our £16 million Information Age gallery, which the Queen opened last year, to our Robots 3D IMAX film, our exhibit about Ada Lovelace, visionary of the computer age, and the newly displayed Robot Bee.

The evolution of machines has prompted questions about the degree to which artificial intelligence can extend, mimic and then replace human skills ever since Lovelace, inspired by the work of Charles Babbage, envisaged machines that could represent abstract items such as symbols and musical notes.

She wrote to the pioneer of electricity, Michael Faraday, on 16 October 1844:

“I expect to bring the actions of the nervous and vital system within the domain of mathematical science.”

Cybernetic tortoise, c 1950. Invented by William Grey Walter. Credit: Science Museum/SSPL

Alan Turing, who came up with the concept of the universal machine and published the influential 1950 paper ‘Computing Machinery and Intelligence’, was fired up during a 1951 visit to the Science Museum by the lifelike behaviour of a cybernetic tortoise, which danced in front of a mirror.

The New York-based literary agent John Brockman has prompted a rich vein of responses to the issues raised by AI, now published in his book What to Think About Machines That Think: Today’s Leading Thinkers on the Age of Machine Intelligence (Harper Perennial) after first being aired in his Edge.org online salon earlier this year.

More than any other technology, AI forces us to confront the boundaries between machines and humans. Many articles have already meditated on how AI is proliferating; how, where once we electrified life, we are now ‘cognitizing’ it as computers become embedded invisibly everywhere; how digital technologies are rising in the workplace; whether these advances will displace more jobs than they create; and so on.

Responses that explore many of these themes were given to Brockman by a remarkably diverse range of 192 contributors, from artist, composer and producer Brian Eno to biological anthropologist Helen Fisher, Harvard cognitive scientist and linguist Steven Pinker, yours truly, and Nick Bostrom of Oxford University.

The volume opens with a brief introduction by Brockman, which reminds readers how the Cambridge cosmologist Stephen Hawking made headlines when he noted that ‘The development of full artificial intelligence could spell the end of the human race.’

There are more gloomy prognostications from the Astronomer Royal, Martin Rees, who writes in his response, ‘Organic Intelligence Has No Long-Term Future’, that ‘by any definition of thinking, the amount and intensity that’s done by organic human-type brains will be utterly swamped by the cerebrations of AI.’

‘Abstract thinking by biological brains has underpinned the emergence of all culture and science. But this activity—spanning tens of millennia at most—will be a brief precursor to the more powerful intellects of the inorganic post-human era,’ says Lord Rees.

‘Moreover, evolution on other worlds orbiting stars older than the Sun could have had a head start. If so, then aliens are likely to have long ago transitioned beyond the organic stage.’

Danny Hillis, Chairman of Applied Minds, Inc, points out that we have designed machines to serve the common good, but we aren’t perfect designers and they’ve developed goals of their own. Hillis calls the notion of smart machines capable of building even smarter machines ‘the most important design problem of all time’ and adds: ‘Like our biological children, our thinking machines will live beyond us. They need to surpass us too, and that requires designing into them the values that make us human. It’s a hard design problem, and it’s important that we get it right.’

Molly Crockett, of the University of Oxford, highlights one way to do this: ‘We’ve already built computers that can see, hear, and calculate better than we can. Creating machines that are better empathizers is a knottier problem—but achieving this feat could be essential to our survival.’

But there is a great deal of scepticism about how close we are to truly thinking machines. Seth Lloyd of MIT points out the tendency for hype: ‘back in the 1950s, the founders of the field of artificial intelligence predicted confidently that robotic maids would soon be tidying our rooms.’

Carlo Rovelli, theoretical physicist at the Centre de Physique Théorique at Aix-Marseille University, says two key questions get mixed up in the great AI debate. ‘Question 1 is how close to thinking are the machines we have built, or are going to build soon. The answer is easy: immensely far. The gap between our best computers and the brain of a child is the gap between a drop of water and the Pacific Ocean…Question 2 is whether building a thinking machine is possible at all. I have never really understood this question. Of course it is possible. Why shouldn’t it?’

The theme emerges again in the entry by Rodney Brooks, MIT emeritus professor and founder and chairman of Rethink Robotics, who writes: “People are getting confused and generalizing from performance to competence and grossly overestimating the real capabilities of machines today and in the next few decades… The fears of runaway AI systems either conquering humans or making them irrelevant are not even remotely well grounded… people are making category errors in fungibility of capabilities. These category errors are comparable to seeing more efficient internal combustion engines appearing and jumping to the conclusion that warp drives are just around the corner.”

There is also scepticism about the real dangers that thinking machines present. Daniel Dennett, co-director of the Center for Cognitive Studies at Tufts University, argues that there is nothing wrong with outsourcing the drudgery of thought to machines ‘so long as (1) we don’t delude ourselves, and (2) we somehow manage to keep our own cognitive skills from atrophying.’ And he warns that as we become ‘ever more dependent on these cognitive prostheses, we risk becoming helpless if they ever shut down.’

Dennett adds: “The Singularity – the fateful moment when AI surpasses its creators in intelligence and takes over the world – is a meme worth pondering. It has the earmarks of an urban legend: a certain scientific plausibility (‘Well, in principle I guess it’s possible!’) coupled with a deliciously shudder-inducing punch line (‘We’d be ruled by robots!’)…” But he concludes:

‘The real danger… is not machines that are more intelligent than we are usurping our role as captains of our destinies. The real danger is basically clueless machines being ceded authority far beyond their competence.’

And there are those who welcome the prospect of smart machines. Mark Pagel of Reading University says: ‘as we design machines that get better and better at thinking, they can be put to uses that will do us far more good than harm.’

Others are not so sure that machines are ever destined to surpass human intelligence. University of California, Berkeley, psychologist Alison Gopnik responds: “Learning has been at the centre of the new revival of AI. But the best learners in the universe, by far, are still human children.” Another reason to think we will keep one step ahead is, of course, that we can learn from machines as much as from people. Terrence Sejnowski of the Salk Institute, La Jolla, concludes: “as more and more cognitive appliances are devised, like chess-playing programs and recommender systems, humans will become smarter and more capable.”

Another thread running through many of the answers, including my own, is the question of what constitutes ‘artificial’ intelligence (a slippery concept) and how we draw the line between machine thought and human thought when we are increasingly augmenting ourselves with a range of technologies. Caltech theoretical physicist and cosmologist Sean Carroll puts it succinctly:

‘We are all machines that think, and the distinction between different types of machines is eroding.’

The transformative impact of the web, represented in the museum by the world’s first web server (Sir Tim Berners-Lee’s NeXT computer), has led to speculation about a ‘hive mind’. With the rise of machine learning and tools such as Siri and Google Translate, the answers to these questions are sought more urgently today than ever.