
By Roger Highfield

We need a Big Conversation about AI

Roger Highfield, Science Director, highlights how we need new ways to engage bigger audiences in discussions about the future of artificial intelligence.

We all need to talk about AI.

We need a big conversation, one that involves everyone, not just because this technology is changing our future but also because we need to inspire more people from diverse backgrounds to guide its development.

We need a national conversation because the most important conversations of all take place between experts and the public, not just among experts, whether ethicists, academics or coders.

But, as companies such as Samsung do more to engage with audiences and new industry initiatives emerge, the trigger for the biggest public conversations is usually a new science or technology making headlines, often when things go wrong.

Few people were interested in infectious proteins called prions until BSE, or ‘mad cow disease’ (a phrase coined by a colleague of mine at The Daily Telegraph), became a threat to the national herd and then, through vCJD, to people.

Louise Brown’s birth alerted the world to the possibilities of reproductive science, a discussion revived by the birth of Dolly the sheep, by mitochondrial transplants and by the gene-editing technique CRISPR.

Jumper produced with wool taken from Dolly the sheep, England, 1998. © Science Museum Group Collection.

Now we are in the throes of an AI revolution.

Never before has there been more need for public engagement with AI, even though the field stretches back many decades.

Algorithms are already making decisions about medical treatments, probation and mortgages, as well as running online searches and serving up customised adverts based on past activity.

Even though we are in the era of ‘narrow AI’, where algorithms can only outperform us on specific tasks, the technology has vast potential in diagnosis, personalised medicine, optimising energy use and transport, as shown in our exhibition Driverless: Who is in Control?, which was supported by MathWorks, DLG and PwC alongside Samsung – another sign of keen interest in this field.

Visitors in front of Robocar in ‘Driverless’ © Giulia Delprato, Science Museum Group

We are already entering uncharted waters, from the use of AI to create fake videos, known as deepfakes, to the recent discovery that facial recognition technology can fail to recognise the features of some people with very light or dark skin, to deep unease about autonomous weapons.

Though ‘narrow AI’ is a far cry from the general AI seen in movies such as Ex Machina or the sentient killing machines of Terminator, Hollywood does at least help the public understand the broader issues. I once talked to Michael Crichton about how scientists challenged the premise of Jurassic Park, and he replied that his movie had done far more than they had to get the public to discuss genetic technologies. He was correct, and there are efforts to harness this approach.

Mr DNA from Jurassic Park (1993) © Universal Pictures.

Yet the reality of AI is more nuanced than ‘narrow’ and ‘general’.

Data is power, yes. But deep learning can easily be fooled.
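
To see why, here is a minimal, self-contained sketch of the idea behind adversarial examples, with invented data and a simple linear classifier standing in for a deep network purely for brevity: in high dimensions, a perturbation smaller than the data’s own noise, pointed in just the right direction, is enough to flip a prediction.

```python
import numpy as np

# Toy sketch (synthetic data, illustrative numbers): a nudge smaller
# than the noise in any single measurement, applied across many
# dimensions in just the right direction, flips a classifier's answer.
rng = np.random.default_rng(0)
d, n = 200, 500  # dimensions, samples per class

# Two classes whose means differ only slightly in every dimension.
X = np.vstack([rng.normal(-0.3, 1.0, (n, d)),
               rng.normal(+0.3, 1.0, (n, d))])
y = np.concatenate([np.zeros(n), np.ones(n)])

# Fit logistic regression by plain gradient descent.
w = np.zeros(d)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))   # predicted probabilities
    w -= 0.5 * X.T @ (p - y) / len(y)    # gradient step

predict = lambda v: int(v @ w > 0)

x = rng.normal(+0.3, 1.0, d)      # a fresh, clearly class-1 example
x_adv = x - 0.5 * np.sign(w)      # per-axis nudge: half the noise level

print(predict(x), predict(x_adv))  # typically prints: 1 0
```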

Data are also worthless unless you gather the right kind. For example, we tend to take only snapshots of the state of the human body, even though blood pressure and other measures are subject to circadian rhythms, long-term changes in gene use and so on.
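
To make the snapshot problem concrete, here is a toy illustration with invented figures: a measure such as blood pressure swings over the day, so two one-off readings of the same person can tell quite different stories.

```python
import numpy as np

# Illustrative only: systolic blood pressure modelled as a baseline
# plus a circadian swing (all figures invented for this sketch).
hours = np.arange(0, 24, 0.25)
baseline, swing = 115.0, 12.0  # mmHg
bp = baseline + swing * np.sin(2 * np.pi * (hours - 9) / 24)

# Two snapshots of the same healthy person, hours apart.
for t in (4.0, 15.0):  # 4 am vs 3 pm
    reading = bp[np.searchsorted(hours, t)]
    print(f"snapshot at {t:>4.1f} h: {reading:.0f} mmHg")

# The underlying physiology is identical; only the sampling time
# differs. A time series, not a single reading, captures the state.
print(f"daily range: {bp.min():.0f}-{bp.max():.0f} mmHg")
```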

There are already examples of how bias has crept into automated systems, for instance in decision-making software used by US hospitals.

Big data is best used alongside deep understanding. Yet too few students are trained in the theory of dynamical systems needed to describe biology, for example. There are also profound limits to what digital computers can do with some complex systems.
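
As one illustration of that limit, take the logistic map, among the simplest dynamical systems used to model biological populations. In its chaotic regime, two trajectories that begin a trillionth apart disagree completely within a few dozen steps, so no finite-precision computer can forecast such a system far ahead.

```python
# The logistic map x -> r*x*(1-x) is chaotic at r = 4: a difference
# of one part in a trillion in the starting state is amplified until
# the two trajectories bear no resemblance to each other.
r = 4.0
x1, x2 = 0.2, 0.2 + 1e-12  # almost identical initial states

for step in range(1, 61):
    x1 = r * x1 * (1 - x1)
    x2 = r * x2 * (1 - x2)
    if step % 15 == 0:
        print(f"step {step:2d}: {x1:.6f} vs {x2:.6f}")
# Around step 45 the trajectories have fully diverged.
```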

Processor Unit, with integral twin disk drives and monitor, Apple, Cupertino, California, 1983.

The good news is that there are many laudable attempts to start a Big Conversation about AI.

The Government has set up the Centre for Data Ethics and Innovation, which will look at issues such as how regulators should treat targeted social media advertising if there’s evidence that it fuels unhealthy behaviours, and is investing in digital skills.

There is also an AI Council under Tabitha Goldstaub, co-founder of CognitionX, which aims to overcome barriers to AI adoption in society.

Dame Wendy Hall, Regius Professor of Computer Science at the University of Southampton and AI Skills Champion, stressed the Council “will seek to improve the understanding of AI across the UK to encourage diversity across the sector.”

The Ada Lovelace Institute has a mission to ensure that data and AI work for people and society. UCL, where I hold a visiting professorship, recently launched AI for People and Planet at the Science Museum, articulating the belief that innovation in the sector should ultimately have a positive impact on the planet. This is a shared focus of the 100 researchers who will work at the UCL Centre for Artificial Intelligence in Holborn, directed by Prof David Barber.

The museum holds the collections of the environmental scientist James Lovelock who, in his latest book, Novacene: The Coming Age of Hyperintelligence, argues that machines will evolve to outperform us by the end of this century but, reassuringly, will still need humans just as we need plants.

James Lovelock inside the Science Museum’s ‘Unlocking Lovelock: Scientist, Inventor, Maverick’ exhibition.

But, as Geoff Mulgan, CEO of NESTA, has pointed out, ‘there has been too much talk about interesting but irrelevant future questions, and not enough about harder current ones.’

“Where there has been serious impact on the ethics of AI, it has mainly come from journalism (like the exposure of Cambridge Analytica), activism (like the many moves against autonomous weapons), bureaucrats (like GDPR), or detailed academic analysis of abuses. Not much has been achieved by the professional AI ethicists.”

The Science Museum has played its part in engaging the public with AI: through its Driverless exhibition, by celebrating the life of the father of AI, Alan Turing, by hosting conversations about AI featuring Brian Cox, Eric Schmidt, technical advisor of Alphabet Inc, the Government’s Chief Scientific Advisor, Bill Gates and will.i.am, and by helping to launch the first online AI test of intelligence.

will.i.am and Bill Gates at the Evening Standard’s Progress Conversation at the Science Museum.

At our Driverless Lates, Science Museum visitors joined Samsung in a conversation on how AI is shaping our future and were encouraged to question how they felt about AI technology through the eyes of a driverless car in a VR experience.

Visitors experiencing driverless VR at the Science Museum’s August Lates.

Recently, I participated in a discussion of public attitudes to AI that included Minister for Digital, Matt Warman, Teg Dosanjh, Director of Connected Living for Samsung UK and Ireland, Dr. Bhavagaya Bakshi, GP and co-founder at C the Signs, and Hannah Fry, SMG Trustee, AI expert and author of Hello World.

Hannah Fry, Associate Professor in the mathematics of cities at University College London, said: “The changes that are coming are going to affect all of us – for better or worse – and we all deserve a say.”

Minister for Digital, Matt Warman, added: “It is important that the public have faith in the technology so that we can explore its full potential.”

Teg Dosanjh, Director of Connected Living for Samsung UK and Ireland, said: “We, the tech industry as a whole, have not done a good job at making AI understandable to everyday people. People feel disconnected and unable to influence the technology that will shape the way they live in the future. This needs to change if AI is going to become a technology that benefits human beings and helps everyone in society.”

Visitors taking part in the Fair Future survey at the Science Museum’s August Lates.

Samsung’s Fair Future report, based on the views of 5,250 people, reveals that just over half (51%) of people feel AI will have a positive impact on society as a whole, with only 16% feeling negative about our future with AI.

Almost four in ten (39%) people feel that AI will hold some form of bias, and this concern was higher (43%) among those with a closer interest in AI.

Almost half (49%) believe that any bias in AI would be unintentional, while far fewer (20%) felt it would be programmed in deliberately. Others felt that the AI itself would form biases, with 28% saying these could develop on their own.

© Samsung for Fair Future.

More than a third (36%) of adults currently feel left out of the AI conversation. It is even worse for teenagers, who are pessimistic about their ability to influence how AI is used: well over half (58%) say they will have no influence on how the technology develops.

People feel that, as AI is likely to be used to make decisions with an ethical component, everyone should have a say in how it develops. Over three quarters (76%) hold this view, and the feeling is even stronger among those who say they trust technology companies (86%) and those who believe AI will have a positive impact on society (84%).

In short, there are many admirable efforts underway, but there is also a pressing need for new ways to engage everyone and to help build trust in this remarkable new technology. We all need a say in how AI is applied.