
By a guest author

Could a robot learn to learn?

The quest for artificial intelligence is gathering pace: research groups in universities and industry worldwide are making huge advances in the development of sophisticated neural networks – inspired by the architecture of the brain – that may one day give computers the capacity for independent thought.

Recent developments in machine learning, and the proliferation of smart devices interconnected via a global high-speed network, are already enabling far greater interaction between machines and information – the phenomenon many call big data. Researchers are aiming to build systems that learn and adapt from experience, reducing the need for human programmers to laboriously develop bespoke code for each new application.

One of the key engineering challenges is to find effective ways to draw together the vast quantities of data embedded in the internet so that machines can learn to explore and exploit information – an issue raised in a recent IET/Royal Academy of Engineering report Connecting Data.

Breaking the code

As the world transitions to storing information in digital form, subject to digital control systems, demand is growing for programs that can automate repetitive tasks. One such task is checking code for mistakes, and engineers like Professor Dino Distefano have an answer. Co-founder and CEO of Monoidics and a software engineer at Facebook, he will tell an audience at Ingenia Live (an event held by the Royal Academy of Engineering’s Ingenia magazine) how he uses a static analyser to verify code, applying advanced logic that can keep pace with the latest developments. At the same event, Dr Marily Nika, Engineering Program Manager at Google, will explain the impact coding technology is having on our lives, and how close real-life computing is to the capabilities seen in science fiction. Dr Danny Tarlow of the Machine Intelligence and Perception group at Microsoft Research will look at recent research that aims to make everyone a programmer – without them even realising it!
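To give a flavour of what a static analyser does – parsing source code and reasoning about it without ever running it – here is a deliberately simple sketch in Python. It is our own toy illustration, not Professor Distefano’s technology, and it only spots two obvious risky patterns using Python’s standard ast module:

    import ast

    # Toy source code to analyse. It is only parsed, never executed.
    SOURCE = """
    x = eval(user_input)
    y = 1 / 0
    """

    class RiskyPatternFinder(ast.NodeVisitor):
        """Walks the syntax tree and reports two simple risky patterns."""

        def visit_Call(self, node):
            # Flag calls to eval(), which can execute arbitrary code.
            if isinstance(node.func, ast.Name) and node.func.id == "eval":
                print(f"line {node.lineno}: call to eval() is unsafe")
            self.generic_visit(node)

        def visit_BinOp(self, node):
            # Flag a literal division by zero.
            if (isinstance(node.op, ast.Div)
                    and isinstance(node.right, ast.Constant)
                    and node.right.value == 0):
                print(f"line {node.lineno}: division by zero")
            self.generic_visit(node)

    RiskyPatternFinder().visit(ast.parse(SOURCE))

Industrial analysers of the kind Professor Distefano describes go much further, using mathematical logic to prove properties of code across millions of lines.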

Gaming a solution

Earlier this year, the British company DeepMind made headlines around the world with AlphaGo, the first computer program to beat one of the world’s top professional players – Lee Sedol – at the fiendishly complicated game of Go. With more board configurations than there are atoms in the Universe, this ancient Chinese game cannot be mastered simply by brute-force algorithms that try out millions of options to determine the next move. Instead, AlphaGo uses deep neural networks to mimic expert human players, and further improves its own performance by learning from games played against itself.
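The scale of that comparison is easy to sanity-check with a few lines of Python. Assuming the commonly quoted estimate of around 10^80 atoms in the observable Universe, and counting 3^361 board configurations as an upper bound (each of Go’s 19 x 19 = 361 points can be empty, black or white; this ignores Go’s legality rules):

    import math

    configurations = 3 ** 361   # upper bound on Go board configurations
    atoms = 10 ** 80            # commonly quoted estimate for the Universe

    print(f"configurations ~ 10^{math.floor(math.log10(configurations))}")  # ~10^172
    print(configurations > atoms)  # True, by over 90 orders of magnitude

At roughly 10^172 configurations, searching them all is out of the question – hence AlphaGo’s reliance on learned judgement rather than brute force.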

DeepMind founder Dr Demis Hassabis spoke last month about his work, and its implications for the future of AI, at the Royal Academy of Engineering. The event was introduced by Roger Highfield of the Science Museum, and a recording is available to watch online.

Copying the human brain

The Human Brain Project is a ten-year European Commission-funded flagship programme that aims to put in place a cutting-edge, ICT-based scientific Research Infrastructure for brain research, cognitive neuroscience and brain-inspired computing. With 116 partner universities and organisations across Europe, it aims to accelerate scientific understanding of the human brain, make advances in defining and diagnosing brain disorders, and develop new brain-like technologies.

For example, the University of Manchester’s contribution to the project, led by Professor Steve Furber FREng, its ICL Professor of Computer Engineering, is SpiNNaker (Spiking Neural Network Architecture), a computing platform made up of 500,000 microprocessors that emulates the way brain neurons fire signals in real time. SpiNNaker can be used to accurately model areas of the brain and to test new hypotheses about how the brain might work. Because it runs at the same speed as the biological brain, it can also control robotic systems, providing ‘embodiment’ for the brain models. This biological approach to robot control is very different from the algorithmic systems more commonly used in robotics.
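To illustrate the kind of behaviour SpiNNaker emulates – though on half a million processors rather than a laptop – here is a minimal leaky integrate-and-fire neuron in Python. The parameters below are textbook illustrations chosen for this sketch, not values taken from the SpiNNaker platform:

    dt = 1.0           # time step, in milliseconds
    tau = 20.0         # membrane time constant (ms)
    v_rest = -65.0     # resting membrane potential (mV)
    v_thresh = -50.0   # firing threshold (mV)
    v_reset = -70.0    # potential immediately after a spike (mV)
    current = 20.0     # constant input drive (mV equivalent)

    v = v_rest
    spikes = []
    for t in range(200):                      # simulate 200 ms
        # The potential leaks back toward rest while integrating the input.
        v += dt * (-(v - v_rest) + current) / tau
        if v >= v_thresh:                     # threshold crossed:
            spikes.append(t)                  # record a spike...
            v = v_reset                       # ...and reset the neuron

    print(f"{len(spikes)} spikes in 200 ms; first spike at t = {spikes[0]} ms")

Each SpiNNaker core simulates many such neurons at once and exchanges their spikes as small packets over the machine’s network, which is what lets the platform keep pace with biological real time.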

Last week saw the launch of CompBioMed at University College London, an EU project that aims to use high-performance computing for biological modelling, making sense of the big data arising from genomics and other fields.

The Science Museum is currently showing an exhibition on big data – Our Lives in Data – and will explore our fascination with humanoid robots over the past 500 years in Robots, a new exhibition opening in February 2017.

Beverley Parkin is Director of Policy and External Affairs at the Royal Academy of Engineering.