
By Roger Highfield

Supercomputer bid to create the first truly personalised medicine


As the Science Museum prepares a new exhibition, Our Lives in Data, Roger Highfield describes a milestone supercomputer simulation, which ended today, that provides a glimpse of the future of medicine.

A project that aims to realise a long-held dream of personalised medicine – designing a treatment so it is customised for an individual patient – has used one of the biggest supercomputers on the planet to generate vast amounts of data on how drugs work.

Supercomputers usually tackle many different tasks simultaneously, but for this project to work the University College London team had to monopolise SuperMUC, a cluster of two groups of computers – around 250,000 compute cores in all – capable of more than 6.8 petaflops (a petaflop is ten to the power of 15 floating-point operations per second), run by the Leibniz Supercomputing Centre (Leibniz-Rechenzentrum, LRZ) near Munich, Germany.

“This is one of the biggest supercomputers in Europe – more than double the size of anything in the UK – and we need this extraordinary computational power to design personalised medicines in the sort of time that a doctor would take between diagnosing a disease and selecting an off-the-shelf medicine for a patient,” explained the head of the team, Prof Peter Coveney of UCL’s Chemistry Department.

“We were given the whole machine at the start of a maintenance shutdown and our work took place over a 37-hour period, starting at 5 p.m. UK time last Saturday 11 June and ending at 6 a.m. this morning, Monday 13 June. We think it consumed 8.6 million core hours – equivalent to something like a quarter of a million people beavering away on everyday personal computers for 37 hours – and generated about 5 terabytes of data (a terabyte is two to the power of forty bytes).”
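As a rough sanity check on those figures, here is a back-of-the-envelope sketch in Python; it assumes each “everyday personal computer” contributes a single core, which the quote does not spell out:

```python
# Back-of-the-envelope check of the figures quoted above.
# Assumption: each "everyday personal computer" contributes one core.

core_hours = 8.6e6        # total core hours consumed on SuperMUC
run_hours = 37            # wall-clock duration of the run

equivalent_pcs = core_hours / run_hours
print(f"Equivalent single-core PCs: {equivalent_pcs:,.0f}")  # ~232,000

terabyte = 2 ** 40        # one (binary) terabyte in bytes
data_bytes = 5 * terabyte
print(f"Data generated: {data_bytes:,} bytes (about 5 TB)")
```

The division gives roughly 232,000 machines, which squares with Prof Coveney’s “quarter of a million”.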

SuperMUC Phase 1 and Phase 2 in the computer room. Credit: Leibniz Supercomputing Centre

“I’ve likened the entire effort to preparing a battle plan and an order of battle, as a general would before engagement. During the effort, some bits of the campaign unravelled, but with new jobs coming in to take their place the machine was largely occupied all the time.

“We achieved all our objectives and more because the jobs in the workflow ran much faster than anticipated. Preliminary analysis indicates that the simulations will provide insight into how the two most common mutations responsible for acquired resistance to major breast cancer drugs (such as tamoxifen and raloxifene) interfere with drug binding.”

Even by the standards of supercomputer scientists, this was a huge job (technically, it is called a workflow, meaning a cascading series of many parallel and serial computations) and so ambitious that Prof Coveney has been trying to encourage his colleagues by offering a Methuselah of Champagne as an incentive. He thinks that a number-crunching job of this type and magnitude “may well be unprecedented anywhere”.
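To make the term concrete, here is a minimal sketch of a workflow in that sense – serial stages that each fan out into many parallel jobs. This is not the team’s actual pipeline; every function name and the three-stage structure are invented for illustration:

```python
# Minimal sketch of a "workflow": serial stages that each fan out into
# many parallel jobs. All names here are illustrative inventions.
from concurrent.futures import ProcessPoolExecutor

def prepare(drug):
    # Hypothetical setup step: build a simulation system for one drug.
    return f"system({drug})"

def simulate_replica(system, replica):
    # Hypothetical parallel job: one independent simulation replica.
    return (len(system) * (replica + 1)) % 100  # stand-in for a result

def analyse(results):
    # Hypothetical serial step: reduce the replicas to a single score.
    return sum(results) / len(results)

if __name__ == "__main__":
    drugs = [f"drug_{i}" for i in range(50)]
    with ProcessPoolExecutor() as pool:
        # Parallel stage: prepare all 50 systems at once.
        systems = list(pool.map(prepare, drugs))
        scores = []
        for system in systems:
            # Parallel stage: run 25 replicas for this system.
            replicas = pool.map(simulate_replica, [system] * 25, range(25))
            # Serial stage: analyse the replicas before moving on.
            scores.append(analyse(list(replicas)))
    print(f"Scored {len(scores)} drugs")
```

The point of the structure is that a scheduler can keep the machine saturated: when one branch of the campaign “unravels”, independent jobs from other branches fill the idle cores.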

The results are now being analysed. The purpose of this project is to demonstrate that scientists can work out how a candidate drug will act on a target in the body – a protein – in a matter of a few hours. “Fifty drugs and candidate drugs were studied to determine how they bind with protein targets in a range of disease cases, in order to rank their potency for drug development and for drug selection in clinical decision making,” he explained.
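The end product of such a campaign is, in essence, a ranked list. A minimal sketch of that final ranking step, assuming each simulation has already yielded an estimated binding free energy (more negative means tighter binding) together with an uncertainty – the candidate names and numbers below are invented:

```python
# Ranking candidate drugs by estimated binding free energy (kcal/mol).
# More negative = stronger predicted binding. All values are invented.
candidates = {
    "candidate_A": (-9.8, 0.4),   # (estimate, uncertainty)
    "candidate_B": (-7.2, 0.3),
    "candidate_C": (-11.1, 0.6),
}

# Sort by the free-energy estimate, strongest binder first.
ranked = sorted(candidates.items(), key=lambda item: item[1][0])
for name, (dg, err) in ranked:
    print(f"{name}: ΔG = {dg:+.1f} ± {err:.1f} kcal/mol")
```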

If it works, it would mark an important advance for personalised medicine – that is, medicine designed with an individual patient in mind so that it works effectively and without side effects. Indeed, “it is taking far too long and costing far too much to discover new drugs by conventional experimental means; computer-based methods are the way forward,” said Prof Coveney.

“Now that we can read the genetic makeup of a person relatively cheaply we should be able to develop and target many more drugs to specific individuals,” he explained. “Most existing drugs are not effective against large swathes of the general population. In such clinical contexts, decisions regarding matching a drug to a patient will need to be taken within hours to effectively treat individuals and that is what we are trying to achieve.”

The UCL team is not just ranking existing and future possible drugs but also generating leads for new and improved ones. Science advances with new data, but understanding is needed too, and the insights into how drugs work that will arise from the breathtaking scale of Prof Coveney’s simulations are only possible because his team is modelling, in atomic detail, how drug molecules actually behave in the body.

Scientists have been talking about turning genetic data into personalised treatment for a couple of decades, and about using computers to design drugs for even longer. Now, at long last, it looks as if software and hardware have become sophisticated enough to do this in hours rather than years and, as computers grow ever more powerful, the technique will only become more useful.

This project is the product of a long-standing collaboration between Prof Coveney (Director of UCL’s Centre for Computational Science) and the Leibniz-Rechenzentrum, where his colleague Prof Dieter Kranzlmüller is deputy director.

Much of this work has been supported by European Union funding. It is part of an EU-funded project called ComPat, a €3 million effort to support emerging high-performance computing technologies; it is also a forerunner of an EU Centre of Excellence, run by Prof Coveney, who was recently awarded €5 million.

Roger Highfield is the co-author, with Prof Ed Dougherty and Prof Peter Coveney, of “Big Data Needs Big Theory Too”, presented at the Solvay Symposium, Brussels, Belgium, 19–21 April 2016, and described in The Conversation.