Nvidia's AI Supercomputer For Medical Research And Drug Discovery
Written by Nikos Vaggalis   
Wednesday, 18 November 2020

Last month Nvidia unveiled plans to build a supercomputer intended for AI research in health care. This prompts us to look at AI's potential role in health care and how it is already being used.

Update: Nvidia kindly pointed out some inaccuracies in our original report, now rectified. The main substantive change is that Nvidia is currently building two computers - an Arm-based one in Cambridge and the Cambridge-1 supercomputer, which will be located in the Cambridge-London tech corridor and is expected to be in place by Q1 2021.

Two things have been made clear by the pandemic: firstly, that a strong, national public health system, free to all, is imperative; and secondly, that spending on advancing medical and health care research should be a top priority for nation states and the private sector alike.

For the latter, the focus is turning to AI. For instance, had the AI already in place been more advanced, a Covid vaccine might have taken far less time to develop, saving thousands of lives.

Nvidia, which in September announced that it was paying $40 billion for the UK chip designer Arm, based in Cambridge, is making just such an investment - building a $52 million (£40 million) Arm-based supercomputer in Cambridge. On top of this Nvidia is building the "Cambridge-1" supercomputer, which will be located in the Cambridge-London tech corridor and is expected to be in place by Q1 2021. Does the naming scheme reveal that there is also going to be a "Cambridge-2" and so on? It could well be the case, given that the acquisition is said to have been driven by Nvidia wanting to design an AI-focused data-center platform.


The new supercomputer will be accessible by researchers and institutions - initially a few London hospitals, King’s College, AstraZeneca, GlaxoSmithKline and Oxford Nanopore - for healthcare applications and drug discovery, including Covid-related research.

The current state of AI on medical research

First let's make clear that AI is not fully autonomous, nor can it be trusted to make decisions by itself, overriding human doctors. As such it won't wipe out radiology and other medical professions - for the time being, that is. Why is that?

Take for example SkinVision, a mobile app that decides whether a mole is malignant from a photograph of it. An incorrect diagnosis that misinterprets a malignant mole as benign could have dire consequences, but the opposite error is not harmless either. It would cause unwarranted stress to users and turn them into an army of pseudo-patients knocking at the door of their already burned-out practitioner.

For such an AI algorithm to be successful, it's of foremost importance that it can replicate the doctor's actions. In other words, it has to be able to act as a doctor would, leveraging a doctor's knowledge:

The algorithm checks for irregularities in color, texture, and shape of the lesion. It indicates which skin spots should be tracked over time and gives it a low, medium or high-risk indication within 30 seconds

But since we still cannot place 100% trust in the algorithm itself, the doctor's intervention remains necessary to manually verify the findings. As such the algorithm complements the doctor, it does not replace him:

Our dermatologists perform continuous quality control of the assessments, by evaluating the output of the risk assessment with their professional experience. All high-risk photos receive additional personal advice from our doctors on next steps to take within two working days stating whether they should rest assured, continue monitoring the lesion or seek immediate medical attention

But why is it necessary for the algorithm to be blindly trusted, for the diagnosis to be autonomous?

Across the globe, health systems are facing the problem of growing populations, increasing occurrence of skin cancer and a squeeze on resources. We see technology such as our own as becoming ever more integrated within the health system, to both ensure that those who need treatment are made aware of it and that those who have an unfounded concern do not take up valuable time and resources. This integration will not only save money but will be vital in bringing down the mortality rate due to earlier diagnosis and will help with the further expansion of the specialism

Skeptics of the 4th Industrial Revolution would say that machines will take doctors' jobs. But it's not about that; in reality it's about automating the diagnostic process so that it can be done quicker as well as reach farther. Or, as Derek Lowe, a longtime drug discovery researcher, tells the New York Times:

It is not that machines are going to replace chemists. It’s that the chemists who use machines will replace those that don’t.

Then again, why not also let AI turn the tables on pandemic-era physician burnout?

Nearly half of the world’s 10 million physicians had symptoms of burnout, including emotional exhaustion, interpersonal disengagement, and a low sense of personal accomplishment. It continues to negatively influence the quality of care and shorten the lifetime a physician is able to practice medicine.

This is a dire situation which can be somewhat remedied by employing AI:

we expect artificial intelligence to significantly reduce the administrative burden and improve medical professionals' work experience in the future.

Then again, "can we trust a decision if we don’t understand the factors that informed it?"

For a doctor to reach a diagnostic conclusion, he has to take several factors into account. For example a typical clinical report would include information like: 

there is an irregular mass with microcalcification in the upper outer quadrant of the breast. Findings are consistent with malignancy

Irregular mass and microcalcification are the factors that directed the doctor to the conclusion that the findings are malignant. By analogy, in a neural network, can we observe which factors, and to what extent (weight), are being taken into account for the network to reach its own conclusions?

The most common approach to explaining what a model is looking for when making a decision is to generate visualizations of the specific features that it detects. Each neuron in a neural network learns a specific filter, a feature detector that responds to a certain pattern in the images. Hallucinations are visualizations of the features themselves, what each neuron in the model is looking for.

But there is a huge problem when it comes to offering explanations with these methods; you still need to interpret the pictures that come out. But it isn’t really obvious what the key elements of the decisions are.

That is, despite all efforts it is still too difficult to understand how a machine thinks, let alone validate its reasoning.
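The interpretability idea above can be illustrated in miniature. The sketch below uses a toy linear classifier, where the gradient of the score with respect to each input pixel is simply the corresponding weight, so the "saliency map" can be read off directly; all names and sizes are invented for illustration, and real systems apply the same idea to deep networks via backpropagation.

```python
import numpy as np

# Toy "saliency map": for a linear scoring model s = w.x, the gradient
# of the score with respect to each input pixel is just w, so |w| tells
# us which pixels drive the decision. Deep-network explanations
# generalise this via backpropagation. Everything here is illustrative.
rng = np.random.default_rng(0)
pixels = rng.random(16)           # a flattened 4x4 "image"
weights = rng.normal(size=16)     # learned weights of a linear classifier

score = float(weights @ pixels)   # the model's output for this image
saliency = np.abs(weights)        # gradient magnitude per pixel

# The most "looked at" pixel is the one with the largest |gradient|
top_pixel = int(np.argmax(saliency))
```

Even here the interpretation problem the article describes remains: the map says which pixels mattered, not why they mattered.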

With that caveat in place, let's take a look at some of the current applications of AI in healthcare.

Sorting out viruses with machine learning

Researchers from Japan have demonstrated a new system for single-virion identification of common respiratory pathogens using a machine learning algorithm trained on changes in current across silicon nanopores. This work may lead to fast and accurate screening tests for diseases like COVID-19 and influenza.
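The approach can be sketched in miniature: each particle passing through the pore produces a transient dip in ionic current, and summary features of that dip (depth, duration) feed a classifier. The feature choices, thresholds, and class centroids below are invented for illustration and do not reflect the researchers' actual model.

```python
import numpy as np

# Toy single-virion classifier. A particle passing the pore causes a
# transient dip in ionic current; we summarise the dip with two
# features (mean depth, dwell time) and assign the nearest class
# centroid. All values here are invented for illustration.
def features(trace, baseline=1.0):
    dip = baseline - trace
    in_event = dip > 0.1                     # samples inside the blockade
    depth = dip[in_event].mean() if in_event.any() else 0.0
    dwell = float(in_event.sum())            # event duration in samples
    return np.array([depth, dwell])

# Hypothetical "trained" centroids in (depth, dwell) feature space
centroids = {
    "influenza":   np.array([0.30, 40.0]),
    "coronavirus": np.array([0.55, 90.0]),
}

def classify(trace):
    f = features(trace)
    return min(centroids, key=lambda name: np.linalg.norm(f - centroids[name]))

# A synthetic trace: baseline current of 1.0 with a deep, long blockade
trace = np.concatenate([np.ones(50), np.full(90, 0.45), np.ones(50)])
```

A real pipeline would of course learn its classifier from thousands of labelled translocation events rather than hand-placed centroids.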

Artificial intelligence model detects asymptomatic Covid-19 infections through cellphone-recorded coughs

The model distinguishes asymptomatic people infected with Covid-19 from healthy individuals through forced-cough recordings, which people voluntarily submitted through web browsers and devices such as cellphones and laptops.

IBM and Pfizer claim AI can predict Alzheimer’s onset with 71% accuracy

Pfizer and IBM researchers claim to have developed a machine learning technique that can predict Alzheimer’s disease years before symptoms develop. By analyzing small samples of language data obtained from clinical verbal tests, the team says their approach achieved 71% accuracy when tested against a group of cognitively healthy people.

The role of artificial intelligence in cancer treatment

Since rituximab was approved by the FDA in 1997, more than 40 targeted therapies have come to market, but they cover only about a dozen targets. The number of cancer-causing genes runs into the thousands, so there is a shortage of medicines. Scientists have put forward the idea of "developing new treatment potentials with old medicines". For example, the failed chemotherapy drug azidothymidine was repurposed to treat HIV infection. Commonly used drugs may likewise act on cancer targets.

Matching a drug to a target usually requires repeated experimental testing. Thanks to the increase in computing power and machine learning, a new possibility has emerged for solving this problem: replacing the natural scientific process of random discovery with algorithms. Using such algorithms, the platform can match more than 1,400 FDA-approved drugs against about 10,000 potential targets.
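One way such algorithmic matching could look, sketched under stated assumptions: represent each drug and each target as a feature vector (in practice derived from chemical fingerprints and protein descriptors) and rank pairs by similarity. The array sizes below are scaled down from the ~1,400 drugs and ~10,000 targets mentioned above, and cosine similarity stands in for whatever scoring the actual platform uses.

```python
import numpy as np

# Hypothetical drug-target matching: every drug and every target gets a
# 64-dimensional feature vector (random here; real features would come
# from chemistry and protein structure), and we score all pairs at once.
rng = np.random.default_rng(1)
drugs = rng.random((140, 64))     # stand-in for ~1,400 FDA-approved drugs
targets = rng.random((1000, 64))  # stand-in for ~10,000 potential targets

# Cosine similarity between every drug and every target
d = drugs / np.linalg.norm(drugs, axis=1, keepdims=True)
t = targets / np.linalg.norm(targets, axis=1, keepdims=True)
scores = d @ t.T                  # shape (140, 1000)

# For one drug, the top-5 candidate targets to send to the lab
top5 = np.argsort(scores[0])[::-1][:5]
```

The point of the sketch is the workflow, not the scores: a cheap all-pairs ranking narrows thousands of possible experiments down to a short list worth testing.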

Machine learning uncovers potential new TB drugs

Computational method for screening drug compounds can help predict which ones will work best against tuberculosis or other diseases.

Machine learning model helps characterize compounds for drug discovery

Purdue University innovators have created a new method of applying machine learning concepts to the tandem mass spectrometry process to improve the flow of information in the development of new drugs.

University of Minnesota develops AI algorithm to analyze chest X-rays for COVID-19

When a patient arrives in the emergency department with suspected COVID-19 symptoms, clinicians order a chest X-ray as part of standard protocol. The algorithm automatically evaluates the X-ray as soon as the image is taken. If the algorithm recognizes patterns associated with COVID-19 in the chest X-ray — within seconds — the care team can see within Epic that the patient likely has the virus.

Using reinforcement learning to personalize AI-accelerated MRI scans

A method leveraging reinforcement learning to improve AI-accelerated magnetic resonance imaging (MRI) scans. Experiments using the fastMRI data set created by NYU Langone show that our models significantly reduce reconstruction errors by dynamically adjusting the sequence of k-space measurements, a process known as active MRI acquisition.

And for those wanting to pursue the matter from an IT angle, there's the free Machine Learning for Healthcare course from MIT to get you started in the field.


More Information

NVIDIA Building UK’s Most Powerful Supercomputer, Dedicated to AI Research in Healthcare

NVIDIA to Acquire Arm for $40 Billion, Creating World’s Premier Computing Company for the Age of AI

Machine Learning for Healthcare

Related Articles

IBM, Slack, Watson and the Era of Cognitive Computing

$200 Million Investment In IBM Watson IoT

 

To be informed about new articles on I Programmer, sign up for our weekly newsletter, subscribe to the RSS feed and follow us on Twitter, Facebook or Linkedin.



Last Updated ( Monday, 23 November 2020 )