TCAV Explains How AI Reaches A Decision
Written by Nikos Vaggalis   
Wednesday, 13 February 2019

Why is it important to understand the inner workings of a neural network? Read on to find out and to be introduced to Google's machine-to-human translator tool, TCAV (Testing with Concept Activation Vectors) 

As AI becomes more and more integrated into all aspects of human activity and life, there's a pressing need to find a way to peek into its decision-making process. This is especially important in sectors such as healthcare that are critical to human wellbeing.

Take, for example, SkinVision, a mobile app that can decide whether a mole is malignant or not from a photograph of it. An incorrect diagnosis that misinterprets a malignant mole as benign could have dire consequences. But the other way around is not without defects either: it would cause unwarranted stress to its users and turn them into an army of pseudo-patients knocking down their already burned-out practitioner's door.

For such an AI algorithm to be successful, it is of foremost importance that it can replicate the doctor's actions. In other words, it has to be able to act as a doctor, leveraging the doctor's knowledge:

The algorithm checks for irregularities in color, texture, and shape of the lesion. It indicates which skin spots should be tracked over time and gives it a low, medium or high-risk indication within 30 seconds

But since we are still not able to put 100% trust in the algorithm itself, the doctor's intervention is still necessary in order to manually verify the findings. As such, the algorithm complements the doctor, it does not replace him:

Our dermatologists perform continuous quality control of the assessments, by evaluating the output of the risk assessment with their professional experience. All high-risk photos receive additional personal advice from our doctors on next steps to take within two working days stating whether they should rest assured, continue monitoring the lesion or seek immediate medical attention

But why is it necessary for the algorithm to be trusted blindly, for the diagnosis to be autonomous?

Across the globe, health systems are facing the problem of growing populations, increasing occurrence of skin cancer and a squeeze on resources. We see technology such as our own as becoming ever more integrated within the health system, to both ensure that those who need treatment are made aware of it and that those who have an unfounded concern do not take up valuable time and resources. This integration will not only save money but will be vital in bringing down the mortality rate due to earlier diagnosis and will help with the further expansion of the specialism

Skeptics of the 4th Industrial Revolution would say that the machines will take the doctors' jobs. But it's not about that; in reality it's about automating the diagnostic process so that it can be done quicker, as well as reach farther. Or, as Derek Lowe, a longtime drug discovery researcher, tells the New York Times:

It is not that machines are going to replace chemists. It’s that the chemists who use machines will replace those that don’t.

Then again, why not also let AI turn the tables on the physician burnout epidemic?

Nearly half of the world’s 10 million physicians had symptoms of burnout, including emotional exhaustion, interpersonal disengagement, and a low sense of personal accomplishment. It continues to negatively influence the quality of care and shorten the lifetime a physician is able to practice medicine

a dire situation which could be somewhat remedied by employing AI:

we expect artificial intelligence to significantly reduce the administrative burden and improve medical professionals' work experience in the future.

Then again, "can we trust a decision if we don’t understand the factors that informed it?"

For a doctor to reach a diagnostic conclusion, he has to take several factors into account. For example, a typical clinical report would include information like:

there is an irregular mass with microcalcification in the upper outer quadrant of the breast. Findings are consistent with malignancy

Irregular mass and microcalcification are the factors that directed the doctor to the conclusion that the findings are malignant. Correspondingly, for the neural network, can we observe which factors, and to what extent (weight), are being taken into account for the NN to reach its own conclusions?

The most common approach to explaining what a model is looking for when making a decision is to generate visualizations of the specific features that it detects. Each neuron in a neural network learns a specific filter, a feature detector that responds to a certain pattern in the images. Hallucinations are visualizations of the features themselves, of what each neuron in the model is looking for.
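To make the idea concrete, here is a minimal sketch of one such feature-visualization technique, activation maximization, which performs gradient ascent on an input image until it becomes the pattern that most excites a chosen filter. It is only an illustration: the choice of model (MobileNetV2), layer name and hyperparameters are assumptions, not something taken from the article or from TCAV.

import tensorflow as tf

# Illustrative assumptions: any pre-trained Keras image classifier and one of its layers
model = tf.keras.applications.MobileNetV2(weights="imagenet")
layer = model.get_layer("block_13_expand")            # hypothetical probe layer
feature_extractor = tf.keras.Model(model.inputs, layer.output)

def visualize_filter(filter_index, steps=100, lr=10.0):
    # Gradient ascent on a random image to maximize one filter's mean activation
    image = tf.Variable(tf.random.uniform((1, 224, 224, 3)))
    for _ in range(steps):
        with tf.GradientTape() as tape:
            activation = feature_extractor(image)
            loss = tf.reduce_mean(activation[..., filter_index])
        grads = tape.gradient(loss, image)
        grads = grads / (tf.norm(grads) + 1e-8)        # normalize for a stable step
        image.assign_add(lr * grads)                    # move towards higher activation
    return image.numpy()[0]                             # the "hallucinated" pattern

pattern = visualize_filter(filter_index=0)

The resulting image is exactly the kind of "hallucination" described above: suggestive, but it still leaves the human to guess which of its elements actually drove a particular decision.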

But there is a huge problem when it comes to offering explanations with these methods: you still need to interpret the pictures that come out, and it isn't really obvious what the key elements of the decisions are.

That is, despite all efforts, it's still too difficult to understand how a machine thinks, let alone to validate it.

This is where Google Brain's scientist Been Kim steps in, with her TCAV (Testing with Concept Activation Vectors) machine-to-human translator tool.

Kim and her colleagues at Google Brain recently developed a system that allows a user to ask a black box AI how much a specific, high-level concept has played into its reasoning. For example, if a machine-learning system has been trained to identify zebras in images, a person could use TCAV to determine how much weight the system gives to the concept of “stripes” when making a decision.

The ambition is to make a plugin that can be fitted onto any NN in order to understand its reasoning and decide whether we can safely use it or not.

TCAV does this by performing “sensitivity testing” on the network in question. This testing can reveal which factors, and to what extent, have influenced the network's decision. For example, how

sensitive a prediction of zebra is to the presence of stripes", or "having two saliency maps of two different cat pictures, with one picture’s cat ears having more brightness, can we assess how important the ears were in the prediction of “cats”?"

 


 

The output of this test is a TCAV score that shows how important the tested concept was to the prediction. This lets the human operator confirm that the machine has in fact acted just as he or she would have.
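To make the mechanics less abstract, here is a minimal sketch of how such a score can be computed, assuming you have already extracted a layer's activations for images of the concept (stripes), for random counterexamples, and the gradients of the zebra logit with respect to that layer for images of zebras. The array names and shapes are illustrative assumptions; Google's open-source TCAV library wraps these steps behind its own API.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Illustrative inputs (assumptions), all flattened to vectors of length D:
#   concept_acts: (N_concept, D) activations for "stripes" example images
#   random_acts:  (N_random, D)  activations for random images
#   zebra_grads:  (N_zebra, D)   d(zebra logit) / d(activations) for zebra images

def compute_cav(concept_acts, random_acts):
    # Train a linear classifier separating concept activations from random ones;
    # the Concept Activation Vector (CAV) is the normal to its decision boundary
    X = np.vstack([concept_acts, random_acts])
    y = np.array([1] * len(concept_acts) + [0] * len(random_acts))
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    cav = clf.coef_[0]
    return cav / np.linalg.norm(cav)

def tcav_score(class_grads, cav):
    # Fraction of class examples whose prediction would increase if the input
    # were nudged in the concept's direction (positive directional derivative)
    return float(np.mean(class_grads @ cav > 0))

# cav = compute_cav(concept_acts, random_acts)
# print("TCAV score of 'stripes' for 'zebra':", tcav_score(zebra_grads, cav))

In the actual method this is repeated against many different sets of random counterexamples and a statistical test is applied, so that only concepts whose scores are consistently significant are reported.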

Applying TCAV to image classification systems is not its only testbed; it can also be applied to other types of data, such as audio, video and sequences, to give insight into the predictions made by various classification models, from standard image classification networks to specialized medical applications.

As for its plug-in ability, this is a widely desired property: instead of having to modify a working NN to accommodate TCAV, you just attach it to the network.
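As a rough sketch of what that attachment can look like in practice, the snippet below wraps an already-trained Keras classifier without modifying it, exposing a chosen internal layer's activations and the gradient of a class score with respect to them, which are exactly the ingredients the score computation above needs. The model and layer names are the same illustrative assumptions as in the earlier snippets.

import tensorflow as tf

# Assumption for illustration: an unmodified, pre-trained Keras classifier
model = tf.keras.applications.MobileNetV2(weights="imagenet")

# "Attach" to the existing graph by exposing an internal layer's output
# alongside the final predictions: no retraining, no change to the model
layer_name = "block_13_expand"                        # hypothetical probe point
probe = tf.keras.Model(model.inputs,
                       [model.get_layer(layer_name).output, model.output])

def activations_and_gradients(images, class_index):
    # Returns the internal activations and the gradient of the chosen class
    # score with respect to them, for a batch of preprocessed images
    with tf.GradientTape() as tape:
        acts, preds = probe(images)
        class_score = preds[:, class_index]
    grads = tape.gradient(class_score, acts)
    return acts.numpy(), grads.numpy()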

At the end of the day, Been Kim argues that making AI interpretable is going to be the deciding factor that will force humanity to either adopt or abandon AI once and for all.

Systems like TCAV are early examples of the tools that are going to play such a role in the near future: the role of creating trusted AIs, tested to the same standard as humans, that are able to perform their work without human supervision.

More Information

Interpretability beyond feature attribution quantitative testing with concept activation vectors (tcav) - MLConf

Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV) paper

Explain yourself, machine. Producing simple text descriptions for AI interpretability.

Aspiring Dermatology App Under The Microscope: The SkinVision Review

Could A.I. Turn The Tables On The Physician Burnout Epidemic?

A New Approach to Understanding How Machines Think

Related Articles

Neural Networks In JavaScript With Brain.js

Deep Angel-The AI of Future Media Manipulation

How Will AI Transform Life By 2030? Initial Report

Achieving Autonomous AI Is Closer Than We Think

Artificial Intelligence For Better Or Worse?

To be informed about new articles on I Programmer, sign up for our weekly newsletter, subscribe to the RSS feed and follow us on Twitter, Facebook or Linkedin.



Last Updated ( Wednesday, 13 February 2019 )