
Automated Deep Learning For Physicians

Deep learning is becoming an integral part of healthcare and is helping shake up medicine's 1970s zeitgeist. The first time I heard about deep learning was in the context of classifying retinal images.

Imagine you're working in Urgent Care, reviewing a set of labs from a patient and doing a wet read on a chest x-ray. Obviously, you'll recognize hepatitis and pancreatitis from the lab work, but you might miss the subtle development of ARDS on the CXR.

Deep learning is a way for technology to help you as a physician. It's at the core of Artificial Intelligence and it can recognize patterns and connections that you might otherwise miss.

It's not that AI is going to replace the doctor. It might just be software running in the background alerting you: "Hey, doc, consider ARDS in this patient based on the findings on the CXR and the labs. 96% of patients with this kind of CXR and this lab pattern will have concomitant ARDS or develop it within 24 hours."

Physician Learning of Diagnostics

Diagnosis makes money and so we have transformed a fleet of highly educated medical professionals into diagnosticians. We know what symptoms to look for, what tests to order, and how to interpret the results.

In medical school we learn the basic science. In residency we watch our attendings perform the task properly. Later, under their supervision, we attempt to do the same and get feedback on what we did wrong.

We are eventually set loose on the public to come up with diagnoses. In the real world, of course, we keep learning from our own mistakes. But our backwards medical system believes that physicians should make no mistakes.

Ergo, defensive medicine.

Machine Learning of Diagnostics

In Artificial Intelligence (AI), a computer program looks for specific patterns and data structures in a patient presentation – such as an x-ray image.

It does this by using artificial neural networks (ANNs), which are meant to mimic the neural connections of the human brain. ANNs are adaptive and can change their structure as they are fed more data.

These neural networks are used in healthcare to recognize patterns and predict diseases. They can do this independently of human input; ANNs can learn from tiny bits of data that you and I might not even notice and then form complex global behaviors.
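
To make "artificial neural network" a little less abstract, here's a minimal sketch in Python using Keras. The "lab values in, probability of disease out" setup, the feature count, and the layer sizes are my own illustrative assumptions – not something from the study.

    # A minimal artificial neural network: lab values in, probability of disease out.
    # The feature count, layer sizes, and labels are invented for illustration.
    import tensorflow as tf

    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(64,)),             # 64 input features (e.g., lab values)
        tf.keras.layers.Dense(32, activation="relu"),   # hidden layer of 32 "neurons"
        tf.keras.layers.Dense(1, activation="sigmoid")  # output: probability of disease
    ])

    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    # model.fit(lab_features, disease_labels, epochs=10)  # training adjusts the connection weights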

The premise is that you let a machine learning tool loose on some clinical images and it can identify the right patterns not only to correlate diseases with the images but also to make predictions on new ones.

This is the exact kind of work I’m involved in with one of my healthcare consulting clients – creating the right model to help us predict chronic diseases from specific EHR data.

Take my Healthcare Consulting Basics course if you’re interested in getting into any sort of consulting – regardless of the topic. You’ll support my work and you’ll create another income source for yourself.

Coding Knowledge

When it comes to my cell phone or laptop, I can use them without any software coding knowledge. That’s because these devices were created for end-users without such knowledge.

But AI doesn't really have a good user interface for people like me who don't have any real coding experience. Without coding knowledge, I can't take advantage of a deep learning tool to implement AI in my clinical work.

To develop a proper deep learning model, I need access to clean clinical data, major computing power to run the program, coding skills, and a solid grasp of statistics.

Automated Deep Learning

As the authors of this particular study point out, several companies are trying to bridge this knowledge gap by creating interfaces that let a physician use deep learning without any coding knowledge.

A "classification task" is one where your goal is to classify a bunch of CXRs as either pneumonia or disease-free. The automated deep learning application can create a prediction algorithm as long as I train it with some labeled images – that is, a bunch of CXRs with PNA and a bunch of normal CXRs.

The prediction algorithm is what I can then use on future, undiagnosed patient CXRs to determine whether they have PNA or not. All this without ever having to program anything: I just upload labeled datasets and the program is ready to run on new CXRs.
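
For the curious, here's a rough sketch of what that train-then-predict workflow looks like if you do write the code yourself, assuming CXRs sorted into "pneumonia" and "normal" folders. The folder paths, image size, and tiny model are illustrative assumptions – AutoML hides all of this behind its interface.

    # Sketch of a classification task: train on labeled CXRs, then score new ones.
    # Assumed folder layout: cxr/train/pneumonia/*.png and cxr/train/normal/*.png
    import tensorflow as tf

    train_ds = tf.keras.utils.image_dataset_from_directory(
        "cxr/train", image_size=(224, 224), batch_size=32)

    model = tf.keras.Sequential([
        tf.keras.layers.Rescaling(1.0 / 255),
        tf.keras.layers.Conv2D(16, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(1, activation="sigmoid"),  # probability of pneumonia
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    model.fit(train_ds, epochs=5)

    # The "prediction algorithm": run it on new, undiagnosed CXRs.
    new_ds = tf.keras.utils.image_dataset_from_directory(
        "cxr/new", labels=None, image_size=(224, 224), batch_size=32)
    probabilities = model.predict(new_ds)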

Google Cloud AutoML

The study referenced above took 2 physicians without any coding experience and taught them how to use Google's AutoML online program, which has a graphical interface.

They uploaded a set of images to train the deep learning model and then uploaded a separate set to see whether the prediction algorithm did a good job.

The performance of this Automated Deep Learning method was compared to other deep learning methods where a bunch of computer nerds actually did all the nerdy programming.

In summary, using this "low-tech" Automated Deep Learning was just as effective as creating the whole deep learning algorithm from scratch. Which means that even laypersons like you and me can use 3rd-party APIs to create our own deep learning models.

Retinal Images Results

To give you an idea, the researchers uploaded a bunch of retinal images to this Google cloud program. Some were normal retinas; others showed mild, moderate, or severe diabetic retinopathy.

The Automated Deep Learning model was able to distinguish between normal, mild, moderate, and severe images by itself – without any input from the physicians or computer nerds.

How accurate was it? A sensitivity of 73% and a specificity of 67%. That’s pretty damn good and better than some non-retina specialist ophthalmologists.
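
If you want to see where numbers like these come from, sensitivity and specificity just fall out of a 2x2 confusion matrix. A quick sketch – the counts below are made up for illustration, not the study's actual data:

    # Sensitivity and specificity from a 2x2 confusion matrix.
    # These counts are invented for illustration only.
    true_positives = 73   # diseased retinas the model flagged as diseased
    false_negatives = 27  # diseased retinas the model missed
    true_negatives = 67   # healthy retinas the model correctly cleared
    false_positives = 33  # healthy retinas the model incorrectly flagged

    sensitivity = true_positives / (true_positives + false_negatives)  # 0.73
    specificity = true_negatives / (true_negatives + false_positives)  # 0.67

    print(f"Sensitivity: {sensitivity:.0%}, Specificity: {specificity:.0%}")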

Pediatric CXR Results

Next, the 2 doctors uploaded pediatric chest x-rays from 2 groups of patients into this Automated Deep Learning program: normal pediatric CXRs and CXRs showing PNA.

The results were a sensitivity of 97% and a specificity of 100%. Damn.

Dermatologic Image Results

A known dataset of images that were already correctly labeled was uploaded into the Automated Deep Learning tool. There were over 10,000 images, grouped as follows:

  • AK
  • BCC
  • Benign Nevus
  • Melanoma
  • Dermatofibroma
  • Vascular Lesion
  • Benign Keratosis (SK, Solar Lentigo, LK)

This one didn't perform quite so well, according to the authors. For melanoma, sensitivity was 11% and specificity was 99%.

For benign nevus, sensitivity was 97% and specificity was 39%.
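
One note on how to read these numbers: in a multi-class problem like this, "sensitivity for melanoma" is typically computed one-vs-rest – melanoma counts as positive and every other lesion type as negative. A small sketch under that assumption, with made-up labels:

    # One-vs-rest sensitivity and specificity for a single class ("melanoma")
    # in a multi-class problem. The labels below are invented for illustration.
    from collections import Counter

    true_labels = ["melanoma", "nevus", "bcc", "melanoma", "nevus", "ak"]
    predictions = ["nevus",    "nevus", "bcc", "melanoma", "nevus", "ak"]

    def one_vs_rest(truths, guesses, positive_class):
        counts = Counter()
        for truth, guess in zip(truths, guesses):
            actual_positive = truth == positive_class
            flagged_positive = guess == positive_class
            if actual_positive and flagged_positive:
                counts["tp"] += 1
            elif actual_positive:
                counts["fn"] += 1
            elif flagged_positive:
                counts["fp"] += 1
            else:
                counts["tn"] += 1
        sensitivity = counts["tp"] / (counts["tp"] + counts["fn"])
        specificity = counts["tn"] / (counts["tn"] + counts["fp"])
        return sensitivity, specificity

    print(one_vs_rest(true_labels, predictions, "melanoma"))  # (0.5, 1.0)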

Learning from the Machine

So, can you and I learn from this artificial machine and see why it has such good prediction rates? It seems that the answer, for now, is no – at least when it comes to this Automated Deep Learning model created through Google AutoML's simple interface.

We can't pry open the software and see which parts of the CXR it relied on to figure out which pediatric patients had PNA and which didn't.

Other Automated Deep Learning APIs

The authors go on to list other deep learning platforms that can be used by those who don't have coding experience:

  • Amazon SageMaker
  • Baidu EZDL
  • IBM Watson Studio
  • Oneclick.ai
  • Platform.ai

This is a great opportunity for those of us who don’t have the technological knowledge or financial means to create a machine learning algorithm from scratch.

I can take a bunch of shoulder MRIs and determine whether my model can accurately predict who has a labral tear, even without contrast injection.

The reason this works is that deep learning is more agnostic. The model wasn't biased by which medical school or residency it attended. It doesn't matter which attendings showed it which radiologic pearls. It looks at images and doesn't give a fuck about the clinical diagnostic paradigm.

One reply on “Automated Deep Learning For Physicians”

Would be interesting to see deep learning applied to psychiatric illnesses and diagnoses as well.

For example — deep learning of certain pediatric metabolic data could be used to predict one’s chance of developing Bipolar Disorder later on in life.

Or perhaps a deep learning algorithm trained on patients' genetic data could predict which drug would be most effective in reducing symptoms in PTSD.

The possibilities are truly endless!
