I was reading a rather dense research article that used machine learning to predict anti-cancer drugs for particular patients. The premise was that a doctor can make that decision but can only process a limited amount of data, while a machine learning algorithm can use hundreds of thousands of data points to make a far more accurate prediction.
Oncology is where the majority of the money is being spent, but it’s not the only field where we’d want a machine’s help deciding on management. Imagine if we could accurately predict which patients would do best with a joint replacement and which would suffer complications.
So I titled this post “The Definition of Health” because artificial intelligence in healthcare needs some parameters. We have to give the “machine” guidelines it can use to make its decisions, and it will be either a human being or a large corporation entering their own biases into the system. So it matters greatly what that entity’s definition of health is.
Definition of Health in Clinical Outcomes
Take something specific, like the outcome of a cardiopulmonary resuscitation. Currently in medicine, if you revive the patient you’re considered successful; after all, the goal of a resuscitation is … resuscitation. Your colleagues will high-five you, family members will hug you, and so on.
But we all know that the chance of living a good life after coding is quite low. Usually you’ll code again or die of something else. It probably means you’ll end up in the ICU and go through a lot more torture. If you make it out of the ICU, you might end up with a long list of medications and need to see the doctor every week for the next few months. It will be a massively life-altering event.
If I program the AI so that obtaining a pulse after the patient codes is the end-point, the algorithm might find all sorts of little tricks to achieve that goal. Suddenly our resuscitation success rate goes from 45% to 75%. But what did we really achieve for the patient?
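To make that concrete, here is a minimal sketch (in Python, with entirely hypothetical field names) of how the end-point we pick becomes the label a model optimizes. The model is identical either way; the definition of success is the only thing that changes.

```python
# Minimal sketch: the chosen end-point becomes the training label.
# Field names are hypothetical, for illustration only.
from sklearn.linear_model import LogisticRegression

def train_resuscitation_model(records, definition_of_success):
    """Train a classifier whose notion of 'success' is whatever we pass in."""
    X = [[r["age"], r["downtime_min"], r["shockable_rhythm"]] for r in records]
    y = [definition_of_success(r) for r in records]  # the bias lives here
    return LogisticRegression().fit(X, y)

# End-point 1: a pulse, at any cost; the model will chase exactly this.
got_pulse = lambda r: r["rosc_achieved"]

# End-point 2: something closer to health; same model, different goal.
good_outcome = lambda r: r["alive_at_6_months"] and r["living_independently"]
```

Feed in the first definition and the reported “success” rate climbs; feed in the second and the model is forced to reckon with what happens after the code.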
A similar discussion can be had with antihypertensive therapy in an end-stage cancer patient. Or statin therapy in a bed-bound nursing home patient. All of these are standard practice in our current healthcare model. One reason is that the doctor isn’t the one who decides what health is – we have the ICD codes for that.
Diagnosing Coronary Artery Disease
I do healthcare consulting for a company where we predict which patients have CAD. That’s our goal. The more people we successfully identify, the more successful we are.
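A toy example of why that goal alone is dangerous: if the only metric is how many true cases we identify (recall), a model that simply flags every patient scores perfectly. The numbers below are purely illustrative.

```python
# Toy numbers, purely illustrative: when recall is the only yardstick,
# the degenerate model that flags everyone looks perfect on paper.
def recall(y_true, y_pred):
    caught = sum(1 for t, p in zip(y_true, y_pred) if t and p)
    return caught / sum(y_true)

y_true = [1, 0, 0, 1, 0, 0, 0, 1]     # who actually has CAD
flag_everyone = [1] * len(y_true)     # "identify" every patient
print(recall(y_true, flag_everyone))  # 1.0, yet most flags are false alarms
```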
What does this potential diagnosis mean for the patient? They will be brought in, they will have an invasive test performed to confirm the CAD, and they will be started on medications. Sure, we might recommend lifestyle modifications as well, but we know that such recommendations are generally ineffective in practice.
Will their statin and antihypertensive medication slow their progression from atherosclerosis to an MI? I used to think so, but now that I’m wiser, no. They will instead have more interventions, such as CABGs and angioplasties, which may keep them from having a deadly heart attack; their atherosclerosis, however, will continue to progress even with those interventions.
And to prevent one deadly heart attack, we’ll have to stent many patients who would otherwise have done well. It’s a bit of a shotgun approach, and that’s definitely not my definition of health. So if an algorithm says that I, for example, am at high risk for CAD and should be screened, do I want to undergo screening if the next recommendation is a coronary intervention?
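That “many patients for one heart attack” is the number needed to treat (NNT). The arithmetic, with event rates I’ve made up purely for illustration:

```python
# Number needed to treat: patients who must receive the intervention
# to prevent one bad outcome. These event rates are invented, not trial data.
rate_without_stent = 0.05  # assumed: 5% suffer a deadly MI untreated
rate_with_stent = 0.04     # assumed: 4% suffer one after stenting

absolute_risk_reduction = rate_without_stent - rate_with_stent  # 0.01
nnt = 1 / absolute_risk_reduction

print(f"Stent {nnt:.0f} patients to prevent one deadly heart attack")
```

In this made-up example, the other 99 get the procedure, and its risks, without the benefit.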
Have we as a profession agreed on a universal definition of health? If not universally, have we defined it for each patient individually? I’ve never once in my career asked a patient what their definition of health is. I’ve only recently answered that question for myself, while writing this post.
Could AI Figure It Out?
Could a machine learning algorithm figure out that definition on its own? Could it figure out what the best end result is for a patient? That’s not how AI works. It’s not a standalone, sentient system.
AI can create art and music based on the parameters and examples we provide it. But this is a circular problem we’re solving: we already know which beats harmonize with the human ear, so we are asking the machine to create more such beats.
But we could create a machine learning model to learn what defines a good life. We could follow the lives of individuals and determine whether they are living fulfilling lives. We could quiz people on their deathbeds. We could gain insight into how people perceive their lives during an illness, after a cure, or when a treatment fails.
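Following that speculation, here is a minimal sketch of what such a model could look like, with survey fields I’ve invented for illustration: the label comes from the patient’s own assessment, not from the chart.

```python
# Hypothetical sketch: train on patient-reported fulfillment instead of
# clinical events. All field names are invented for illustration.
from sklearn.ensemble import RandomForestRegressor

def train_good_life_model(surveys):
    """Predict self-reported fulfillment (0-10) from clinical/life features."""
    X = [[s["num_medications"], s["icu_days_last_year"],
          s["mobility_score"], s["lives_independently"]] for s in surveys]
    y = [s["self_reported_fulfillment"] for s in surveys]
    return RandomForestRegressor(n_estimators=100).fit(X, y)
```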
But would we do that? Would such research make sense if it uncovered that patients actually prefer better-quality lives with less intervention? Dunno. That seems to go against how recent research has been conducted, and there would be very little financial incentive for it.
The Human Bias
It’s not a terrible thing to inject our own biases into a piece of software. It’s not as if we can’t go back and change the assumptions later. But the fact that we haven’t even addressed these topics puts the cart before the horse.
There is no way to create a software algorithm without a bias built into it by whoever designed it. The information we then obtain from such a machine learning model will only be as applicable as that bias. The bias becomes the bottleneck.
This is the intersection of philosophy and medicine. What is health? What is the value of longevity? At what cost should an intervention be recommended? What is the role of the physician in medicine?