
Review of ChatGPT MD by Dr. Robert Pearl

Any physician who puts in the effort to write about new technology in healthcare deserves praise. But you have to do your research if you are going to charge money for your book. Here is my review of ChatGPT MD by Dr. Robert Pearl, who, by the way, I enjoy following on LinkedIn because he’s a smart fellow. As for the book … check it out.

I listened to the audiobook of ChatGPT MD and the premise is that Generative AI will completely change healthcare. Specifically, it will allow the physician to spend more time with each patient.

ChatGPT, MD: How AI-Empowered Patients & Doctors Can Take Back Control of American Medicine by Robert Pearl
My rating: 2 of 5 stars


Predicting the Future

I say a lot that the future of medicine is virtual care. That’s me predicting the future, and the reality is that nobody cares why you’re trying to predict it. But if you are going to predict the future, you must make a solid case.

So, Dr. Pearl, let’s talk about ChatGPT taking over medicine, as predicted in this book. In all fairness, the author believes that in order for that to happen, the following stakeholders also must change:

  • physicians
  • patients
  • politicians
  • health insurance
  • legislature
  • regulators

When was the last time we had a technology for which everyone came together in unison to change? Not antibiotics, not EHRs, not robotics, not vaccine technology.

ChatGPT & Patient Data

Why would ChatGPT somehow be the future of medicine? Generative AI is a technology much like search engines were, and nobody could predict that Netscape and Yahoo would be replaced by Google.

If it is going to be ChatGPT, where will it get all this supposed data on which GPT will base its individualized patient recommendations? Somehow the patient will be allowed to enter that data and the government won’t interfere?

Healthcare has regulated every single thing when it comes to patient data. In fact, your data hasn’t been yours since PHI and HIPAA. And if GPT gets your data, what makes you think other patients’ data is entered correctly, accurately, or honestly?

Many talk about our Digital Twins. I get it, and it’s as sexy as when Dr. Topol predicted that genetics would lead the next wave of clinical medicine. But to have Digital Twins you need the liberation of patient data, which can’t happen in a healthcare system where the data is held hostage so that the mega-healthcare companies can profit even more.

In fact, Dr. Pearl should know this concept; he was the CEO of Kaiser Permanente. I don’t think anyone would call KP the poster child of individualized care with full transparency. And yes, I’m butthurt by KP because I had a falling-out with them – so take my butthurtness with a grain of salt.

Performing Surgery

The author claims that Gen AI has the capability to learn a surgery as well as, or better than, a surgeon. That’s great, but that’s not really what the problem is in medicine. So, I’m curious, did the author think through this prediction?

Let’s say a few da Vinci robots can do 1,000,000 surgeries a day. But surgeries aren’t tough for surgeons. What’s tough is the practice of medicine. What sucks is healthcare: the hospital system, prior auth, follow-up, complications, supply shortages, etc.

We haven’t yet trusted self-driving cars but somehow we’re gonna let RoboVinci fiddle around with our testicles while we’re under? Not these walnuts, baby.

And if it can perform all the surgeries, then why would you as a physician author say that GPT will liberate the doctor to have more time with the patient? To do what, consent them for surgery?

Biases Are Biased

The thing about Generative AI is that it’s not coming up with something unique; it’s trained on existing data. The type of data determines its performance.

If our data is racist (check), sexist (check), or ageist (check), then that’s the kind of decision matrix that will be created. “But, no, what we’ll do is train the engine on the biases that exist in medicine and then it won’t be biased.”

So who will be training this unbiasity into the system? I definitely don’t trust Michael Smith III to do it! Are you kidding? Oh, it’ll be a bunch of engineers? So when the engine spits out bullshit, who decides to retrain the model? Trump? Mother Teresa? George Foreman?

Managing Chronic Diseases

Dr. Pearl, a plastic surgeon and the ex-CEO of KP, believes that with GPT we can collect patient data and tell patients exactly when to exercise and what to eat to achieve their goals.

Listen, son, homie, amigo, compadre, that’s not how primary care works. Perhaps you can tell a patient after a booby job to wear a compression bra but that’s not the same as telling a diabetic patient to cut out their tortillas or zereshk polo.

Chronic diseases are tough because it takes more than technology to get a patient to buy in. You have to take a few years to gain their trust and then you’ll make tiny progress.

Having some doohickey buzz you that it’s time to work out? Really? My patient is giving her 2-year-old Coca-Cola through a sippy cup. I believe the author is not quite in tune with our current situation.

Democratize Health Information

Now, this I can get behind! Let’s give all the clinical knowledge we have to the patient and let them be on equal footing with us, which should help narrow the knowledge gap.

But we’ve already done that. We’ve been in the information age since the 90s. My medical textbooks are available for any person to peruse. I have no monopoly on clinical knowledge; in fact, nobody does.

It’s just that no matter how much I know, some fucktard still thinks it’s a good idea to be a carnivore for life. I get it; you’re cutting out all the bad stuff you’d normally eat, and it’s worked great for you. The problem is that it may not work quite that well for the hyperabsorber of cholesterol, who will then have a heart attack in 5 years.

“Health information” is a valuable phrase only when there is a source of truth. There isn’t.

Hospitals Aren’t Getting Paid Enough

I don’t think many patients, doctors, or even hospital CEOs would openly claim that hospitals aren’t getting paid enough. I don’t quite understand this rationale.

All think tanks, economists, and health analysts are in agreement that hospitals and procedures are reimbursed at way too high a rate by Medicare and Medicaid. And whatever CMS does is what commercial insurers do.

It’s not that specialists and hospitals shouldn’t get paid for what they do for patients; it’s just that they shouldn’t get paid for what they do to patients.

I would just delete this chapter of the book, or use GPT to replace the word “hospital” with “primary care doctor.”

User: ChatGPT, what’s the best way to improve patient outcome in the US? Should we spend more money on hospital medicine and procedures or primary care?

ChatGPT: Come on, man, you serious?!

Amazon is Not a Good Example

Nobody likes Amazon except the few million hyperconsumerist Americans. It takes business away from local shops, it kills competition, it forces cheap labor overseas, it creates a shit ton of waste, it encourages planned obsolescence, and it kills any social interaction one might have with a shop owner.

Holding Amazon up as a great example and saying healthcare could be just like it is not a good look. I, Dr. Mo, don’t want to get my healthcare from any healthcare organization that considers Amazon a success.

Amazon’s customer service is superb. But, again, at a huge cost to many. Their shipping algorithms are unbelievable. But I really doubt it’s sustainable for the company or the world.

Technology in a Service Industry

My plumber can never be a better plumber because they have better whatchamacallits. My mechanic might do better diagnostics if they have a borescope, but as a service provider, their main value is in great customer service and understanding my needs.

My problem with my electrician is that he’s flaky as fuck. He is brilliant when he shows up, but he won’t show up. My handyman is an awesome guy. As for his tools, nothing I don’t have myself in the closet.

Being a physician is offering a service to another person who needs it. They have options, and they rely mostly on our expertise and professionalism. If we wanted to, we could take them for a ride. Most of us want to be ethical, but it’s really fucking hard to be ethical when you see the big medical groups getting away with such rich shit.

No robotic surgery or super-duper chemo will make our service work that much better; much less AI. Much. Less. A. I.

Regulators Will Regulate

If all of this wasn’t enough, it’s fair to say that the healthcare industry is among the most regulated industries of all. We would rather risk the patient’s life than have the patient and doctor have access to pertinent clinical data – that’s HIPAA in a nutshell.

The impression I got from the author is that LLMs will liberate medicine, somehow fully uncontrolled, like a butt-naked toddler freely running around with nobody worrying about accidents.

By the time regulators are done with LLM, or whatever future iterations will exist, it will be our EHR, at best.

The Thing About Pricing…

As a CEO would, he believes that because ChatGPT is priced at the consumer level, it can therefore rely on its own revenue streams and doesn’t have to be freemium like Google. Does the author believe that this healthcare-liberating, world-changing service will be donated equitably to all humans?

Ignoring government regulations, monthly membership or not, the cost will permanently exclude many individuals, and from US history, we know that those who need things the most will be priced out the most.

I could see a way for the government to guarantee a basic level, but look at the different tiers of GPT now. The performance and options are night and day.

Liberating Physician Time

So the argument Dr. Pearl makes is that with the help of Generative AI the physician will have more free time to spend with the patient. First, why would the physician need to spend more time with the patient if Gen AI is going to do most of what the doctor can do?

Second, when, ever, has any freed-up time not been used to increase output and put more strain on the patient and doctor alike?

In all fairness, Dr. Pearl did say that he wrote ChatGPT MD with ChatGPT as his coauthor. I’m not kidding. I think it wrote a whole chapter, according to the author – the human one, the one who won’t be replaced by ChatGPT even though ChatGPT will be able to do everything Dr. Pearl can do.

I See You

I see you LLM and Gen AI and ChatGPT … and I think for the most part it will be a great thing. I love how patients will have access to their own version of health information and can pick the brain of a tool that can digest large amounts of publicly available information.

Speaking of ICU, apparently there is AI software that can already predict which patients will crash in the ICU and alert the physician to take a closer look and intervene early. This, I gotta say, is brilliant.

The thing is, we already had that, but it got taken away from us because profits > patients. We had a lot of nurses and a lot of doctors running the ICU. Now, we have a few interns, some NPs, a couple of burnt-out RNs, and 1 Middle Eastern MD running a behemoth ICU.

Yes, if you fuck shit up then you’ll need to invent new tools to unfuck it. In that regard, this ICU thingamajig is brilliant. But, it’s just not a brilliant use of resources, says I.
