Voice technology in 2019 is where websites were in the early 2000s. Businesses and individuals are finding their vocal presence in the IoT space. Not all of its promises will be realized, but there is a lot that voice technology can do for healthcare.
I look at voice technology as a new way for us to interact with hardware and software. In the past it was the keyboard and then the mouse; now voice will become the new remote, a way for consumers to interact with their devices.
Even if you can’t imagine a sea of people talking into their tech devices while walking or sitting at a cafe, it’s already happening. Once we get over the awkwardness of it we will be yelling instructions into our phones, much like how we get lost in our screens when standing around in public.
Voice Technology in Medicine
In my clinical practice I exchange far more words with my patients than touches or even eye contact. My conversations with my patients are at the core of my diagnoses and treatment recommendations. All of it creates voice data.
It’s 2019 and redundancy in technology is frustrating; why do I need to type out a summary of my conversation with the patient into an EMR when software could do the transcribing and summarizing for me?
We have the technology for software and hardware to listen in on a patient visit and create medication orders, referrals, a SOAP note, follow-up instructions, and to display pertinent patient history on the screen to the clinician.
My voice can be a remote and the computer can infer what I want to have done. A nurse already does this when she follows the doctor around – she knows to start O2 on the patient, get the EKG, and administer a nebulizer before the orders are even called out.
Anatomy of Voice Technology
Speech recognition software listens to your words or conversation and digitizes them into content it can analyze.
When trained on a particular medical specialty, voice recognition can become far more accurate than software listening in on a random conversation.
Voice recognition software has the ability to learn based on its primary user and from repetition. It’s this learning which is still maturing. But with more data, with more hardware processing power, learning will grow exponentially.
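As a toy illustration of why specialty training helps (a hypothetical sketch, not any vendor's actual method): a recognizer that re-scores its candidate transcriptions against a medical lexicon will prefer a drug name over an acoustically similar garble. The candidate list, lexicon, and bonus weight below are all invented for illustration.

```python
# Toy sketch: re-rank speech recognizer hypotheses using a medical lexicon.
# Lexicon, candidates, and the 0.5 bonus weight are illustrative only.

MEDICAL_LEXICON = {"benazepril", "losartan", "nebulizer", "ekg", "soap"}

def rescore(candidates):
    """Each candidate is (transcript, acoustic_score). Boost transcripts
    whose words appear in the domain lexicon."""
    def score(cand):
        text, acoustic = cand
        words = text.lower().split()
        hits = sum(w in MEDICAL_LEXICON for w in words)
        return acoustic + 0.5 * hits  # small per-word lexicon bonus
    return max(candidates, key=score)

candidates = [
    ("start the bean as a pill", 0.62),  # acoustically plausible garble
    ("start the benazepril", 0.58),      # slightly lower acoustic score
]
best = rescore(candidates)  # lexicon bonus flips the ranking
```

A real system does this with weighted language models rather than a flat word list, but the principle is the same: domain vocabulary biases the decoder toward clinically plausible output.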
It’s one thing to build the software which can effectively understand a clinical conversation in an optimal setting. It’s another thing for it to perform well when the patient is in distress and there are 3 people talking at the same time, such as in a very busy emergency room.
It’s not all about the software. Hardware has come a long way and will need to continue to evolve in order for sound technology to improve.
We need good hardware on popular devices (Smart Speakers, Cell Phones, Laptops) in order to process audio properly. We need it to be stored efficiently (compression) and safely (HIPAA) and for it to be further analyzed to see what else can be extracted.
The more microphones that are listening and training the voice recognition software, the better the technology will become.
But will patients feel comfortable having their interactions with their health team recorded? Perhaps, if they believe that it will improve their health outcome.
If we can record the patient at home, at the pharmacy, in the hospital, in the clinic, at the SNF, etc., these are all opportunities to stitch together a realistic clinical picture of the patient.
Examples of Voice Technology in Healthcare
I’ll discuss a few examples of where voice technology in medicine is evolving. There are numerous companies which are focusing on this space. And if you’re a clinician who is interested in becoming a healthcare consultant in this space, the opportunities are definitely there.
1. Medication Reconciliation
If the patient reads off their medication bottles at home or if they say out loud what they are taking at what time, that’s all data that can be added to their pharmaceutical history.
Add to that the conversation they had with their pharmacist and there won’t be much of a need for medication reconciliation.
No more discharging a patient from the hospital with benazepril and losartan because nobody made it clear that the ACE-I will be stopped and replaced with the ARB.
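The benazepril-plus-losartan example above is exactly the kind of check software can do mechanically. Here is a minimal sketch of the idea, assuming a drug-to-class lookup table; the table below is a tiny illustrative subset, not a real formulary.

```python
# Minimal sketch of automated medication reconciliation: flag therapeutic
# duplication such as an ACE inhibitor plus an ARB. The drug-class table
# is a small invented subset for illustration.

DRUG_CLASS = {
    "benazepril": "ACE-I",
    "lisinopril": "ACE-I",
    "losartan": "ARB",
    "valsartan": "ARB",
}

# Class pairs that generally should not be co-prescribed.
DUPLICATE_PAIRS = {frozenset({"ACE-I", "ARB"})}

def flag_duplicates(med_list):
    """Return pairs of meds on the list that belong to conflicting classes."""
    flags = []
    for i, a in enumerate(med_list):
        for b in med_list[i + 1:]:
            classes = frozenset({DRUG_CLASS.get(a), DRUG_CLASS.get(b)})
            if classes in DUPLICATE_PAIRS:
                flags.append((a, b))
    return flags

flags = flag_duplicates(["benazepril", "losartan", "metformin"])
```

If the voice transcript of the discharge conversation feeds a list like this, the ACE-I/ARB overlap gets caught before the patient leaves with both bottles.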
2. End of Life Decisions
I’ve tried to complete End of Life forms with patients and they are tedious as shit. It’s not like I can have a normal conversation with the patient and then transcribe the details.
With voice technology in medicine we can talk to a patient over several visits about their end of life choices and the program can populate the form.
3. Creating a Referral
I have ordered the EKG and I am discussing the result with the patient. There is a prolonged PR interval for which I’ve ordered some blood work. I also tell the patient that I will be referring them to a cardiologist non-urgently.
Voice recognition can capture this order and pre-populate a referral template for me to sign. It can even capture the reason for my referral and add that into the referral.
4. Filtering Speech
I usually call the language line when I deal with a non-English speaking patient. But with voice technology we can have a conversation in different languages without an intermediary.
Or, a patient who had a stroke or has Parkinson’s can have their speech recognized and translated in order for me to better understand them. Not just me, I suppose, it can be a way for them to communicate with others.
5. Discharging a Patient
If a patient has a smart speaker at home (e.g., an Amazon Echo), they can interact with a summary of their hospital discharge. They know when they can take a shower, what medication to take when, and when to schedule their first nurse visit.
And it’s a two-way street. Whatever they speak into their HIPAA compliant smart speaker will get transcribed. Anything of significance will be sent back to my nurse who can make sure that the patient isn’t doing something wrong.
6. Scanning of Voice Data
Of course, we don’t yet have the data processing, software, bandwidth, or legal green light to do what we need to with voice data. But we can still record it as long as all parties agree.
This voice data could be used in the future for training models and extracting important information. Third party billers can extract billing codes or we can extract past medical history and surgical history for creating a very complete clinical picture of the patient.
7. Treatment Summary
If the patient went to a SNF or got admitted for sepsis, I may not be able to get my hands on the pertinent information due to lack of interoperability. But if the patient had their phone mic on the whole time then the entire visit could be summarized and sent to me, their PCP.
I know that they got IV antibiotics, that they had a CT scan done and what the result was. I know that they were told by the discharging doctor that they are to start a new medication, etc.
8. Patient in Distress
Only about 20% of 911 calls are real emergencies, and a similarly small share of emergency room visits are true emergencies. If a patient’s voice can be analyzed, then machine learning software can estimate the likelihood of a true emergency.
This will save many unnecessary visits and save the patient a lot of money since ambulance transfers are very expensive.
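One way such an estimate could work is a simple logistic model over acoustic features of the caller's voice. The features and weights below are entirely invented for illustration; a real system would learn them from labeled calls.

```python
import math

# Toy triage sketch: estimate the probability of a true emergency from a
# few hypothetical voice features. Feature names and weights are invented
# for illustration, not derived from any real triage model.

def emergency_probability(features):
    """features: dict with "speech_rate" (words/sec),
    "pitch_variability" (0-1), and "pause_fraction" (0-1)."""
    z = (
        -3.0                                    # baseline: most calls are not emergencies
        + 0.6 * features["speech_rate"]
        + 2.0 * features["pitch_variability"]
        + 1.5 * features["pause_fraction"]
    )
    return 1 / (1 + math.exp(-z))  # logistic squashing to a probability

# A rapid, strained caller scores well above the baseline.
p = emergency_probability(
    {"speech_rate": 4.0, "pitch_variability": 0.9, "pause_fraction": 0.3}
)
```

The output is a probability, not a decision; dispatch would still set its own threshold and keep a human in the loop.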
9. Medication Errors
Pharmacists sometimes give out the wrong medication and doctors sometimes write the wrong scripts. Medication errors are a huge problem both inpatient and outpatient.
If a smart device is listening in on the conversation and has access to the patient’s clinical history, it’s quite easy for it to flag a potentially wrong medication.
10. SOAP Notes
If I’m talking to a patient on the phone during a telemedicine visit, that information should be more than adequate to create a SOAP note.
As the technology matures I will need to review the SOAP note to make sure it’s accurate. But in the future this process should be automatic, with a third-party reviewer resolving any discrepancies.
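A crude first pass at note generation is routing transcript sentences into SOAP sections by keyword cues. The cue lists below are illustrative; production systems use trained clinical NLP models, not string matching.

```python
# Naive sketch: route transcript sentences into SOAP sections by keyword
# cues. Cue lists are invented for illustration; real systems use trained
# NLP models rather than substring matching.

CUES = {
    "Subjective": ["i feel", "complains of", "reports"],
    "Objective": ["blood pressure", "exam", "ekg"],
    "Assessment": ["likely", "consistent with", "diagnosis"],
    "Plan": ["start", "refer", "follow up"],
}

def route_sentence(sentence):
    s = sentence.lower()
    for section, cues in CUES.items():
        if any(c in s for c in cues):
            return section
    return "Subjective"  # default bucket for unmatched narrative

note = {}
for sent in [
    "Patient reports chest tightness.",
    "EKG shows a prolonged PR interval.",
    "Findings are consistent with first-degree AV block.",
    "Refer to cardiology, non-urgent.",
]:
    note.setdefault(route_sentence(sent), []).append(sent)
```

Even this toy version shows why a clinician review step matters: a sentence matching the wrong cue lands in the wrong section, which is exactly the kind of discrepancy a reviewer would resolve.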
11. Recognize Diseases
The speech pattern of a patient with depression or mania is unique enough that it can be recognized by software. And it’s not just mental health: we can even use voice patterns to predict the chance of CAD and other medical problems.
Machine learning, again, is very important in helping us advance this disease prediction modeling.
12. Patient Recognition
The fingerprint of our voice is incredibly distinctive. Checking a patient in by verifying their ID and phone number would no longer be needed. As soon as they open their mouth, even if they have a cold, the patient’s identity can be verified.
Their chief complaint can be populated and their check-in process will be completed. No need for a front-desk person. In fact, no need for anyone to be on the phone to help with booking and billing.
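Under the hood, voiceprint check-in usually means comparing a fresh voice embedding to the one on file. A minimal sketch of that comparison, using cosine similarity; the vectors and threshold are made up (real speaker embeddings have hundreds of dimensions and learned thresholds).

```python
import math

# Sketch of voiceprint check-in: compare a fresh voice embedding against
# the enrolled one using cosine similarity. Vectors and the threshold are
# invented for illustration; real systems use learned speaker embeddings.

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

THRESHOLD = 0.85  # illustrative acceptance threshold

enrolled = [0.12, 0.80, -0.33, 0.45]       # voiceprint stored at enrollment
todays_sample = [0.10, 0.78, -0.30, 0.47]  # same speaker, slight cold

verified = cosine(enrolled, todays_sample) >= THRESHOLD
```

Because the embedding captures the overall shape of the voice rather than exact audio, a mild cold shifts the vector only slightly and the match still clears the threshold.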
13. OR Errors
Some OR doctors are already accustomed to having scribes – so they will vocalize everything they are doing out loud. If a gyn-onc is vocalizing that they are about to remove the left ovary, but they are supposed to remove the right, the smart speaker in the OR can flag the surgeon or ask for verification.
14. Pediatric Care
RSV, an asthma attack, pertussis, or tracheitis can all appear similarly. But they affect the voice in unique ways. Especially if there is a baseline for that particular patient, voice recognition software could make a highly accurate disease prediction.
Vocal biomarkers can be used for all sorts of medical purposes. We can use them to predict diseases or evaluate the progression of certain diseases.
2 replies on “Voice Technology in Medicine”
Thanks for the valuable information!
What was the voice recognition software that you were using in practice? Are you aware of any good software that helps with transcribing telemedicine audiovisual encounters?
We were using Dragon for our EMR – is that what you’re referring to?
I don’t know of any specific ones with which I have enough experience. But for myself, I dictate everything into Google Documents. It’s very effective and I can talk 2-3x as fast as I can type and I type pretty fast. Have you tried that before?