Don’t hand AI the scalpel just yet

Conversational AI is all the rage these days. It's everywhere, including in healthcare. To an extent, anyway. The hype is certainly trying to break through to the sector, with talk of robotic doctors, chatbot therapists, and AI making life-altering decisions.

In its present state, the prevailing view is that it's like combining Dr. Strange with Siri, except there's nothing "strange" here: it's just wildly irrational and potentially downright dangerous. Conventional human wisdom says you really wouldn't want to bet your life on it. AI seems to agree. ChatGPT sure does. Just look at its own (fully justified, BTW) disclaimer.

Anyway, let's examine the phenomenon and look at some of the areas where conversational AI currently looks more like a misfit than an actual fit:

First, let's talk about data privacy. Here's some news for you (well, maybe it isn't news, but you get the drift): conversational AI systems have the self-control of a golden retriever at a buffet. They store, process, and often share data like there's no tomorrow. In healthcare, where patient confidentiality is absolutely key, that's like inviting a bull into a china shop and being surprised when the fine china ends up in ruins.

Second, I present to you the Achilles' heel of AI: accuracy. Let's say you're in a room and an AI doc tells you that you need your appendix removed ASAP. You'd want a second opinion from a human, right? The sheer unpredictability and ever-changing nature of human physiology make conversational AI little more than an automated quack when it comes to health decisions. At least in its present state.

Then there's the empathy deficit. Remember this: Machines. Don't. Have. Feelings. Even if you're among those who find machines more likeable than actual doctors, picture this: an AI therapist is helping someone through depression. The patient is on the edge, the conversation is reaching a critical juncture, and the AI's response is "I understand you're sad, would you like me to play a cheerful song?"

WTF? Human connection, empathy, and understanding are irreplaceable, and anyone telling you otherwise is probably trying to sell you a robot.

Lastly, let's talk about a little thing called bias. The algorithmic godfathers in Silicon Valley would have you believe that AI is impartial. Nothing could be further from the truth. These systems are trained on biased datasets. The risk? Misdiagnosis, inappropriate treatments, and a long list of other ailments added to an already broken system.

So, are we ready to hand over our stethoscopes to chatbots? Nope. Not yet. Not by a long shot.

I'm not suggesting technology has no role in healthcare, just that keeping a human in the loop remains absolutely key. Big Tech and its army of shiny conversational AI toys need to be on a leash, not leading the charge.

(Photo by National Cancer Institute on Unsplash)
