By AI Trends Staff
Consumers are skeptical about trusting advice on healthcare coming from AI systems, according to a survey of 2,000 adults by Harris Poll for Invoca, a provider of healthcare consumer engagement services.
As reported in DigitalCommerce360, the survey found consumers are fine with recommendations from AI applications for travel and restaurants, but not so much for healthcare.
The poll found that nearly half (49%) of consumers would trust AI-generated advice for retail, and 38% would trust AI-generated advice for hospitality, such as checking or comparing flight or hotel options or restaurant recommendations. But just 20% would trust AI-based advice for healthcare.
“Many consumers strongly prefer human interaction to complete certain types of transactions,” says Julia Stead, VP of marketing at Invoca. “While AI has been a real game changer for the ‘back office’ and for running businesses more efficiently, this study suggests that it still lags on the front end of business—the consumer interactions.”
The survey states, “Applying AI to the retail experience makes sense because it’s already a fairly frictionless purchase process, and the price to pay if something were to go wrong is minimal. However, there’s clearly some consumer hesitation in industry verticals like healthcare when the stakes are likely higher.”
Age also plays a role in which consumer groups are amenable to AI suggestions, advice and recommendations for healthcare—and which ones aren’t. “Younger consumers are more likely to be trusting of AI advice, at 80%, compared with 62% for consumers 35 and older and 22% for consumers age 65 and above,” the survey says.
Regardless of age, consumers as patients also still like personal contact over any type of technology—except the phone. The survey found that 32% of consumers prefer to complete a transaction over the phone, compared to 30% who prefer in-person, 25% online, 6% via a brand’s mobile app and 5% via AI such as a chatbot.
The survey-writers suggested, “It’s paramount that healthcare marketers give patients the opportunity for human connection, and use that interaction to further personalize the patient experience.”
Doctor Sees Potential for AI to Make Healthcare More Human Again
The role of AI in healthcare is explored in a new book, “Deep Medicine: How AI Can Make Healthcare Human Again,” by Dr. Eric Topol, an American cardiologist and geneticist, and the founder of the Scripps Research Translational Institute in California. Here are excerpts from a recent interview published in The Guardian:
What’s the most promising medical application for artificial intelligence?
In the short term, interpreting medical images with far superior accuracy and speed – not that it would supplant a doctor, but rather that it would be a first pass, an initial screen with oversight by a doctor. So whether it is a medical scan or a pathology slide or a skin lesion or a colon polyp – that is the short-term story.
You talk about a future where people are constantly having parameters monitored – how promising is that?
You’re ahead of the curve there in the UK. If you think you might have a urinary tract infection, you can go to the pharmacy, get an AI kit that accurately diagnoses your UTI and get an antibiotic – and you never have to see a doctor. You can get an Apple Watch that will detect your heart rate, and when something is off the track it will send you an alert to take your cardiogram.
Is there a danger that this will mean more people become part of the “worried well”?
It is even worse now because people do a Google search, then think they have a disease and are going to die. At least this is your data so it has a better chance of being meaningful.
It is not for everyone. But even if half the people are into this, it is a major decompression on what doctors are doing. It’s not for life-threatening matters, such as a diagnosis of cancer or a new diagnosis of heart disease. It’s for the more common problems – and for most of these, if people want, there is going to be AI diagnosis without a doctor.
If you had an AI GP – it could listen and respond to patients’ descriptions of their symptoms but would it be able to physically examine them?
I don’t think that you could simulate a real examination. But you could get select parts done – for example, there have been recent AI studies of children with a cough, and just by the AI interpretation of that sound, you could accurately diagnose the type of lung problem that it is.
Smartphones can be used as imaging devices with ultrasound, so someday there could be an inexpensive ultrasound probe. A person could image a part of their body, send that image to be AI-interpreted, and then discuss it with a doctor.
One of the big ones is eyegrams, of the retina. You will be able to take a picture of your retina, and find out if your blood pressure is well controlled, if your diabetes is well controlled, if you have the beginnings of diabetic retinopathy or macular degeneration – that is an exciting area for patients who are at risk.
What are the biggest technical and practical obstacles to using AI in healthcare?
Well, there are plenty, a long list – privacy, security, the biases of the algorithms, inequities – and making them worse because AI in healthcare is catering only to those who can afford it.
You talk about how AI might be able to spot people who have, or are at risk of developing, mental health problems from analysis of social media messages. How would this work and how do you prevent people’s mental health being assessed without their permission?
I wasn’t suggesting social media be the only window into a person’s state of mind. Today mental health can be objectively defined, whereas in the past it was highly subjective. We are talking about speech pattern, tone, breathing pattern – when people sigh a lot, it denotes depression – physical activity, how much people move around, how much they communicate.
And then it goes on to facial recognition, social media posts, and other vital signs such as heart rate and heart rhythm, so the collection of all these objective metrics can be used to track a person’s mood state – and in people who are depressed, it can help show what is working to get them out of that state, and help in predicting the risk of suicide.
Objective methods are doing better than psychologists or psychiatrists in predicting who is at risk, so I think there is a lot of promise for mental health and AI.
If AI gets a diagnosis or treatment badly wrong, who gets sued? The author of the software or the doctor or hospital that provides it?
There aren’t any precedents yet. When you sign up with an app you are waiving all rights to legal recourse. People never read the terms and conditions of course. So the company could still be liable because there isn’t any real consent. For the doctors involved, it depends on where that interaction is. What we do know is that there is a horrible problem with medical errors today. So if we can clean that up and make them far fewer, that’s moving in the right direction.
You were commissioned by Jeremy Hunt in 2018 to carry out a review of how the NHS workforce will need to change “to deliver a digital future”. What was the biggest change you recommended?
I think the biggest change was to recommend we accelerate the incorporation of AI to give the gift of time – to get back the patient-doctor relationship that we all were a part of 30, 40-plus years ago. There is a new, unprecedented opportunity to seize this and restore the care in healthcare that has been largely lost.