AI Being Used to Help Diagnose Mental Health Issues; Privacy Concerns Real

As AI is being employed to help diagnose mental health issues for individuals, privacy concerns rise in prominence. (GETTY IMAGES)

By John P. Desmond, AI Trends Editor

An estimated one in five US adults, and some 16 percent of the global population, experience mental health issues, and the rates appear to be rising. Meanwhile, many parts of the US face a shortage of mental health care professionals. Experts estimate that some $200 billion is spent annually on mental health services.

Given the constraints, it’s natural for researchers to explore whether AI technology can help extend the reach of health care professionals, and maybe help control some costs. Researchers are testing ways that AI can help screen, diagnose, and treat mental illness, according to an account in Forbes.

Researchers at the World Well-Being Project (WWBP) used an algorithm to analyze social media data from consenting users. From 1,200 people who agreed to share their Facebook status updates and electronic medical records, they picked out linguistic cues that might predict depression. The scientists analyzed more than 500,000 Facebook status updates from people with and without a history of depression. The updates were collected from the years leading up to a diagnosis of depression, and from a similar period for depression-free participants. The researchers modeled the conversations across 200 topics to derive a range of depression-associated language markers reflecting emotional and cognitive cues, then examined how often people with depression used those markers compared with the control group.
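The core of such a comparison is straightforward: count how often marker words appear in each group's posts. The sketch below is a toy illustration of that idea, not the WWBP code; the marker words, sample posts, and function names are all invented here, and the actual study derived its markers from 200 data-driven topics rather than a hand-picked word list.

```python
from collections import Counter
import re

# Hypothetical depression-associated marker words (illustrative only).
MARKERS = {"alone", "tears", "hurt", "tired", "hate"}

def marker_rate(updates):
    """Fraction of words in a user's status updates that are marker words."""
    words = re.findall(r"[a-z']+", " ".join(updates).lower())
    if not words:
        return 0.0
    counts = Counter(words)
    return sum(counts[m] for m in MARKERS) / len(words)

def group_average(group):
    """Average marker rate across all users in a group."""
    return sum(marker_rate(u) for u in group) / len(group)

# Tiny invented samples standing in for each cohort's status updates.
depressed = [["feeling so alone and tired tonight"], ["i hate how much this hurt"]]
controls = [["great day at the beach with friends"], ["new recipe turned out well"]]

# Marker words appear more often in the depressed group than in the controls.
print(group_average(depressed) > group_average(controls))
```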

The researchers found that linguistic markers could predict depression up to three months before the person received a formal diagnosis.

Startup Activity Around Mental Health Apps

Other companies are using AI to help with the mental health crisis. Quartet's platform flags possible mental health conditions and can refer users to a provider or therapy program. Ginger is a chat application used by employers to provide direct counseling services to employees. The CompanionMX system includes an app that lets patients with depression, bipolar disorder, and other conditions keep an audio log of how they are feeling; the AI system analyzes the recordings and looks for changes in behavior. Bark is a parental-control phone-tracking app that monitors major messaging and social media platforms, looking for signs of cyberbullying, depression, suicidal thoughts, and sexting on a child's phone.

AI Being Employed to Help Diagnose Dementia

AI is also beginning to be used to help diagnose dementia, a condition whose progression, if detected early, can be slowed with appropriate medication.

Until recently, specialists relied on pen-and-paper tests to diagnose dementia. Now, AI tools that can analyze large amounts of health data and detect patterns in disease development are being deployed in many areas, freeing doctors and nurses to focus on patient care.

Engineers at the University of New South Wales in Sydney, Australia, are working on a smartphone app that incorporates speech-analysis technology to help diagnose dementia, according to an account in Medical Xpress. The app will use machine learning technology to look at paralinguistic features of a person’s speech—pitch, volume, intonation and prosody—as well as memory recall.
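Pitch, one of the paralinguistic features mentioned above, is commonly estimated by autocorrelation: find the time lag at which a speech signal best matches a shifted copy of itself. The sketch below demonstrates the technique on a synthetic tone; it is an illustrative assumption, not the UNSW app's actual method, and all names here are invented.

```python
import math

def estimate_pitch(samples, sample_rate):
    """Estimate fundamental frequency (Hz) via autocorrelation.

    Searches lags corresponding to 50-400 Hz, the typical range of
    human speech pitch, and returns the best-matching frequency.
    """
    n = len(samples)
    best_lag, best_corr = 0, 0.0
    for lag in range(sample_rate // 400, sample_rate // 50):
        corr = sum(samples[i] * samples[i + lag] for i in range(n - lag))
        if corr > best_corr:
            best_corr, best_lag = corr, lag
    return sample_rate / best_lag if best_lag else 0.0

# A 200 Hz test tone, 0.1 s at an 8 kHz sampling rate, stands in for speech.
sr = 8000
tone = [math.sin(2 * math.pi * 200 * t / sr) for t in range(sr // 10)]
print(round(estimate_pitch(tone, sr)))  # 200
```

Real systems combine this kind of feature with volume, intonation contours, and timing statistics before feeding them to a machine learning classifier.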

“The tool will essentially replace current subjective, time-consuming procedures that have limited diagnostic accuracy,” stated Dr. Beena Ahmed from UNSW’s School of Electrical Engineering and Telecommunications. Dr. Ahmed presented a paper on her work at the IEEE EMB Strategic Conference on Healthcare Innovations in the US.

Cognetivity Neurosciences, based in London, is working on an AI-powered test designed to detect cognitive decline, which could potentially identify signs of dementia 15 years before a formal diagnosis, according to an account in Page and Page.

The test can be conducted in five minutes using an iPad. A series of images are shown to the user, each appearing for a fraction of a second within a sequence of patterns. The user determines whether they see an animal in each image by tapping to the left or right of the screen. AI analyzes the test data, paying attention to detail, speed, and accuracy. It generates a score using a traffic light system that can guide healthcare professionals in next steps.
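A traffic-light score of the kind described above amounts to mapping accuracy and reaction speed onto thresholds. The sketch below shows the general shape of such a mapping; the thresholds and function names are purely hypothetical, since Cognetivity's actual scoring model is not public.

```python
def traffic_light(accuracy, median_rt_ms):
    """Map animal-detection accuracy and median reaction time (ms)
    to a traffic-light score. Thresholds are illustrative only."""
    if accuracy >= 0.9 and median_rt_ms <= 700:
        return "green"  # no sign of cognitive decline
    if accuracy >= 0.75 and median_rt_ms <= 1100:
        return "amber"  # borderline; clinician may order follow-up
    return "red"        # flag for specialist referral

print(traffic_light(0.95, 620))   # green
print(traffic_light(0.80, 900))   # amber
print(traffic_light(0.60, 1500))  # red
```

In practice the AI would weigh many more signals than two numbers, but the output still has to collapse into a simple signal a clinician can act on.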

Animals were chosen because the human brain is conditioned to react more quickly to images of animals. “An animal is an animal in every culture—it’s just your speed of analyzing the information. When memory comes into play, learning by the patient is a factor. By isolating memory from this, we’ve taken out the learning bias,” stated Dr. Sina Habibi, CEO of Cognetivity Neurosciences.

The company is now working on regulatory approvals needed in the UK. Dr. Tom Sawyer, COO of Cognetivity, stated, “The next generation AI models can then be trained to detect specific conditions, such as Alzheimer’s, which would then go through the regulatory approval process. What we believe is most important currently, is to make available a highly usable test that can make a significant difference to the current situation of shockingly low detection and diagnosis rates, and money wasted through incorrect referrals.”

Data Security and Privacy are Prime Concerns

Anyone entering medical data into a smartphone app needs to be concerned with data security and privacy. Many app developers sell users’ data, including their name, medical status, and sex, to third parties such as Facebook and Google, researchers warn. And most consumers are unaware that their data can be used against them, according to an account in Spectrum.

That the big companies will gain access to the data is a fait accompli. For example, Google parent company Alphabet recently announced that it will buy wearable fitness tracker company Fitbit, giving Google access to a large trove of individual health data.

Cases documenting abuses are mounting. The US government for example has investigated Facebook for allowing housing ads to filter out individuals based on several categories protected under the Fair Housing Act, including disability status.

“There can also be usages to exclude certain populations, including people living with autism, from benefits like insurance,” stated Nir Eyal, professor of bioethics at Rutgers University in New Jersey.

Developers of commercial apps may not always be fully transparent about how a user’s health data will be collected, stored, and used. A study published in March in The BMJ showed that 19 of the 24 most popular apps in the Google Play marketplace transmitted user data to at least one third-party recipient. The Medsmart Meds & Pill Reminder app, for example, sent user data to four different companies.

Another study, published in April in JAMA Network Open, found that 33 of 36 top-ranked depression and smoking cessation apps the investigators looked at sent user data to a third party. Of them, 29 shared the data with Google, Facebook or both, and 12 of those did not disclose that use to the consumer.


Moodpath, the most popular app for depression on Apple’s app store, shares user data with Facebook and Google. The app’s developer discloses this fact, but the disclosures are buried in the seventh section of the app’s privacy policy.

Even when apps disclose their policies, the risks involved are not always clear to consumers, stated John Torous, director of digital psychiatry at Beth Israel Deaconess Medical Center in Boston, and co-lead investigator on the April study. “It is clear that most privacy policies are nearly impossible to read and understand,” Torous stated.

Read the source articles in Forbes, Medical Xpress, Page and Page and Spectrum.