The Use of AI in Mental Health Care
AI and machine learning play an ever-growing role in our world. Whether it is self-driving cars, virtual assistants, or algorithms that decide whether a rental applicant should be approved, AI and machine learning are integral to how the world works today.
Scientists are now working to use artificial intelligence to help diagnose and treat mental illness. Mental illness affects a large proportion of the population and poses a real threat to societal well-being. Accurate diagnosis and treatment are extremely important, yet the methods we currently rely on can be inconsistent and subjective. Today, mental health diagnoses are made by psychiatric professionals who examine a patient’s symptoms and then refer to the Diagnostic and Statistical Manual of Mental Disorders, which sets out the criteria for different disorders based on the presence of associated symptoms.
“Psychiatry is seeking to measure the mind, which is not quite the same thing as the brain. So it relies on having people quantify how they feel. While clinical diagnostic surveys are actually quite accurate, they are prone to some inaccuracies. What one person considers a 3 on a 1 to 10 sadness scale, for example, could be another person’s seven and yet another’s ten — and none of them are wrong. The language for accurately measuring pain just isn’t consistent…Mental health disorders are also amorphous things, with overlapping symptoms among different diagnoses.” (Zarley)
Scientists believe that AI might provide a better, more data-driven model for making accurate diagnoses, discovering underlying physical causes, and, most importantly, determining the most effective treatment for a specific mental disorder. Researchers at Virginia Tech’s Fralin Biomedical Research Institute compile patient information, including survey responses, neuroimaging, behavioral data, speech data from interviews, and psychological assessments, and feed all of this data into a machine learning algorithm. They also plan to add data from saliva and blood samples to give the algorithm a more complete picture.
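To make the idea of such a pipeline concrete, the sketch below shows, in miniature, what it means to combine several data modalities into one feature vector and compare it against patterns learned from earlier patients. Everything here is a hedged illustration: the field names, the numbers, and the simple nearest-centroid rule are assumptions for the sake of example, not the Fralin team's actual method, which is far more sophisticated.

```python
import math

def combine_modalities(patient):
    """Flatten survey, imaging, and speech features into one vector."""
    return patient["survey"] + patient["imaging"] + patient["speech"]

def euclidean(a, b):
    """Straight-line distance between two equal-length vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def nearest_centroid(vector, centroids):
    """Return the label whose average pattern is closest to the vector."""
    return min(centroids, key=lambda label: euclidean(vector, centroids[label]))

# Hypothetical group-average patterns learned from earlier patients.
centroids = {
    "control":    [0.2, 0.1, 0.3, 0.2, 0.1],
    "depression": [0.8, 0.7, 0.9, 0.6, 0.8],
}

# One hypothetical patient's data, combined from three modalities.
patient = {
    "survey":  [0.9, 0.8],  # e.g. self-reported symptom scores
    "imaging": [0.7, 0.6],  # e.g. summarized fMRI activations
    "speech":  [0.9],       # e.g. a feature extracted from interviews
}

print(nearest_centroid(combine_modalities(patient), centroids))
# With these made-up numbers, the combined vector lies closer to
# the "depression" centroid than to the "control" centroid.
```

The point of the toy example is only that once disparate data sources are expressed as numbers in a single vector, the machine can measure similarity to known patterns; real systems learn those patterns from thousands of features rather than five.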
“This science was not possible until very recently. The algorithms [used] are decades old: they combine with fMRI imaging, which was invented in 1990. But the computing power required to make them useful is finally available now, as is a newer willingness to combine scientific disciplines in novel ways for novel problems.” (Zarley)
Through fMRI (functional MRI) scans, the algorithm has access to neurological information, which teaches the machine which parts of the brain light up in response to certain stimuli. This would allow scientists to detect patterns, explore how people respond to social situations, and analyze how “healthy” brains compare with the brains of people with mental health disorders. The machine would also be able to determine whether a certain treatment is effective, “perhaps providing a template for preventative mental health treatment through exercises one can do to rewire the brain” (Zarley).
Pearl Chiu, a clinically trained psychologist working at Fralin, hopes that this method will allow for the identification of patterns that clinicians do not pick up on or cannot access through neuroimaging. The goal is that by “making mental health disorders more physical…[we can] help destigmatize them as well” (Zarley).
“If it can be diagnosed as objectively and corporeally as heart disease, would depression or bipolar disorder or schizophrenia carry the same shame?” (Zarley)
While AI and machine learning are incredibly useful and would be a huge benefit in the realm of mental health care, scientists must take into account human biases that might affect their models. As B. David Zarley writes in his article for The Verge, “A diagnosis found by a machine learning pattern would mean little if the bias is in the programming.” It is easy to think of algorithms and machine learning as unbiased and impartial, but we have to keep in mind that the data is collected and analyzed by human beings, and “even the tools used to collect that data have shortfalls that can bias the data as well” (Zarley).