NEW YORK, Oct. 24, 2023 /PRNewswire/ -- An artificial intelligence tool effectively detected distress in hospital workers' conversations with their therapists early in the pandemic, a new study shows, suggesting a potential new way to screen for depression and anxiety.
As the coronavirus pandemic forced many hospitals to operate beyond capacity, medical workers faced overwhelming numbers of work shifts, limited rest, and increased risk of COVID-19 infection. At the same time, quarantine policies and fear of infecting family reduced their access to social support, with the combination increasing risk of medical errors and burnout.
As a result, virtual psychotherapy, which offers treatment access without leaving home, boomed during this period. The researchers took advantage of the related flood of digitized session transcripts to identify common phrases used by patients and tie those terms to mental illness using a technique called natural language processing (NLP). In this method, a computer algorithm combs through data to pinpoint keywords that capture the meaning of a body of text. All identifying information about each patient was removed to protect their privacy.
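As a rough illustration of the kind of keyword extraction such an algorithm performs, the short Python sketch below scores words in a few invented, de-identified snippets by how strongly they characterize each one. The snippets and the use of the scikit-learn library are assumptions for illustration only, not the study's actual pipeline.

    # A minimal keyword-extraction sketch, assuming invented, de-identified
    # snippets and scikit-learn; this is not the study's pipeline.
    from sklearn.feature_extraction.text import TfidfVectorizer

    transcripts = [
        "I can't sleep after my shifts on the hospital floor",
        "I'm worried about bringing the virus home to my family",
        "My manager keeps changing the team schedule",
    ]

    # Weight each word by how strongly it characterizes a given transcript.
    vectorizer = TfidfVectorizer(stop_words="english")
    tfidf = vectorizer.fit_transform(transcripts)
    terms = vectorizer.get_feature_names_out()

    # Print the top-weighted terms for each snippet.
    for row in tfidf.toarray():
        top = sorted(zip(terms, row), key=lambda t: t[1], reverse=True)[:3]
        print([term for term, weight in top if weight > 0])

In the study, terms surfaced in this general way were related to patients' anxiety and depression diagnoses.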
Led by researchers at NYU Grossman School of Medicine, the analysis involved treatment transcripts from more than 800 physicians, nurses, and emergency medical staff. Also included were transcripts from 820 people who received psychotherapy during the first US wave of COVID-19 but did not work in healthcare.
Study results revealed that among healthcare workers, those who spoke to their therapists specifically about working in a hospital unit, lack of sleep, or mood issues were more likely to be diagnosed with anxiety and depression than healthcare workers who did not discuss these topics.
By contrast, such risks were not seen in workers from other fields who discussed the pandemic or their jobs (with terms such as "team," "manager," and "boss").
"Our findings show that those working on the hospital floor during the most intense moment of the pandemic faced unique challenges on top of their regular job-related stressors which put them at high risk for serious mental health concerns," said study lead author Matteo Malgaroli, PhD. Malgaroli is a research assistant professor in the Department of Psychiatry at NYU Langone Health.
Published online Oct. 24 in the Journal of Medical Internet Research AI, the report is the first application of NLP to identify markers of psychological distress in healthcare workers, according to Malgaroli.
For the study, the team collected data from men and women throughout the United States who sought teletherapy between March 2020 and July 2020. The researchers then used an NLP program to review session transcripts during the first three weeks of treatment.
Among the findings, healthcare workers shared four conversation themes around practicing medicine: virus-related fears, working on the hospital floor and in intensive care units, patients and masks, and healthcare roles. Meanwhile, therapy transcripts from those working in other fields contained only one topic about the pandemic and one related to their jobs.
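To illustrate how conversation themes like these can be surfaced from transcripts, the Python sketch below fits a small topic model to invented snippets. The example data, the two-topic setting, and the choice of scikit-learn are assumptions for illustration, not the model the authors used.

    # A minimal topic-modeling sketch on invented snippets, assuming
    # scikit-learn and two topics; this is not the authors' model.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation

    docs = [
        "afraid of catching the virus in the icu",
        "masks and patients all day on the hospital floor",
        "my boss and manager changed the team schedule again",
        "no sleep between shifts and my mood is low",
    ]

    # Convert the text to word counts, then fit a small topic model.
    vectorizer = CountVectorizer(stop_words="english")
    counts = vectorizer.fit_transform(docs)
    lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)

    # Show the highest-weighted words that define each discovered topic.
    terms = vectorizer.get_feature_names_out()
    for i, topic in enumerate(lda.components_):
        top_words = [terms[j] for j in topic.argsort()[::-1][:4]]
        print(f"Topic {i}: {top_words}")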
Although the overall heightened risk for anxiety and depression among those who discussed working in a hospital was small (3.6%), the study authors say they expect the model to capture additional signs of distress as more data is added.
"These results suggest that natural language processing may one day become an effective screening tool for detecting and tracking anxiety and depression symptoms," said study senior author psychiatrist Naomi Simon, MD, a professor in the Department of Psychiatry at NYU Langone.
Also a vice chair in the Department of Psychiatry at NYU Langone, Simon notes that another potential future direction for this approach could be to give healthcare workers a way to confidentially record themselves answering brief questions. These responses could then be analyzed using NLP algorithms to calculate risk for mental health conditions, such as depression or anxiety disorders. This feedback would then be provided confidentially to the healthcare worker using the tool, who might be prompted to seek help.
The researchers caution that the report only captured the mental state of patients early in their treatment. As a result, the team next plans to explore how the discussion topics change over time as therapy progresses.
Funding for the study was provided by National Institutes of Health grants KL2TR001446, R44MH124334, and R01MH125179. Further research support was provided by the American Foundation for Suicide Prevention, the U.S. Department of Defense, and the Patient-Centered Outcomes Research Institute. Talkspace, a mobile psychotherapy company, provided the data for the analysis but was not otherwise involved in the study.
Simon consults for biotechnology companies Axovant Sciences and Genomind, as well as for pharmaceutical companies Springworks Therapeutics, Praxis Therapeutics, and Aptinyx, and the information services company Wolters Kluwer. She also has spousal equity in G1 Therapeutics, which develops cancer treatments. The terms and conditions of these arrangements are being managed in accordance with the policies of NYU Langone.
In addition to Malgaroli and Simon, another NYU investigator involved in the study was Emma Jennings, BS. Other study investigators included Emily Tseng, MS, and Tanzeem Choudhury, PhD, at Cornell University in Ithaca, NY, and Thomas Hull, PhD, at Talkspace in New York City.
Media Inquiries:
Shira Polan
Phone: 212-404-4279
[email protected]
SOURCE NYU Grossman School of Medicine and NYU Langone Health