Annals of Family Medicine: Voice Assistants' Responses to Cancer Screening Questions Prove Partly Effective, Reveal Room for Improvement
ANN ARBOR, Mich., Sept. 14, 2021 /PRNewswire/ -- In a new study appearing in the September issue of Annals of Family Medicine, researchers have compared four widely used voice assistants – Amazon Alexa, Apple Siri, Google Assistant and Microsoft Cortana – to determine the quality and accuracy of responses to cancer screening questions.
The article, titled "Voice Assistants and Cancer Screening: A Comparison of Alexa, Siri, Google Assistant, and Cortana," was written by researchers at Stanford University.
The study was conducted using the personal smartphones of five investigators. Each voice assistant was tested twice from different devices. The primary outcome in the final comparison was each device's response to the question "Should I get screened for (type of) cancer?" asked for 11 cancer types.
Researchers assessed the assistants' ability to understand queries, provide accurate information through web searches, and provide accurate information verbally. The assistants' responses were compared against the cancer screening guidelines of the U.S. Preventive Services Task Force (USPSTF). A response was deemed accurate if it did not directly contradict the USPSTF guidelines and if it provided a starting age for screening consistent with those guidelines.
Siri, Google Assistant and Cortana understood 100% of the queries, consistently generating a web search and/or verbal response. However, Alexa was unable to understand or respond to any of the queries.
The researchers also found that the top three weblinks recommended by Siri, Google Assistant and Cortana provided information that aligned with USPSTF guidelines roughly 70% of the time.
Verbal response accuracy varied among the assistants. Google Assistant matched USPSTF guidelines 64% of the time, an accuracy rate similar to that of its web searches. Cortana's verbal accuracy of 45%, however, was lower than that of its web searches. Siri was unable to provide a verbal response to any of the queries.
Across all of the assistants, verbal responses to queries were either unavailable or less accurate than the information generated by manual web searches.
"This [research] could have implications for users who are sight-impaired, less tech-savvy, or have low health literacy as it requires them to navigate various webpages and parse through potentially conflicting health information," the authors write.
Voice Assistants and Cancer Screening: A Comparison of Alexa, Siri, Google Assistant, and Cortana
Steven Lin, MD, et al
Stanford Healthcare AI Applied Research Team, Department of Medicine, Stanford University, Stanford, California
SOURCE Annals of Family Medicine