PITTSBURGH, Feb. 15, 2024 /PRNewswire/ -- Generative Artificial Intelligence (GenAI) allows users to create realistic images, videos, audio, and text quickly and cheaply, capabilities that can be useful in many contexts. But during elections, GenAI can be misused to manipulate and deceive voters at unprecedented scale. Researchers at Carnegie Mellon University have created a new guide that educates voters about how unethical parties, particularly foreign adversaries, may use the technology to manipulate and misinform them in ways they may not recognize.
"The use of GenAI to fabricate compelling information poses a real threat to our democracy," says Hoda Heidari, Leader of the Responsible AI Initiative at CMU's Block Center for Technology and Society, who co-authored the resource. "Our guide serves as a warning to the public and offers concrete steps to take action. At Carnegie Mellon University, we have led the creation of AI as a powerful technology, so we take our responsibility seriously to educate the public on both the capabilities and the risks of GenAI—especially if it can impact our fundamental rights."
The guide provides information on how voters can support the integrity of the democratic process, including pausing to examine claims they encounter on social media and investigating sources of information. "The democratic process relies on debate among voters who may hold differing viewpoints yet are informed by facts. GenAI makes it easy to derail this process by creating fictions and fictitious voters, and by making it appear that a real person said things they never said," notes Kathleen M. Carley, Director of Carnegie Mellon's Center for Informed Democracy and Social-cybersecurity (IDeaS), who co-authored the guide. The guide also addresses potential harms of GenAI, including suppressing votes, disseminating propaganda, and sowing doubt and uncertainty about the democratic process and its integrity.
"These harms can ultimately sway the results of elections, giving outsized influence to those who use GenAI to promote their agenda," according to Alex John London, Director of Carnegie Mellon's Center for Ethics and Policy and Chief Ethicist at the Block Center, as well as a faculty member in CMU's Department of Philosophy, who co-authored the guide.
In the absence of strong guardrails on the use of GenAI in political campaigns, the authors encourage voters to contact their legislators, ask them to support stronger AI regulation, and ask questions about candidates' use of GenAI in their campaigns.
Work on the guide was conducted at the Responsible AI Initiative, funded by the Block Center for Technology and Society and the School of Computer Science at Carnegie Mellon University, and in partnership with the Center for IDeaS. The guide can be found at https://www.cmu.edu/block-center/responsible-ai/genai-voterguide/genai-voter-guide.html.
About the Block Center for Technology and Society
Artificial intelligence, robotics, machine learning, and advanced manufacturing profoundly impact society, the economy, and our daily lives. While many of these impacts are beneficial, workers in some industries are being displaced by automation, algorithms drive decision-making in powerful and often unseen ways, and new platforms and networks have fundamentally changed how people engage with, and contribute to, their world. Established in 2019, the Block Center focuses on how emerging technologies will alter the future of work, how AI and analytics can be harnessed for social good, and how innovation in these spaces can be more inclusive and generate targeted, relevant solutions that reduce inequality and improve quality of life for all.
For more information, please visit https://www.cmu.edu/block-center/about-us/index.html
About the Center for Informed Democracy and Social-cybersecurity (IDeaS)
The growth of social media has changed how we communicate, play, work and interact. While these platforms have improved our lives, they have also provided vehicles for sharing and amplifying disinformation, hate speech, and extremism. In this social-cyber environment, information warfare and propaganda have flourished. As our lives move online, we're increasingly challenged to look beyond these messages to remain informed, thoughtful citizens who can engage civilly with each other without being subjected to undue influence. The Center for IDeaS at Carnegie Mellon University aims to enhance social-cybersecurity to preserve and support an informed democratic society, focusing on detecting, understanding, predicting, and mitigating the impact of online harms: disinformation, hate, and extremism. For more information, please visit https://www.cmu.edu/ideas-social-cybersecurity/.
SOURCE Carnegie Mellon University