University of Oklahoma Joins National Artificial Intelligence Safety Consortium
NORMAN, Okla., Feb. 12, 2024 /PRNewswire/ -- The University of Oklahoma has joined the newly formed U.S. Artificial Intelligence Safety Institute Consortium (AISIC), led by the U.S. Department of Commerce's National Institute of Standards and Technology (NIST). The consortium brings together one of the largest groups of AI developers, users, researchers and affected communities in the world to promote the creation of safe and trustworthy artificial intelligence.
"OU is a national leader in trustworthy AI for weather research and is at the forefront of AI/ML research in many fields, coordinated by our Data Institute for Societal Challenges. We're excited to apply OU's expertise to support the goals of this national consortium," said OU Vice President for Research and Partnerships Tomás Díaz de la Rubia.
OU's role in the consortium involves its Data Institute for Societal Challenges, led by director David Ebert, and the OU-led NSF AI Institute for Research on Trustworthy AI in Weather, Climate, and Coastal Oceanography, known as AI2ES and directed by Amy McGovern, the Lloyd G. and Joyce Austin Presidential Professor in the Gallogly College of Engineering and professor in the School of Meteorology.
AI2ES focuses on creating trustworthy AI for a variety of high-impact weather phenomena and on developing a modern workforce that can harness AI and machine learning for the benefit and safety of society.
OU's Data Institute for Societal Challenges will collaborate with fellow consortium members to address AI's complex challenges and support positive outcomes nationally and globally. It also will help shape guidelines that advance industry standards in AI development and deployment.
"By joining forces with NIST and fellow consortium members, we are committed to advancing AI trustworthiness, fairness and safety measures that align with societal norms and values, ultimately empowering our communities and fostering a future where AI technologies drive positive societal impact," Ebert said.
Established to support the Executive Order on the "Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence," issued Oct. 30, 2023, the AISIC will support activities that spur innovation and advance trustworthy and responsible AI. Consortium participants will provide expertise in 20 different areas, including human-AI teaming and interaction, AI governance, AI system design and development, responsible AI and more. Learn more about the Artificial Intelligence Safety Institute Consortium and see the complete list of consortium participants from NIST.
SOURCE University of Oklahoma