UMD Joins National Consortium Dedicated to Improving AI Trustworthiness and Safety

The University of Maryland joined with more than 200 of the nation’s leading artificial intelligence (AI) stakeholders in a wide-ranging new federal effort to improve the trustworthiness and safety of AI systems.

The AI Safety Institute Consortium (AISIC), announced on February 8, 2024, by U.S. Department of Commerce Secretary Gina M. Raimondo, brings together AI creators and users, academics, government and industry researchers, and civil society organizations, all focused on advancing the technical and societal benefits of AI while reducing its misuse and related risks.

“The AI Safety Institute Consortium will allow us to work closely with the federal government and multiple other stakeholders to implement, sustain, and extend priority projects involving research, testing and guidance on AI safety,” said Gregory F. Ball, UMD’s vice president for research. “Given that AI tools and applications are growing at an unprecedented pace, transforming society and changing our way of life, the potential benefits and risks of AI require a much closer examination and a more complete understanding if we are going to truly reap the benefits of this technology.”

In her announcement, Raimondo said that aligning AI with the nation's societal norms and values, and keeping the public safe, requires a broad, human-centered focus; specific policies, processes, and guardrails informed by community stakeholders across various levels of society; and a bold commitment from the public sector.

“The U.S. government has a significant role to play in setting the standards and developing the tools we need to mitigate the risks and harness the immense potential of artificial intelligence,” she said. “By working with this group of leaders from industry, civil society, and academia, together we can confront these challenges to develop the measurements and standards we need to maintain America’s competitive edge and develop AI responsibly.”

At UMD, activity for the new consortium will be led by both the Institute for Trustworthy AI in Law & Society (TRAILS) and the Applied Research Laboratory for Intelligence and Security (ARLIS). TRAILS launched last year with a $20 million grant from the National Science Foundation (NSF) and the National Institute of Standards and Technology (NIST). ARLIS, a Department of Defense (DoD) University Affiliated Research Center within UMD, supports research and development, policy, academic outreach, and workforce development in AI for the DoD and intelligence community.

TRAILS is expected to play a significant role in the new AI consortium, buoyed by strong research and scholarship already underway, as well as its advantageous location. Two of TRAILS' four primary institutions, the University of Maryland and George Washington University, have strong connections to policymakers, government agencies and federal labs in the D.C. region. TRAILS' other two primary institutions are Morgan State University in Baltimore and Cornell University in Ithaca, New York.

“We believe that safety and trustworthiness as they relate to AI are inextricably interwoven, particularly for communities whose voices have been shut out of the conversation for many of the AI systems in use today,” said Hal Daumé III, a UMD professor of computer science who is leading the TRAILS institute. “We expect our core mission—transforming the practice of AI from one that is technically centered to one that is driven by ethics, human rights and a diversity of input—will align well with activities in AISIC.”

To support this mission, TRAILS recently launched a $1.5 million seed grant program funding eight innovative projects designed to integrate a greater diversity of stakeholders into the AI development and governance lifecycle. TRAILS faculty also responded to a request from NIST issued in response to the Biden administration's executive order on AI safety announced last fall. The detailed responses, focused on socio-technical solutions, showcase the depth and breadth of TRAILS expertise in areas such as values-centered design, participatory AI approaches, trustworthy human-AI collaboration, and more.

Part of TRAILS' participation in AISIC involved the signing of a Cooperative Research & Development Agreement (CRADA) with NIST, allowing for the seamless transfer of funding, personnel, services, equipment, intellectual property, and resources between the federal government and the University of Maryland.

In addition to TRAILS, UMD has almost 200 researchers and a growing list of centers and programs spanning multiple disciplines that are developing tools for, exploring the safety and ethics of, and examining human interactions with AI.

About the College of Computer, Mathematical, and Natural Sciences

The College of Computer, Mathematical, and Natural Sciences at the University of Maryland educates more than 8,000 future scientific leaders in its undergraduate and graduate programs each year. The college's 10 departments and six interdisciplinary research centers foster scientific discovery with annual sponsored research funding exceeding $250 million.