A recipe for trustworthy artificial intelligence
Notre Dame researchers share a new model for developing artificial intelligence systems users can trust.
This week, a group of tech industry leaders issued an open letter warning of the looming threats posed by artificial intelligence (AI), comparing them to the risk of "pandemics and nuclear war."
The open letter is just one of many recent attempts to draw attention to situations in which AI cannot be trusted and to raise questions about AI’s potential unfair or harmful effects.
A group of researchers at the University of Notre Dame say it is important to ask a slightly different question: What would it look like to develop artificial intelligence we can trust?
Working alongside technology experts within the U.S. military as well as with researchers at Indiana University – Purdue University Indianapolis (IUPUI) and Indiana University, they are developing a comprehensive, systematic approach to creating trustworthy AI.
Their project, called “Trusted AI,” has identified six widely shared values they call the “dimensions of Trusted AI.” The six dimensions are:
- Explainability - Can we explain how the AI arrives at inferences?
- Safety and robustness - Will the AI work as expected—not just in the lab but in real, live contexts?
- Fairness - Can we ensure the AI will not reproduce patterns of bias and discrimination?
- Privacy - Are we confident that the data the AI uses will be held safely and confidentially?
- Environmental wellbeing - Can the AI be trained and developed with minimal negative environmental impact?
- Accountability and auditability - Can we identify who is responsible, and can we confirm that the AI is working as expected?
The key challenge in developing trustworthy AI, the researchers say, is to ensure that each of the dimensions informs every stage in the process, from the initial collection of data to the output, or “inference,” the AI provides. Only when there is an unbroken “chain of trust” can we be sure the end result is trustworthy.
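The idea of an unbroken "chain of trust" from data collection to inference can be illustrated with a minimal sketch. The stage names and checks below are invented for illustration; they simply show how each pipeline step can be gated by a trust check and logged so the final output carries an auditable trail.

```python
# Hypothetical sketch of an unbroken "chain of trust": each pipeline
# stage records what it did and is gated by a trust check, so the final
# inference carries an auditable provenance trail. Stage names invented.
from dataclasses import dataclass, field

@dataclass
class TrustedArtifact:
    payload: object
    provenance: list = field(default_factory=list)

    def step(self, stage, check, fn):
        """Apply fn only if its trust check passes, and log the stage."""
        if not check(self.payload):
            raise ValueError(f"trust check failed at stage: {stage}")
        return TrustedArtifact(fn(self.payload), self.provenance + [stage])

# Toy run: collect -> clean -> infer, each gated by a (trivial) check.
data = TrustedArtifact([1, 2, 3, None])
cleaned = data.step("collect+privacy-review",
                    lambda d: len(d) > 0,
                    lambda d: [x for x in d if x is not None])
result = cleaned.step("inference+fairness-audit",
                      lambda d: all(x >= 0 for x in d),
                      lambda d: sum(d) / len(d))
print(result.provenance)  # every stage the output passed through
print(result.payload)     # the inference itself
```

If any single check fails, the chain breaks and no inference is produced, which is the property the researchers describe: trust in the end result requires trust at every stage.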
The lead principal investigator behind the Trusted AI project is Christopher Sweet, associate director for cyberinfrastructure development at Notre Dame's Center for Research Computing (CRC).
Sweet, who has a concurrent appointment as an assistant research professor in the Department of Computer Science and Engineering, emphasizes that the process of developing Trusted AI is more a cycle than a one-and-done effort.
“It is an iterative process,” Sweet explains. “These technologies are constantly evolving—as are the data sets they depend on and the social contexts in which they are used. It is not about declaring victory. It is about showing that Trusted AI is an ongoing practice that requires that all stakeholders are involved and engaged.”
Charles Vardeman, a computational scientist at the CRC and a research assistant professor in the Department of Computer Science and Engineering, leads a sub-project of Trusted AI. Vardeman says the team is working to prevent harm by AI far beyond the technologies and applications that receive public attention.
“People are aware that AI powers things like Alexa and ChatGPT, but that is really just the tip of the iceberg,” he says. “Most people are interacting with AI regularly without knowing it. It is shaping their purchasing decisions online, and it is even helping to determine the medical care they receive.”
Adam Czajka, an assistant professor in the Department of Computer Science and Engineering, is leading a Trusted AI sub-project that focuses on ways humans and machines, when paired together, can arrive at highly trustworthy decisions. He and his colleagues have developed a way of training AI to recognize fake images by training it to mimic human perception.
Another Trusted AI subproject, led by CRC senior associate director and professor of the practice Paul Brenner, applies the Trusted AI recipe to create technology for the U.S. Navy.
Brenner, who is a faculty affiliate of iNDustry Labs, ND Energy, and the Wireless Institute, explains, “If a mission or weapon system fails, there is often more data available about that failure than any one person could read. New machine learning tools like natural language processing and knowledge graphs could help mine the data to identify the underlying cause of the failure.”
The obstacle, Brenner says, is that most commercially available machine learning tools are a “black box.” They create inferences on the basis of large sets of data. What they do not provide is an explanation about how or why they arrived at a particular inference.
Brenner’s team is developing a new approach for military applications that goes beyond the “black box.” In collaboration with the U.S. Navy installation near Crane, Indiana (Crane NSWC), Brenner and a group of ten Notre Dame undergraduate student researchers are building machine learning tools that are trained with a set of special, pre-labeled data for more accurate and more explainable outcomes.
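The contrast between a "black box" and a model trained on pre-labeled data for explainable outcomes can be sketched with a toy text classifier. Everything below is invented for illustration (the failure reports, labels, and categories are not from the project); the point is that a simple model over labeled data can report not only its inference but also which words drove it.

```python
# Hypothetical sketch: a tiny naive Bayes classifier over pre-labeled
# failure reports. Unlike a black box, it can surface which words
# contributed most to an inference. All training data here is invented.
import math
from collections import Counter, defaultdict

# Invented pre-labeled training reports (text, failure category).
TRAIN = [
    ("hydraulic pressure dropped before actuator stalled", "mechanical"),
    ("seal failure caused hydraulic fluid leak", "mechanical"),
    ("firmware update triggered sensor timeout", "software"),
    ("controller rebooted after software watchdog timeout", "software"),
]

def train(examples):
    word_counts = defaultdict(Counter)  # label -> word frequencies
    label_counts = Counter()
    vocab = set()
    for text, lab in examples:
        label_counts[lab] += 1
        for w in text.split():
            word_counts[lab][w] += 1
            vocab.add(w)
    return word_counts, label_counts, vocab

def classify_and_explain(text, word_counts, label_counts, vocab):
    """Return (best_label, per-word log contributions for that label)."""
    scores, contribs = {}, {}
    total = sum(label_counts.values())
    for lab in label_counts:
        n = sum(word_counts[lab].values())
        score = math.log(label_counts[lab] / total)
        contribs[lab] = {}
        for w in text.split():
            # Laplace smoothing keeps unseen words from zeroing the score.
            p = (word_counts[lab][w] + 1) / (n + len(vocab))
            contribs[lab][w] = math.log(p)
            score += math.log(p)
        scores[lab] = score
    best = max(scores, key=scores.get)
    return best, contribs[best]

wc, lc, vocab = train(TRAIN)
label, contribs = classify_and_explain(
    "hydraulic leak detected near actuator", wc, lc, vocab)
top_word = max(contribs, key=contribs.get)
print(label)     # the inferred failure category
print(top_word)  # the single word that contributed most to the inference
```

The per-word contributions are what a black-box tool omits: they let an analyst confirm that the inference rests on sensible evidence rather than a spurious pattern.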
Brenner emphasizes that in addition to the new tools and techniques his project develops, it will also have broader impacts that continue to reverberate through the coming decades.
“We are looking forward to sharing what we learn with a broad group of students,” Brenner says. In addition to training the students directly involved in the research, the team will also educate younger students in the principles of Trusted AI through presentations and by welcoming 40 high school students to Notre Dame’s campus for the CRC’s Summer Scholars program.
“We are developing a new approach to AI that is urgently needed,” Brenner says, “and at the same time, we’re developing people—the future military officers, scholars, and tech industry leaders who will make trusted AI a reality.”
Trusted AI is part of the Scalable Asymmetric Lifecycle Engagement (SCALE) workforce development program funded by the Office of the Undersecretary of Defense for Research and Engineering Trusted & Assured Microelectronics program.
About the Center for Research Computing
The Center for Research Computing (CRC) at the University of Notre Dame is an innovative, multidisciplinary research environment that supports collaboration and facilitates discoveries through advanced computation, software engineering, artificial intelligence, and other digital research tools. The Center enhances the University’s innovative applications of cyberinfrastructure, provides support for interdisciplinary research and education, and conducts computational research. Learn more at crc.nd.edu.
About Notre Dame Research
The University of Notre Dame is a private research and teaching university inspired by its Catholic mission. Located in South Bend, Indiana, its researchers are advancing human understanding through research, scholarship, education, and creative endeavor in order to be a repository for knowledge and a powerful means for doing good in the world. For more information, please see research.nd.edu or @UNDResearch.
Contact:
Brett Beasley / Writer and Editorial Program Manager
Notre Dame Research / University of Notre Dame
bbeasle1@nd.edu / +1 574-631-8183
research.nd.edu / @UNDResearch
Originally published by crc.nd.edu on June 02, 2023.