ND Experts on the opportunities, concerns and impacts of AI
In testimony before the Senate Judiciary Subcommittee on Privacy, Technology and the Law on Tuesday (May 16), OpenAI CEO Sam Altman proposed the formation of a U.S. or global agency that would license the most powerful AI systems and have the authority to “take that license away and ensure compliance with safety standards.”
The hearing, titled “Oversight of AI: Rules for Artificial Intelligence,” also featured IBM Vice President and Chief Privacy and Trust Officer Christina Montgomery — who is on the steering committee for the Notre Dame-IBM Technology Ethics Lab — and Gary Marcus, founder of Geometric Intelligence, a machine learning company.
In response to fast-paced and dramatic changes to the AI landscape, the federal government has also recently announced a $140 million investment to create seven new research institutes, and the White House is expected to issue guidance in the next few months on how federal agencies can use AI tools.
In light of these developments, University of Notre Dame experts reflect on the opportunities, concerns and impacts of AI on different fields — including entertainment and media, the arts, politics, the labor market, education and business. Several of these faculty are affiliates of the interdisciplinary Notre Dame Technology Ethics Center, which, among other initiatives, offers the University's undergraduate minor in tech ethics.
John Behrens: First step in addressing AI concerns is education
“Artificial intelligence is a type of software, and the more people treat it that way — rather than as some robotic being — the better off we will be,” said John Behrens, director of technology initiatives for Notre Dame’s College of Arts and Letters. “But we need to support education at all levels to get there. The questions society is facing because of AI are not only ethical but involve all the liberal arts: What are the economic impacts? What are the psychological impacts? What questions does this human-like fluency in language raise for issues of philosophy and theology? Notre Dame has a unique opportunity to bring to bear the full range of the liberal arts to help society tackle these issues.”
Nicholas Berente: Investing in research of AI's use, impacts and required guardrails is key
“Depending on how you define it, AI has been around for more than a half century,” said Nicholas Berente, professor of information technology, analytics and operations. “What is new — and what has people concerned — is the rather unbelievably rapid pace of recent advancements in AI. As soon as we get used to one set of capabilities, there is a new generation that surpasses them dramatically. The recent wave of generative chat technologies, such as ChatGPT by OpenAI and Bard by Google, has caught on like wildfire. People immediately found uses, for good and for bad, and the power of these generative tools has terrified many and has led to some curious decisions.”
Ahmed Abbasi: AI's major challenge is striking balance between innovation, precaution
“When it comes to technology, we know that regulation and governance often lag behind,” said Ahmed Abbasi, the Joe and Jane Giovanini Professor of IT, Analytics and Operations. “Case in point: the internet, mobile, social media, cryptocurrencies, etc. NIST recently came out with its AI risk management framework. The key components of the framework are to create a culture of governance, and then to map, measure and manage — all with the goal of supporting responsible AI tenets such as fairness, privacy and transparency.”
Christine Becker: AI’s impact on the Hollywood labor market
“While AI concerns alone probably won’t sustain long-term labor strife if producers give significant ground on monetary issues, it is possible the only force preventing the dramatic disempowerment of creative labor by artificial intelligence in just a few short years will be the strength and unity of organized labor this summer,” said Christine Becker, an associate professor of film, television and theater.
Sarah Edmands Martin: Can AI expand our understanding of creativity?
“The recent accessibility of AI raises a number of philosophical, ethical and political questions for the art world: For example, does AI hasten the automation of the creative industries, while exploiting vast archives of human labor?” said Sarah Edmands Martin, an assistant professor of art, art history and design. “After all, AI programs like Midjourney and Dall-E use human-generated artworks as training data without compensation, which many people — fairly — fear disempowers or displaces workers in creative industries.”
Lisa Schirch: AI has ability to undermine or potential to unite
“AI has the potential both to help humans make better decisions together and to undermine our ability to solve problems,” said Lisa Schirch, the Richard G. Starmann Sr. Endowed Chair at the Kroc Institute for International Peace Studies and professor of the practice in the Keough School of Global Affairs.
Yong Suk Lee: AI can stunt or complement the labor market
“Much of the attention and research have been focused on the development of these new large language models (LLMs), but relatively less research has been done on how we should use LLMs, the societal consequences of these models and potential policy recommendations,” said Yong Suk Lee, an assistant professor in the Keough School of Global Affairs. “For now, to be competitive in the labor market, workers may need to be proficient in using and prompting LLMs and, at the same time, be familiar with their limitations.”
Tim Weninger: How will AI affect public trust?
“I foresee an increasingly skeptical population and, with that, a consolidation of trust in information spaces,” said Tim Weninger, the Frank M. Freimann Associate Professor of Engineering and director of graduate studies in computer science and engineering. “A smaller and smaller set of organizations will have and hold public trust, and it will become increasingly more difficult for new organizations to build trust among a consumer base.”
Panos Antsaklis: AI and unintended consequences of inaccuracies, limitations
“These are serious concerns that media companies and government agencies need to address as soon as possible,” said Panos Antsaklis, the H. Clifford and Evelyn A. Brosey Professor in the Department of Electrical Engineering. “Generating inaccurate information may not always be malicious; it can also result from a lack of understanding of the limitations of these software programs.”