
AI's next frontier: Bridging the gap between knowledge work and skilled labor

Corey Angst’s research reveals how manufacturing workers might use AI tools to improve efficiency and quality control.

Generative artificial intelligence, which uses patterns in existing data to create new content, is everywhere, or at least it would seem that way. Its development and impact dominate the news. Banners promoting its adoption sweep across search engines, email platforms and word-processing software, if it hasn’t been implemented by these programs already. Social media users and teachers alike lament the struggle to identify whether text was written by a real person or AI. Even major news organizations have copped to using AI to write entire articles.

Yet the ubiquity of AI is still limited to certain areas of industry, particularly office jobs. Chatbots built on large language models (LLMs), such as ChatGPT and Gemini, analyze and predict text, which is why AI is often associated with language processing. Multimodal large language models (MLLMs) further incorporate images, audio and video, but so far that hasn’t meant AI has spread to every area where humans analyze images and audio. At least, not yet.

“Just about everybody has thought about AI from the perspective of what we would call white-collar work or knowledge work,” said Corey Angst, the Jack and Joan McGraw Family Collegiate Professor of IT, Analytics, and Operations (ITAO) and incoming department chair at the University of Notre Dame’s Mendoza College of Business. “We’re trying to shift that focus. We’re asking, how can we use AI on the shop floor in manufacturing?”

Angst has spent the bulk of his career exploring how IT innovations affect society. His earliest area of interest was health care IT, but his research has expanded to other industries as the development of artificial intelligence has accelerated in recent years. Thanks to Notre Dame’s location in a region rich in small- and medium-sized manufacturing firms, he has benefited from meeting directly with key industry leaders to learn about their interests and their ability to adopt AI technology.

“While AI in general is very good at prediction, it's not as good at judgment,” Angst said, explaining why AI hasn’t become commonplace on the shop floor. “It's getting better at judgment, but when people think about AI in a manufacturing setting, they immediately go to the use of robots and cobots [collaborative robots], but they don't think of what could assist with decision making in collaboration with a human being.”

In 2025, Angst and a multidisciplinary team of researchers published the paper “Do multimodal large language models understand welding?” in the journal Information Fusion. The study explores the opportunities and limitations of using AI in manufacturing settings to assist workers with decision making.

Corey Angst (Photo by Barbara Johnston/University of Notre Dame)

The study received funding from the U.S. National Science Foundation Future of Work program and was co-authored by Nitesh Chawla, the Frank M. Freimann Professor of Computer Science and Engineering at the University of Notre Dame and the founding director of the University’s Lucy Family Institute for Data and Society; Grigorii Khvatski, a doctoral student in Notre Dame’s Department of Computer Science and Engineering and a Lucy Family Institute Scholar; Yong Suk Lee, associate professor of technology, economy and global affairs in Notre Dame’s Keough School of Global Affairs and program chair for technology ethics at Notre Dame’s Institute for Ethics and the Common Good; Maria Gibbs, senior director of Notre Dame’s iNDustry Labs; and Robert Landers, Advanced Manufacturing Collegiate Professor in Notre Dame’s College of Engineering.

In the paper, Angst and his co-authors introduced an MLLM-based approach called WeldPrompt and evaluated how well it assessed weld quality in three manufacturing sectors: RV and marine, aeronautics and farming. They compiled a dataset of weld images, some collected from pictures available online and others from real-world or “in-the-wild” photos taken on shop floors. WeldPrompt then identified which welds were acceptable for a number of applications, and its assessments were compared with those of a human weld expert.
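The shape of that evaluation can be sketched in a few lines of code. The following is a hypothetical illustration only, not the authors’ WeldPrompt pipeline: it sends a weld photo to a general-purpose multimodal model and tallies how often the model’s verdict agrees with an expert’s label. The model name, prompt wording, file paths and labels are all placeholders.

```python
# Hypothetical sketch (not the WeldPrompt implementation): ask a general-purpose
# multimodal model whether a weld photo is acceptable, then measure how often its
# verdicts agree with a human expert's labels. Model, prompt and paths are placeholders.
import base64
from pathlib import Path

from openai import OpenAI  # assumes the OpenAI Python SDK (v1+) is installed

client = OpenAI()

PROMPT = (
    "You are inspecting a weld for structural use. "
    "Reply with a single word: ACCEPT or REJECT."
)

def classify_weld(image_path: Path) -> str:
    """Send one weld photo to the model and return its ACCEPT/REJECT verdict."""
    b64 = base64.b64encode(image_path.read_bytes()).decode()
    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": PROMPT},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
            ],
        }],
    )
    return resp.choices[0].message.content.strip().upper()

# Hypothetical expert labels: image file -> the human inspector's verdict.
expert_labels = {
    Path("welds/shop_floor_001.jpg"): "ACCEPT",
    Path("welds/shop_floor_002.jpg"): "REJECT",
}

matches = sum(classify_weld(path) == label for path, label in expert_labels.items())
print(f"Agreement with expert: {matches}/{len(expert_labels)}")
```

The study itself spanned many more images, sectors and acceptance criteria; the sketch only shows the basic loop of model judgment compared against expert judgment.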

WeldPrompt performed quite well at identifying quality welds when analyzing online images, which was unsurprising since the MLLM’s training data came from online sources; it had presumably already seen many of those images and the metadata associated with them. What was surprising, however, was its ability to assess in-the-wild welds it had not previously encountered.

“I hesitate to say it was 'good,' but it was at least decent at analyzing the quality of a weld taken in the wild,” said Angst. “And as I began experimenting with it at home, it actually surprised me by identifying other things without me asking it to.”

For example, when he showed the MLLM a weld on his high-performance bicycle, it not only identified it as a high-quality weld but also recognized that it was located in the bottom bracket area of a bicycle. It did the same with an old, painted steel weld on the post of a basketball hoop.

“This tells us that there are opportunities on shop floors to use MLLMs,” Angst said. The most obvious is quality control. “There already is human quality control in everything we're talking about here, but if the AI is trained to the extent that it can identify very precise requirements for welds, and then assess whether those welds meet those requirements, it potentially takes a human out of that role.”

Just as the discourse around AI and office work inevitably leads to concerns about job losses, the use of MLLMs in manufacturing raises the same worries. Angst, though, is dubious that such an outcome is guaranteed. Rather, he believes that MLLMs can create more opportunities for people.

“There's no question AI will replace certain jobs, but I also think that it will provide an additional tool for human beings that allows us to be better at our jobs,” he said. “Ultimately, I think it's going to be able to recommend better practices, too.”

Once an AI system has learned all the variables that go into a task such as welding, it can combine that training with its knowledge of the optimal outcome and potentially develop a more efficient approach to the task, one that human experts in the field might not have considered or might never have arrived at without AI.

“Humans have blinders, or restrictions, that AI doesn’t have,” Angst said. “I wouldn't be at all surprised if one day AI can tell specifically which of the 25 welders in the shop actually did the weld based on the quality or certain anomalies in the weld.”

Taken to the next level, this kind of MLLM training could mean AI might one day be able to identify the exact cause of a plane crash or car accident, or even provide the mechanisms to prevent such disasters from taking place. Of course, the value proposition isn’t everything; benefits on that scale come with risks on that scale.

Studying the unintended consequences of new technology has always been the backbone of Angst’s research. In the past, he has studied how the introduction of certain information technologies in hospitals and doctors’ offices affected communication between providers and patients. In one study, he found that adding computers to exam rooms unexpectedly caused patients to feel alienated or ignored because their doctors turned away from them to enter information into the computer.

“We weren’t expecting that, but there’s been a lot of work to fix that problem,” he said.

More recently, Angst has become interested in studying the effects of ambient listening AI in doctors’ offices, in which a patient consents to having AI record the audio of a medical visit. He expects doctors’ behavior to change in subtle ways; for example, they might articulate things they wouldn’t have in the past because they want to be sure the AI captures them. In the future, ambient listening might even be able to diagnose conditions based on respiratory sounds humans can’t detect.

“Some people are very skeptical of this and concerned about their privacy, and all of those things are real,” he said. “It is incumbent on all researchers to think about these ethical concerns as we're going into this new frontier.”

Angst acknowledges that unintended consequences are inevitable and mitigating negative outcomes is a top priority, but he stresses that this is why the ethical guidelines that govern academic research are so crucial.

“Our institutional review board here at Notre Dame, and I'm sure everywhere, is always thinking about these concerns and the potential misuses of AI,” he said. “These guidelines existed before AI, but now with AI, it's just amplified their effect.”

As the incoming chair of the IT, Analytics, and Operations Department, Angst is particularly excited to see where scholarship in the field goes next.

“We have the resources, we have the people, we have the innovative entrepreneurial mindset to be one of the top programs in this area of analytics,” he said. “I feel like we're positioned now where we can just take off.”
