Social media platforms aren’t doing enough to stop harmful AI bots, research finds
While artificial intelligence (AI) bots can serve a legitimate purpose on social media — such as marketing or customer service — some are designed to manipulate public discussion, incite hate speech, spread misinformation or perpetrate fraud and scams. To combat potentially harmful bot activity, some platforms have published policies on using bots and created technical mechanisms to enforce those policies.
But are those policies and mechanisms enough to keep social media users safe?
New research from the University of Notre Dame analyzed the AI bot policies and mechanisms of eight social media platforms: LinkedIn, Mastodon, Reddit, TikTok, X (formerly known as Twitter) and Meta platforms Facebook, Instagram and Threads. The researchers then attempted to launch bots to test each platform's policy enforcement processes.
The researchers successfully published a benign “test” post from a bot on every platform.
“As computer scientists, we know how these bots are created, how they get plugged in and how malicious they can be, but we hoped the social media platforms would block or shut the bots down and it wouldn’t really be a problem,” said Paul Brenner, a faculty member and director in the Center for Research Computing at Notre Dame and senior author of the study. “So we took a look at what the platforms, often vaguely, state they do and then tested to see if they actually enforce their policies.”
The researchers found that the Meta platforms were the most difficult to launch bots on — it took multiple attempts to bypass their policy enforcement mechanisms. Although the researchers racked up three suspensions in the process, they were successful in launching a bot and posting a “test” post on their fourth attempt.
The only other platform that presented a modest challenge was TikTok, due to the platform’s frequent use of CAPTCHAs. But three platforms provided no challenge at all.
“Reddit, Mastodon and X were trivial,” Brenner said. “Despite what their policy says or the technical bot mechanisms they have, it was very easy to get a bot up and working on X. They aren’t effectively enforcing their policies.”
As of the study’s publishing date, all test bot accounts and posts were still live. Brenner shared that interns, who had only a high school-level education and minimal training, were able to launch the test bots using technology that is readily available to the public, highlighting how easy it is to launch bots online.
Overall, the researchers concluded that none of the eight social media platforms tested are providing sufficient protection and monitoring to keep users safe from malicious bot activity. Brenner argued that laws, economic incentive structures, user education and technological advances are needed to protect the public from malicious bots.
“There needs to be U.S. legislation requiring platforms to identify human versus bot accounts because we know people can’t differentiate the two by themselves,” Brenner said. “The economics right now are skewed against this as the number of accounts on each platform is a basis of marketing revenue. This needs to be in front of policymakers.”
To create their bots, the researchers used Selenium, a suite of tools for automating web browsers, along with OpenAI’s GPT-4o and DALL-E 3. The research, published as a preprint on arXiv, was led by Kristina Radivojevic, a doctoral student at Notre Dame, and supported by CRC student interns Catrell Conley, Cormac Kennedy and Christopher McAleer.
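To illustrate the general pattern described above — generate content with a language model, then drive a real browser to submit it — here is a minimal Python sketch. The URL, CSS selector and helper names are illustrative assumptions for a generic posting form, not the study's actual code or any specific platform's markup:

```python
# Hypothetical sketch of LLM-plus-Selenium bot posting.
# The login URL and the compose-box selector below are placeholders;
# real platforms require authentication and use different markup.

MAX_POST_CHARS = 280  # example per-post limit, as on X


def truncate_post(text: str, limit: int = MAX_POST_CHARS) -> str:
    """Trim generated text to fit a platform's character limit."""
    return text if len(text) <= limit else text[: limit - 1] + "…"


def publish_post(text: str, login_url: str) -> None:
    """Drive a browser session and submit `text` as a post."""
    # Imported lazily so the helper above is usable without Selenium installed.
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()  # requires a matching chromedriver on PATH
    try:
        driver.get(login_url)
        # ...authenticate here, then locate the compose box (placeholder selector)...
        box = driver.find_element(By.CSS_SELECTOR, "textarea[name='post']")
        box.send_keys(truncate_post(text))
        box.submit()
    finally:
        driver.quit()


if __name__ == "__main__":
    # The post text would come from a model such as GPT-4o in the study's setup.
    publish_post("Hello! This is a benign test post.", "https://example.com/login")
```

Because the browser is a real Chrome session rather than a raw HTTP client, automation like this looks largely indistinguishable from a human visitor to the platform, which is part of why weak enforcement lets such bots through.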
Contact: Brandi Wampler, associate director of media relations, 574-631-2632, brandiwampler@nd.edu