
Bots increase online user engagement but stifle meaningful discussion, study shows

Bots increase user engagement, but at the cost of deeper human-to-human interactions, according to new research from Notre Dame's Mendoza College of Business.
John Lalor

Last July, Meta introduced AI Studio, a tool that lets users of Meta's Facebook and Instagram platforms create chatbots powered by artificial intelligence (AI). The bots can be used for specific tasks such as generating captions for posts, or more generally as an “avatar” — engaging directly with platform users via messages and comments. Tools similar to AI Studio have also been rolled out for Snapchat and TikTok.

In an interview with the Financial Times in December, Meta’s vice president of product for generative AI said, “We expect these AIs to actually, over time, exist on our platforms, kind of in the same way that accounts do. … They’ll have bios and profile pictures and be able to generate and share content powered by AI on the platform.”

As AI bots become more prevalent on platforms, especially bots that are able to generate new content, there are risks that these bots will share false information and overwhelm users’ social feeds with automatically generated content.

Those risks sparked a discussion about the role of bots on social media platforms. Although Meta removed some of its internally developed AI bots from its platforms, user-created bots remain. Additionally, with heavy investment in generative AI (GenAI) technologies — software programs that use AI to create content and interact conversationally with users — firms may continue to look for ways to increase user engagement on platforms through the use of AI bots.

GenAI bots are not the only bots that can interact with users. Bot accounts on platforms such as Reddit and X follow a series of pre-programmed rules to interact with users or moderate discussion.

Within Reddit communities, those bot accounts profoundly influence human-to-human interactions, according to new research from the University of Notre Dame.

Bots increase user engagement, but at the cost of deeper human-to-human interactions, according to “The Effect of Bots on Human Interaction in Online Communities,” recently published in MIS Quarterly by John Lalor, assistant professor of IT, analytics and operations, and Nicholas Berente, professor of IT, analytics and operations, both at Notre Dame’s Mendoza College of Business, along with Hani Safadi of the University of Georgia.

Nicholas Berente (Photo by Matt Cashore/University of Notre Dame)

Recent work has identified a taxonomy of bots — a system of classifying and categorizing different types of bots based on their functionalities, behaviors and operating environments.

Bots can be very simple or very advanced. At one end of the spectrum, rules-based bots perform simple tasks based on specific guidelines. For example, the WikiTextBot account on Reddit replies to posts that contain a Wikipedia link with a summary of the Wikipedia page. The bot’s automated nature allows it to see every post on Reddit via an application programming interface (API) to check each post against its hard-coded rule: “If the post includes a Wikipedia link, scrape the summary from the wiki page and post it as a reply.” These bots are called “reflexive” bots.
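As a concrete illustration, a reflexive bot of this kind can be only a few dozen lines of Python. The sketch below uses the PRAW library and Wikipedia's public REST summary endpoint; the credentials are placeholders, and WikiTextBot's actual implementation may differ.

```python
# A minimal sketch of a reflexive, rules-based bot in the spirit of
# WikiTextBot. Uses the PRAW library for Reddit and Wikipedia's public
# REST summary endpoint; credentials below are placeholders.
import re

import praw
import requests

WIKI_LINK = re.compile(r"https?://en\.wikipedia\.org/wiki/([^\s)\]]+)")

reddit = praw.Reddit(
    client_id="YOUR_CLIENT_ID",        # placeholder credentials
    client_secret="YOUR_CLIENT_SECRET",
    user_agent="wiki-summary-bot/0.1",
    username="YOUR_BOT_ACCOUNT",
    password="YOUR_BOT_PASSWORD",
)

# Stream every new submission and apply the single hard-coded rule:
# if the post includes a Wikipedia link, reply with the page summary.
for post in reddit.subreddit("all").stream.submissions():
    match = WIKI_LINK.search(post.url or "") or WIKI_LINK.search(post.selftext or "")
    if not match:
        continue
    resp = requests.get(
        f"https://en.wikipedia.org/api/rest_v1/page/summary/{match.group(1)}",
        timeout=10,
    )
    if resp.ok:
        summary = resp.json().get("extract")
        if summary:
            post.reply(summary)  # the bot's only action: one reflexive reply
```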

Other bots on Reddit moderate conversations in communities by, for example, deleting posts that contain content that goes against community guidelines based on specifically defined rules. These are known as “supervisory” bots.
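A supervisory bot follows the same streaming pattern but enforces rules rather than matching content. The sketch below is a minimal, hypothetical example: it assumes the bot account holds moderator permissions in a placeholder community, and the banned-phrase list is invented for illustration.

```python
# A minimal sketch of a supervisory (moderation) bot. Assumes the bot
# account has moderator permissions; the rules are invented examples.
import praw

BANNED_PHRASES = ("buy followers", "crypto giveaway")  # hypothetical rules

reddit = praw.Reddit(
    client_id="YOUR_CLIENT_ID",
    client_secret="YOUR_CLIENT_SECRET",
    user_agent="community-mod-bot/0.1",
    username="YOUR_MOD_ACCOUNT",
    password="YOUR_MOD_PASSWORD",
)

# Watch new comments in one community and remove rule-breaking content.
for comment in reddit.subreddit("examplecommunity").stream.comments():
    if any(phrase in comment.body.lower() for phrase in BANNED_PHRASES):
        comment.mod.remove()  # requires moderator privileges
        comment.reply("Removed: this comment violates community guidelines.")
```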

“While these bots are rigid because of their rules-based nature, bots can and will become more advanced as they incorporate generative AI technologies,” said Lalor, who specializes in machine learning and natural language processing. “Therefore, it’s important to understand how the presence of these bots affects human-to-human interactions in these online communities.”

Lalor and his team analyzed a collection of Reddit communities (subreddits) that experienced increased bot activity between 2005 and 2019. They analyzed the social network structure of human-to-human conversations in the communities as bot activity increased.

The team noticed that as the presence of reflexive bots (those that generate and share content) increases, there are more connections between users. The reflexive bots post content that facilitates more opportunities for users to find novel content and engage with others. But this happens at the cost of deeper human-to-human interactions.

“While humans interacted with a wider variety of other humans, their interactions involved more single posts and fewer back-and-forth discussions,” Lalor explained. “If one user posts on Reddit, there is now a higher likelihood that a bot will reply or interject itself into the conversation instead of two human users engaging in a meaningful back-and-forth discussion.”
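To make the breadth-versus-depth distinction concrete, here is a toy calculation (an illustration only, not the paper's methodology): build a graph of who replies to whom, then compare the number of distinct partners per user with how often the same pair of users trades repeated replies.

```python
# A toy illustration of breadth versus depth in a reply network: many
# distinct partners per user, but few sustained exchanges between the
# same pair. The reply records are hypothetical.
from collections import Counter

import networkx as nx

# Hypothetical reply records: (replier, parent_author).
replies = [
    ("alice", "bob"), ("bob", "alice"), ("alice", "bob"),  # a back-and-forth
    ("carol", "bob"), ("dave", "bob"), ("erin", "bob"),    # one-shot replies
]

# Breadth: distinct conversation partners per user.
G = nx.Graph()
G.add_edges_from(replies)
avg_partners = sum(d for _, d in G.degree()) / G.number_of_nodes()

# Depth: pairs of users who trade more than one reply.
pair_counts = Counter(frozenset(pair) for pair in replies)
deep_pairs = sum(1 for count in pair_counts.values() if count > 1)

print(f"average distinct partners per user: {avg_partners:.2f}")
print(f"pairs with a sustained back-and-forth: {deep_pairs}")
```

The pattern the researchers report corresponds to the first quantity rising while the second falls as reflexive bot activity grows.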

At the same time, introducing supervisory bots coded to enforce community policies diminished the role of the human moderators who establish and enforce community norms.

With fewer bots, key community members would coordinate with each other and the wider community to establish and enforce norms. With automated moderation, that coordination becomes less necessary, and those human members become less central to the community.

As AI technology improves, especially generative AI, users will be able to deploy bots through new accounts, and firms will be able to use them to coordinate content moderation and drive higher levels of engagement on their platforms.

“It is important for firms to understand how such increased bot activity affects how humans interact with each other on these platforms,” Lalor said, “especially with regard to their mission statements — for example, Meta’s statement to ‘build the future of human connection and the technology that makes it possible.’ Firms should also think about whether bots should be considered ‘users’ and how best to present any bot accounts on the platform to human users.”

Contact: John Lalor, 574-631-5104, john.lalor@nd.edu

Originally published by Shannon Roddel at news.nd.edu on January 29, 2025.
