Several AI-powered platforms aimed at fostering civil dialogue have emerged in recent years.
Over the past few years, higher education institutions have adopted emerging artificial intelligence tools in an effort to enhance nearly every aspect of campus life—not just teaching and learning but also admissions, alumni networks, fundraising and advising. Now some are even experimenting with AI’s ability to advance one of the hottest trends on college campuses: fostering constructive dialogue among students, who are more divided over politics now than at any point in the past 40 years.
That’s largely a reflection of the broader political polarization that has plagued American society over the past decade, a dynamic that intensified on college campuses amid the pro-Palestinian protests that broke out during the Israel-Hamas war. Indeed, the share of students who said they were uncomfortable sharing their political views on campus climbed from 13 percent in 2015 to 33 percent in 2024.
To help bridge those divides, colleges are increasingly partnering with organizations aimed at promoting civil dialogue, including Braver Angels, BridgeUSA, the Institute for Citizens and Scholars, and the Constructive Dialogue Institute. And lately, AI is becoming part of the conversation.
“Most of the dialogue programs out there aren’t scalable,” said Mylien Duong, chief impact officer of CDI, which launched in 2017. “The power of AI is that it can provide coaching and feedback in real time without having to rely on human power.”
A host of AI-powered constructive dialogue platforms have emerged in recent years; CDI is piloting an AI-enabled component for its Perspectives learning program, which uses a mix of peer-to-peer conversation and online learning modules to equip students with the skills they need to have tough conversations.
The program’s new AI chatbot seeks to further that mission by coaching students on actively listening to a person with an opposing view, expressing their views without becoming defensive or upset, and finding common ground amid fundamental disagreement. A chatbot presents students with a hypothetical scenario—ranging from a roommate dispute to a heated debate about abortion, immigration or war in the Middle East—and gives them feedback on their responses.
“Having difficult conversations with real people in real time can be hard,” said Lindsay Hoffman, an associate professor of political communication at the University of Delaware, who is beta testing CDI’s AI tool with her students this semester. “The AI component creates a safe space where students can express ideas that they may not feel comfortable expressing to another human being.”
So far, her students say it’s been mostly helpful.
“You got to practice without the fear of messing up,” wrote one student on an anonymous feedback survey. “The practice coach made the lessons feel interactive and realistic,” wrote another. “It gave clear prompts and scenarios that helped me actually practice skills like perspective taking, looping, and asking constructive questions instead of just reading about them. It also helped me slow down and think about how to respond in a calmer, more respectful way.”
However, AI has limits as a constructive dialogue tool, according to a white paper CDI published late last month. In short, the more freedom a generative AI tool has to shape conversations, the riskier it becomes.
The paper identified three roles that emerging AI-powered constructive dialogue tools have assumed: in addition to the coach, which helps students build individual dialogue skills, AI can act as a mediator, facilitating conversations across differences, or as a conversation partner that engages students in disagreement.
“Across those roles, we see the most benefits and fewest risks when AI’s role in dialogue is most constrained and pedagogically focused,” said Ryan Carlson, a research scientist at CDI and author of the paper. That means programming chatbots to give focused prompts and meaningful feedback that pushes students to develop their dialogue skills—and doesn’t just tell them what to say.
“Coaches are the most promising path for beginning this process of institutions starting to engage in a more intentional way with AI on campus,” Carlson added. “The vast majority of students are already using AI without any guardrails, and the evidence suggests it’s essential that we start investing more in evidence-based tools that are built by educators that can help ensure students experience the friction required to learn dialogue skills.”
Risky AI Mediator, Adversary
While the coaching role poses minimal risks—mostly tied to easy-to-remedy design and interface issues—deploying AI as a mediator or discussion partner presents deeper hazards, according to the paper.
When tasked with mediating a live conversation between two students, “the AI becomes responsible for determining what counts as an appropriate topic for dialogue,” Carlson said. For instance, “It runs the risk of making false equivalencies. AI could potentially treat as equivalent a well-evidenced claim and one that is based on very little empirical support, whereas a trained human mediator would be able to detect that problem right away.”
It’s also not clear how well an AI mediator can handle heated conversations or recognize when human oversight is warranted, he added. But the riskiest of all of those roles is using AI as a debate partner; Carlson said the bot often struggles to offer “an accurate representation of someone holding [an opposing] belief.” That’s in part because, as other research shows, AI is far more persuasive than the average human—even when it relies on false or misleading information to craft its arguments.
“If AI is capable of changing our minds in a way that’s more powerful than humans and it’s then deployed at a massive scale, we’re opening up a can of worms,” said Duong of CDI. “Who gets to say what’s in bounds and what’s out of bounds, what a conspiracy theory is and is not, or what’s evidence based and what’s not? The AI has to point in a specific direction, and behind that specific direction is a set of human designers. It concentrates a lot of power in the design of the AI.”
