As institutions continue to grapple with the rise of artificial intelligence, the University of Richmond is investing in a cross-campus initiative to integrate AI into the student experience while addressing its ethical and societal implications.
Launched last fall, Richmond’s Center for Liberal Arts and AI aims to bridge AI with the critical inquiry and humanistic values of the liberal arts. A key piece of the initiative is engaging both student and faculty fellows in course development and campus programming focused on AI.
Lauren Tilton, center director, said the initiative is designed to embed AI into the learning experience in ways that are both practical and critically informed.
“To be a critical thinker in a liberal arts setting is to be creative and innovative and work with the latest technologies, but also to be an informed and critical user of them,” she said. Tilton noted that AI is not new, tracing back to academic research at Dartmouth College in the 1950s, and said that understanding its history is key to grappling with where it’s going.
“You can think even 15 years ago, people were worried about robots coming to get us all,” she said. “Even the way we think about AI and the concerns we have about it have changed over time.”
The initiative’s offerings range from a speaker series to workshops on integrating AI across disciplines, from the humanities to the social sciences. The center is also working to help faculty access professional development opportunities and create structured ways to share resources with neighboring institutions.
“We’ve been bringing in speakers to discuss how AI is being integrated across fields and industries, so students can hear directly from professionals and explore creative ways to use these technologies,” Tilton said.
The approach: Richmond is partnering with the Associated Colleges of the South to support a fellows program that brings together 23 faculty members from institutions across the region, including Washington and Lee University and Rhodes College. The goal is to explore pressing social, cultural and legal questions surrounding AI.
“The fellows have been really helpful in sharing what’s happening across institutions and what faculty need,” Tilton said. “It’s given us insight into how other colleges are structured, the policies they’re considering and how they’re engaging students in the classroom.”
“They’re experimenting with new technologies and asking, ‘When do we use them? When do we not?’” she added. “The partnership is really more of a conversation between schools.”
Richmond’s approach reflects a broader push across higher education to integrate AI into teaching and learning while emphasizing critical thinking and ethical use.
At Cornell University, researchers have created an online module aimed at helping students build critical thinking skills in the age of AI. The asynchronous, 75-minute module provides students with a shared language and foundational framework for critical thinking while helping instructors across disciplines connect those skills to course content.
Other institutions, such as Bryn Mawr College, have turned campus libraries into AI sandboxes—shared spaces for experimentation and ethical use. There, librarians facilitate workshops and one-on-one consultations with faculty and students on AI literacy and classroom applications.
“At Richmond, we’re not just helping students learn to work with these technologies; we’re also making sure they develop [AI] literacy along the way,” Tilton said.
Beyond academic integrity: Tilton said one key takeaway from launching the Center for Liberal Arts and AI is that both students and faculty are eager to move conversations about AI beyond academic integrity and toward fostering trust and collaboration.
“Students are using these technologies, and they’re becoming part of the fabric of their academic life,” she said. “Rather than focusing on academic integrity and cheating, we would be better off approaching this with empathy and generosity, working alongside students.”
Ultimately, Tilton said, higher education leaders should avoid assumptions about how students are using AI and engage them directly.
“The more we approach this collaboratively and from a place of trust—while also critically evaluating any new technology in a productive way—the better positioned we are to guide students.”
