The world’s largest professional organization of writing educators disagrees with the notion that the rise of generative artificial intelligence in the classroom is unavoidable.
Earlier this month, the Conference on College Composition and Communication passed a resolution affirming the rights of students and faculty to refuse the use of generative AI in the writing classroom. A string of concerns underpinned the resolution, from “unsubstantiated claims about how generative AI increases productivity” to the technology’s corrosive implications for data privacy, labor rights, academic freedom, the environment and the critical thinking skills humans develop through the process of writing.
“The work of college writing instruction should be attentive to industry trends—among many other external factors—but not driven by the goal of workforce preparation through a narrow focus on specific technological skills,” reads the resolution. “As a profession, rhetoric, composition, and writing studies is committed to preparing students to write in a world that is bigger than just work. We understand that students learn to write to navigate uncertainty, gain access to resources, make sense of phenomena, connect with others, build community, process feelings and experiences, and engage in civic participation.”
The resolution, which the CCCC overwhelmingly approved at its annual convention in Cleveland two weeks ago, reflects the writing education community’s support for the right to opt out of using generative AI in the classroom, according to Jennifer Sano-Franchini, an associate professor of English at West Virginia University and immediate past chair of the CCCC.
“This is an academic freedom issue, and students and teachers should be able to make a choice. That’s something that’s being denied when people say things like, ‘You just have to use it,’ ‘It’s here to stay’ or ‘Students need to be able to use it for their careers,’” she said. “Those are all claims we can unpack more, but I’m not particularly convinced.”
The CCCC’s resolution comes three-plus years after OpenAI launched ChatGPT—which can generate research papers, essays and fictional stories in seconds—precipitating the current wave of partnerships between higher education and profit-driven tech companies.
At first, ChatGPT and other generative AI tools sparked fears among educators that cheating would become easier, more common and harder to catch. While data shows that is indeed happening, colleges and universities have also been inundated with the tech sector’s predictions that generative AI will wipe out many entry-level white-collar jobs, along with claims that AI-savvy job seekers will have a leg up.
“I felt pressured to learn about it and look into it,” Sano-Franchini told Inside Higher Ed. “Over time, I noticed some students using it inappropriately … Now I don’t ban it, but I don’t encourage it.”
Instead, Sano-Franchini, whose research explores the intersection of culture, power and technology, crafts writing assignments that may be difficult for a large language model to complete, in part by incorporating elements of previous class discussions. But she’s aware that other faculty members may be making different choices about how to integrate AI into their teaching, and she’s worried about what students are missing out on as many grow more reliant on such tools to write.
“These companies and the marketing they use prey on people’s writing anxieties. Writing is hard, and I can see why [offloading it to an LLM] is appealing to some people,” Sano-Franchini said. “But when people are not taking the time to read and understand what other people are saying and the arguments they’re making, it’s really difficult to have a shared conversation about a topic and develop our thinking about it.”
Some of her students—who, she says, have become increasingly negative about generative AI’s hold on modern culture—are also grappling with the implications of refusing the technology.
Colleen Benison, a master’s student studying writing and editing at WVU, said that while her program has been insulated from pressure to adopt generative AI, she knows it’s very much present for other students. And they should have the ability to opt out, she told Inside Higher Ed.
“If higher education is about gaining new knowledge and sharpening critical thinking skills and contributing to scholarly conversations, students are actively neglecting these things when they use AI,” said Benison, who doesn’t use generative AI at all. “There’s rhetoric about inevitability and not being left behind, but there’s more value in rediscovering why human intelligence is so valuable. I don’t think we’re being left behind by refusing it.”
‘Profiteers and Opportunists’
Despite the qualms of some students and faculty, many colleges and universities are rushing to hop on the generative AI bandwagon. Some are paying big money.
Numerous institutions, including Arizona State University, the California State University system and the University of Colorado at Boulder, have signed multimillion-dollar deals with tech companies to offer students and faculty access to proprietary generative AI tools in the name of workforce development and AI literacy.
Students and faculty have reported that they’re often left out of those decisions yet given little choice about whether to use generative AI.
According to a 2025 survey by the American Association of University Professors, 15 percent of faculty said their college or university mandates the use of AI, and 81 percent said they’re required to use learning management systems and other education technology embedded with AI tools that they can’t turn off. At the same time, 69 percent said AI is hurting student success and 95 percent stressed the importance of implementing meaningful opt-out policies.
The CCCC’s resolution says there’s agency in refusing AI.
“Refusal of generative AI enables us to take a step back from the compulsory opt-in culture that has become ubiquitous through Big Tech, and it (re)opens possible rethinking around how we interact with and engage corporate proprietary technologies that involve profiting from student and teacher data and intellectual labor, including plagiarism detection software, learning management systems, and telecommunication technologies,” it reads.
The CCCC isn’t the first professional academic society to publicize its concerns about the threats generative AI poses to teaching and learning, though others have stopped short of affirming faculty and students’ right to refuse the technology outright.
“Students of all kinds already rely on generative AI tools and will continue to do so,” reads the American Historical Association’s guiding principles for AI in history education. “Some committed educators have chosen to reject generative AI for its ethical, environmental, and economic consequences, but ignoring this technology will neither halt its spread nor shield our discipline and students from its reach.”
The CCCC intentionally left its resolution free of assumptions about the inevitable spread of generative AI. “We’re not saying you have to use it or you can’t use it,” Sano-Franchini said. “For a long time, people were made to feel like they don’t have a choice—if they didn’t want to use it, they’re putting their heads in the sand and avoiding the inevitable. But we’re trying to resist that and say there are really good reasons for not wanting to use it and here’s why.”
Even though other disciplines haven’t explicitly endorsed opting out, the CCCC has joined a growing opt-out movement that many individual faculty members support.
Last summer, more than 1,000 education professionals from universities across the globe signed an open letter in support of refusing “the call to adopt GenAI in education,” describing it as “a threat to student learning and wellbeing” fueled by “a massive marketing push to position these products as essential to students’ future livelihoods” despite “insufficient evidence” that they lead to learning gains.
For academics and others who want to advance the AI-resistance movement, the best path forward is to focus on the right to refuse the technology and to direct criticism toward tech companies, said Sonja Drimmer, an associate professor of medieval art and architecture at the University of Massachusetts at Amherst who has written at length about resisting generative AI in education.
“Worries about plagiarism are distractions to pit teachers against students so that we forget that our actual opponents in this battle are profiteers and opportunists,” she said. “The word ‘inevitability’ has long been used to defuse and deflate any kind of resistance or rejection to anything. It’s important to ask who is promoting that narrative and why.”
And while those and broader questions about generative AI’s ability to improve student outcomes remain largely unanswered, the higher education sector should take time to ask where the pressure to adopt the tools is coming from, Drimmer added.
That scrutiny is also why she believes the CCCC’s resolution offers such an effective defense against that pressure.
“Urgency is meant to flood the mind of the customer so they can’t take a pause to consider whether the thing that’s being sold really needs to be bought,” she said. “I see no need for urgency. I understand that the phrase ‘But we’re going to fall behind’ can be very convincing. But no one is really asking, ‘Fall behind what?’”
