We are ready for an exciting set of conversations about AI and higher education. We want to talk with faculty about what Ethan Mollick’s idea of “co-intelligence” means in practice in the classroom. We need to engage the employers of our graduates in redesigning the curriculum. We are even ready for difficult conversations, like what happens when a professor wants a class to use AI but some students have ethical objections.
These conversations are happening among the influencer class of AI writers and thinkers, but not as much in the day-to-day work with faculty. This past year, we’ve realized there is a set of conversations and interventions we have to move beyond if we want to get to the good stuff.
The artificial intelligence era in higher education seems too new for it to be overrun with myths and outdated advice, but here we are. Popular blogs highlight archaic tricks for detecting AI writing, and teaching and learning websites recommend tactics that have not worked for more than a year. Some of this is to be forgiven—we have never seen a technology move this fast, so adaptation is ongoing rather than a one-off event.
As faculty members who are deeply invested in understanding the good and bad of AI in modern classrooms, our goal here is to discuss some of the more prominent and stubborn myths that persist in conversations about AI and teaching and learning.
I Know It When I See It
Guides to spotting AI-produced work continue to be posted on college and university websites, and tech publications and journalists continue to publish advice along the same lines. Casey Newton from Platformer and Hard Fork recently said of AI writing, “I can always tell.”
This is empirically untrue. Study after study confirms that humans are incredibly bad at detecting AI writing: It’s more the case that we can spot people who are bad at using AI. Consider some of the advice common in these guides, such as that an abundance of em dashes or bullet points indicates likely AI involvement. You might well find students using lots of em dashes and, in conversation, discover they have used AI. But this is selection bias: you never hear about the other students who also used AI but whom you failed to identify. Spotting unauthorized AI use ultimately becomes a process based on vibes, one that necessarily introduces instructor bias into yet another aspect of teaching.
AI Can’t Do Personal Reflection
This was a key component of a popular article in The Atlantic from the fall in which the author, the linguist John McWhorter, posited, “I have also found ways of posing questions that get past what AI can answer, such as asking for a personal take … that draws from material discussed in class.” This belief that asking for personal reflection is a form of AI-proofing is remarkably widespread, even on the websites of centers for teaching and learning.
But this is just basic prompting. Say you assign a personal reflection that asks students to think critically about how they are shaped by social inequalities (race, gender, class) in their everyday lives. Student A might copy the assignment instructions into an LLM and write, “Do this assignment for me.” Student B copies the instructions but also adds personal details such as race and ethnicity, income, age and place of residence. We strongly suggest everyone try this out for themselves with their own assignments. There are even websites that allow users to “humanize” their AI-generated text, meaning you may not even need to be good at prompting to do this.
The Calendar Trick
The 2023 versions of ChatGPT were trained on data only up to 2021. So a popular tactic early on was to ask questions about current events that the systems did not have access to. This has somehow persisted as an approach. It is something we have both encountered working with faculty, and it even shows up in modified form in some guides, like this one from Montclair State University that recommends connecting assignments to “very recent events” as that is context “AI will not have.”
Obviously, this advice is no longer viable, as some models like Grok and Llama are fully integrated into real-time social media networks, and other providers are in negotiations with news outlets so that outputs can draw on breaking news.
Trojan Horse
In 2024, a teacher went viral with the strategy of embedding hidden instructions in an essay prompt using small, white font. The idea is that the student will unwittingly copy and paste the whole assignment guide into the AI platform, which will read the hidden prompt. The resulting output will then include something the instructor called for, like references to a movie or a theorist not covered in class. Of course, this would also work with STEM prompts, producing errant pieces of code or unnecessary components of a proof.
At best this is “gotcha” teaching, where instructors turn from teachers into pranksters. Is this the role we want to be playing? If your goal is to catch students in the act, you’re not starting out from a place of good pedagogy. This is also an unreliable approach with a limited shelf life. The Trojan horse is basically a variation of a prompt-injection attack, a known security issue for LLMs that engineers are actively trying to resolve. The greater risk is that if an instructor relies on this as a tactic, it immediately absolves everyone who passes their “test.” Even if the trick works, you aren’t spotting all the students who are using the tools: You are only spotting the students who are bad at using them.
Claude’s Not in Our Classroom
Some universities recommend tying assignments closely to classroom discussions. Unlike the other approaches, this one is founded in good pedagogy. This sort of woven approach to instruction is absolutely a best practice—but it’s not an effective way to deter the use of large language models.
A student can simply add the context from the classroom discussion to the prompt and the resulting output will incorporate it. Yet we do think there is merit here, insofar as we view this as a best practice in teaching students how to prompt well.
Trying to get students not to use these tools seems to be a losing battle, and we are not even convinced it is one worth fighting. We did not even touch on the necessity of working with these tools to prepare students for a competitive job market where employers increasingly expect AI literacy and tool usage. The point of this piece was simply to correct a series of misconceptions and push back on the persistence of outdated advice.
We recognize that people are in different places with AI depending on the ethical questions they have and how much they have used the tools. There are plenty of unsettled questions about sustainability, learning loss, human creativity and labor. We are ready to have more of those conversations. We would prefer to have fewer instances where we have to break the news to one of our colleagues that the “one neat trick” they learned from TikTok two years ago doesn’t work anymore.
Zach Justus is director of faculty development and a professor of communication arts and sciences at California State University, Chico.
Nik Janos is a professor of sociology at California State University, Chico.
