Like it or not, generative artificial intelligence is here to stay; the majority of students nationwide now use it for assignments at least occasionally. Policing AI use is nearly impossible. Monitored in-class assessments prioritize quick thinking over deep thinking, and they disadvantage neurodiverse and multilingual learners. And no take-home assignment, however creative or personal, is fully “AI proof.”
Yet simply letting students use AI freely to generate ideas, explain difficult concepts and produce or revise writing removes the necessary “friction” upon which learning depends and accelerates cognitive decline.
So I have been attempting the “third option” recommended by both the Brookings Institution and the American Psychological Association: teaching AI literacy.
This year my 10th and 11th grade English students used AI itself as a text to advance their critical thinking skills. We still read novels and short stories, still engaged in discussions and wrote essays, but AI was now a regular part of our work together.
As we read, we examined how large language models’ recycled “analyses” of novels misread and oversimplified complex literature, producing distillations that often lacked nuance compared with the creative, discursive yet defensible readings the students themselves generated. They learned to distinguish genuine analysis from simplistic summary, and to distrust the allure of AI’s instant “correct answers.”
As we engaged in literary discussions, we sometimes invited chatbots into the conversation; many students described these interactions as “bizarre” and “disjointed,” adequate for reviewing plot but too circular or directionless for genuinely provocative dialogue. ChatGPT’s sycophancy in particular tended to kill the tension necessary for true debate. One student “started purposely saying dumb things just to see how GPT would still find a way to say ‘great idea.’ It just felt so fake.”
As we wrote, we compared LLM-generated essays with human-generated ones, teasing out how AI’s “sophisticated-sounding” yet “generic” prose differed from the “messier” but ultimately, in the students’ judgment, more engaging language they themselves created. In a world where everyone has access to LLMs, these students were discovering the value of developing a genuine voice. I hope at least some emerged thinking ChatGPT was best reserved for interoffice memos and letters to one’s utility company.
As we researched, we studied how AI search summaries — which make users 58% less likely to click through to underlying sources — don’t actually represent live internet searches, but instead reflect word proximity within a static corpus of text, one that lacks access to paywalled scholarly research and therefore draws disproportionately on unregulated chat forums.
Students examined whether LLMs accurately reported their sources and to what extent AI drew from ideologically extreme sites. They saw how the wording of a query — e.g., “is abortion safe” vs. “is abortion murder” — could lead to politically slanted results based on what the AI thought they wanted to see, and how sources often said something very different from what AI summaries claimed they did.
As we took and organized notes, students compared their manual note-taking process to the output of AI note-taking tools, learning how the choices made in summarizing — what to include or exclude, which emphasis and phrasing to use (did Africa under colonialism “fuel worldwide industrial production,” or were African resources and peoples “exploited for the benefit of Western industrial profit”?) — create and propagate different narratives.
These narratives do not arise accidentally; “what is ranked at the top” of AI searches “is ultimately influenced by the priorities of LLMs’ shareholders.” So we studied the politics of AI magnates like Sam Altman and Peter Thiel, learned how Google monitors and censors Gemini’s responses to political questions, and examined algorithmic bias (e.g., image generation requests for “doctor” returning mainly white males). All of this helped my students rethink their assumption that AI tools were neutral and simply utilitarian.
When we studied AI, we simultaneously studied neurological research about how humans, unlike LLMs, don’t just rely on pattern recognition but also make intuitive leaps, and we practiced with Edward de Bono’s lateral thinking activities. Students also did something AI couldn’t: relate classroom content to personal experiences.
One multilingual student recalled attending a business meeting with her father where he faltered because he “[knew] that someone who has the ability to speak English better [me] sat right next to him… ‘It makes me want to depend on you,’ he told me, ‘when I’m totally capable of doing so by myself.’ He did much better after I left.” The student then made the leap to consider how, even when AI help is readily available, perhaps we gain something by refusing to rely on it.
When I abandoned AI bans, I instituted AI audits. Students had to demonstrate a thoughtful, detailed evaluation of each AI tool they used, including knowledge of how it operated, what they felt they gained and lost by using it, how they verified the accuracy of its information, and how they had not relinquished their own thinking. The students didn’t necessarily conclude that “AI is always bad,” but they did see that using it always requires vigilance. Best of all, they didn’t have to take my moralizing word for any of this; they discovered it for themselves.
Yes, I had to teach fewer novels to make room for AI literacy, but ultimately my job is not to teach novels; it’s to teach students. Their insights — how Grammarly’s “correcting” of language altered integral parts of people’s unique voices, how personal evolution often comes from struggle and discomfort, how our desire for ease can hold us back from achieving our potential, how dangerous it is to invest authority in words just because they emerge from a machine — were as valuable as any takeaway they gleaned from novels. And this time I knew those takeaways were theirs, not ChatGPT’s.
I teach an affluent population, but teachers are employing similar critical AI literacy lessons with more economically and linguistically diverse learners. To be sure, my experience was often fraught. Some of my less-confident students never stopped considering LLMs’ “clear” and “well organized” writing superior to their own, and still hesitated to trust their own readings of literature over “the answers” ChatGPT offered.
I struggle with asking students to critically evaluate AI while their own linguistic and analytic skills are still developing, but I also know I cannot create the conditions that allow teenagers to become master writers and thinkers before they are exposed to AI; they will soon arrive in my classroom having used it since childhood.
Post-pandemic learning science suggests that, when teaching anything, we cannot wait for students operating well below grade level to “catch up” before introducing higher-order thinking skills; we have to figure out how to teach both simultaneously.
That requires creativity, and creativity is what makes humans superior to AI, which can only regurgitate already-created ideas. Teachers excel at creativity; every day we come up with new ways to meet the ever-changing needs of our students, and right now AI literacy is one of those needs.
Research suggests that this training is crucial for keeping AI users — a population swiftly becoming synonymous with “human beings” — from engaging in “cognitive surrender, marked by passive trust and uncritical evaluation of external information,” as opposed to “cognitive offloading, which involves strategic delegation of cognition during deliberation” when using AI.
Doom-prophecies about AI rendering English classes obsolete forget that the humanities are about studying what is human about us — including both our criticality and our adaptability.
Note: This is an abridged, non-scholarly version of a peer-reviewed article slated for publication in Issue 115.6 of NCTE’s English Journal.
