Not a day goes by at my university without news popping up in my inbox regarding AI, from pedagogy to policy. Everyone who’s stood at the front of a classroom or sat on an education committee has advice to share. But what do they—what do you—know? To that end, there’s a test at the end of this essay. But before that, a little history:
The AI News Bulletin, as I think of it, started way back when ChatGPT was a baby that provided pablum responses and made adorable mistakes. Invariably, a professor would thunder about this silly device that did nothing and whose bland output was easily detectable.
That outlook changed as AI improved markedly, from Claude and its ilk to the new, improved Grammarly, which goes far beyond grammar and style.
The postings then split into two types:
- Gloom-and-doom academics wringing their collective hands over how this Frankenstein device was going to wreck education.
- Cheerful, self-congratulatory types who had somehow managed to implement AI in the classroom and could hardly wait to tell everyone.
It’s startling how many of the first type have never really tried out AI (just ask). And it’s disheartening how many self-proclaimed innovators don’t seem to realize how many of their students are relying on AI in non–pedagogically sanctioned ways. As a bright, articulate student responded when I asked why she’d rely on such assistance, “Well, it’s there.”
As AI became both pandemic and the new normal, two new voices entered the discussion:
- A new type of old-time professor who claimed to have solved the AI issue by either restricting all student work to in-class responses on paper or by seducing their students with the pleasures of reading and writing.
- A larger-picture pseudo pundit (rarely with any relevant credentials) who had a lot to say about ethics and the proper use of AI.
Most of them are delusional.
The restrictive type won’t admit that, with only blue books available, gone are the days of research papers and any other complex project that can’t be completed during class time.
The joy-of-humanities alt-type claims that having the students encounter Great Books will lure them away from artificial reading aids and that the experience will enable them to read prodigious amounts when just yesterday they didn’t seem able to read 20 pages a week. They also won’t admit that their examples of student self-reliance are cherry-picked, as recent studies of AI use among students indicate, or that they teach in exclusive colleges with class enrollments so small that even their 100-level literature courses are conducted like a zetetic graduate seminar. In the same vein, the solution “I interview each student about what they’ve written” is maddeningly obtuse about how most classes run.
Confronting AI as inevitable is more realistic, but such discussions inevitably lead to “the ethical use of AI,” to repeat what so many academics–turned–policy wonks call it. To cut through a great deal of speechifying: When is using AI OK and when is it bad? The divisive point is whether you’re using AI for informational or generative results. But where is the actual dividing line? What’s the difference between editing suggestions from a human or an AI? When does one level of assistance become a higher level? “Actually writing it for someone,” you might say, but what if AI merely makes suggestions, some of which you take and others that you reject? Not coincidentally, the same issues cloud cases of plagiarism, another malfeasance that became enormously easier when you didn’t have to locate a source and retype the words.
If you’re one of the prescriptivists or proscriptivists or ethicists weighing in on these issues, I invite you to take this test:
Multiple Choice: What’s the difference between
- checking a thesaurus for a synonym
- asking a friend
- typing it as a query for an AI
- asking a friend to look over a manuscript
- paying a freelance writer to do that
- asking Claude to make suggestions
- writing up a committee report
- collaborating on a report with other committee members
- asking ChatGPT to write up the report after feeding it the minutes
- Googling girls’ names for your upcoming baby
- checking an old phone directory for names
- asking AI for a name based upon the desired attributes of the baby
- getting medical advice from a doctor
- getting advice from a medical website
- getting medical advice from an AI
- “Siri, make a list of restaurants near me.”
- “Siri, make a list of restaurants near me, ranked in order of positive reviews on Yelp.”
- “Siri, knowing what kind of food I like to eat, suggest some appropriate restaurants near me.”
Essay questions:
Is relying on AI the same as plagiarizing from only one other source?
Is the ethical or responsible use of AI equivalent to citing your sources?
How is the AI prompt “write in the style of X” different from a human undertaking such a task?
What’s the difference between your summary of what happened and that of an AI? Does it matter if the AI is accurate and you’re not?
Which sin is more heinous, relying on AI to make your vision come to life in a Sora video or using ChatGPT to write your essay?
How much work can you remove from your labors and still call the job your own? Can you collaborate with AI? Can you collaborate 25 percent?
Extra credit:
Is there anything that even AI-averse people would use it for?
Are there uses of AI that people already rely on without knowing it?
What is AI better at than you?
I wish I had a good way to grade this exam, relying not on right and wrong but on imagination, human consciousness, the uses of technology and other subjects that remain open to interpretation no matter how many articles are published on them. But if you really want to know the score, feel free to use an AI-generated rubric.
David Galef is a professor of English and the creative writing program director at Montclair State University. His latest book is the novel Where I Went Wrong (Regal House, 2025).
