This article is part of the collection: Teaching Tech: Navigating Learning and AI in the Industrial Revolution.
A fourth-grade teacher asked a simple question:
“What can I actually use this for in math?”
This teacher captured the broader moment in education. Over the past several years, schools have been urged to respond to the rapid emergence of generative AI tools such as ChatGPT, often with limited information and amid a swirl of hype and horror stories. Some have framed the technology as potentially transformative for teaching and learning, while others claim the opposite. Yet in many classrooms, adoption has been slower and more selective than the surrounding hype might suggest.
That hesitation is often interpreted as resistance to innovation, but conversations with educators suggest a different explanation. In many cases, teachers behave as experts in most fields do when encountering a new technology: they evaluate whether it solves a real problem. When professionals encounter a tool that is widely marketed but still evolving, they ask a basic question: What does this actually help me do better?
For many educators, that question remains unresolved when it comes to classroom instruction, and that’s what our research project aimed to answer: What are teachers experiencing with generative AI in their classrooms?
In fall 2024, EdSurge researchers facilitated discussions among 17 teachers of grades three through 12 from around the world. Some of these teachers designed and delivered their own lesson plans, teaching either with or about AI.
Overall, our participants’ responses reflect a few major themes, with the most prominent sentiment being an air of indifference. In particular, one participant, a fourth-grade math teacher, attempted to use generative AI in her instruction. Before adopting it, however, she asked how AI could help her elementary students learn math. Her question captured what several participants were thinking, aligning with 2024 data from the Pew Research Center showing that educators were split on whether student AI use was more harmful than helpful.
A Technology Arriving Faster Than Schools Can Unpack
A high school computer science teacher from Georgia describes her fears about generative AI’s widespread push into classrooms:
One of my biggest fears is actually Arthur C. Clarke’s rule: any sufficiently advanced technology is indistinguishable from magic… we have students, parents, and teachers looking at AI as if it’s magic.
A high school library media specialist from New York described the same tension from a different angle:
There’s a fear about not being able to keep up with how things progress… the new tools and the impact [they have] on education.
Schools typically adopt new technologies through deliberate cycles of experimentation, professional development and evaluation. Generative AI has entered classrooms through a different pathway. Consumer tools became available to teachers and students simultaneously, often before schools had developed policies or instructional frameworks for using them.
The result is a situation in which educators encounter the technology while they are still trying to understand its implications.
Where AI Is Already Providing Value
In conversations with teachers, one pattern appears consistently, and it is a classic case of users finding their own fit for a tool: the most immediate use cases for generative AI have little to do with student learning. Instead, they center on workload, as an engineering and computer science teacher in New Jersey explained:
I have a running discussion with some of my colleagues about how to use AI to lesson plan. I use it routinely to lesson plan. I don’t really use the lessons, but we have to produce all this stuff for admin that no one reads… AI will just roll it off.
Another teacher described similar experimentation among colleagues:
It’s really great that so many people have kind of scratched the surface and are using it to support their productivity and efficiency… lesson planning and newsletters and stuff like that.
These examples reflect a pattern seen across many professions: Generative AI is particularly effective at drafting, summarizing and generating text. In contexts where professionals face time pressure and administrative demands, those capabilities can be immediately useful.
Teachers experience those same pressures. Beyond instruction, many juggle grading, lesson planning, parent communication, extracurricular supervision and administrative reporting. In that environment, a chatbot that helps compress routine tasks can feel genuinely helpful.
Recent research, as well as national survey data from RAND’s American Educator Panels, suggests that teachers are adopting generative AI primarily as a productivity tool rather than a core instructional technology, a pattern that mirrors how educators in this study described their own early experimentation.
However, easing a teacher’s administrative workload is a different matter from exercising instructional judgment in the classroom.
The Instructional Use Case Remains Unclear
When teachers consider introducing AI tools to students during class time, the calculation changes. The relevant question becomes: What student learning problem does this tool solve? Many educators are still trying to answer that question, even after several years of exposure to generative AI in some capacity.
Some teachers are experimenting with AI in limited ways, such as using it as a revision partner in writing. A science teacher from Guam said:
Students write a first draft and then feed it into ChatGPT for a second draft… but I push them not to use it for research.
Others are designing lessons where the technology itself becomes the subject of inquiry. A high school special education teacher in New York shared how she removes the veil from the magic of chatbots:
We purposely trained [a chatbot] wrong, so students could understand the data is only as good as how and who trains it.
Learning science research suggests that students benefit most when technology supports reflection and revision, rather than replacing the productive struggle of critical thinking and problem solving, a principle that many teachers in this study have applied. In these cases, AI becomes a tool that students analyze and critique. The participants do not treat AI as a source of authoritative knowledge.
AI Literacy as a Practical Classroom Entry Point
Many teachers see the most promising instructional opportunity in AI literacy, as it may feel most appropriate to teach students about the tools they’re hearing about and encountering daily. International guidance from the United Nations Educational, Scientific and Cultural Organization (UNESCO) and the Organisation for Economic Co-operation and Development (OECD) increasingly frames AI literacy as a foundational skill for students, encouraging schools to help young people understand how algorithmic systems generate information, rather than incorporating AI tools into everyday classroom tasks.
Students already live in environments shaped by algorithmically designed systems, from social media feeds to recommendation engines. Generative AI introduces another layer to that ecosystem.
An elementary teacher from New York state describes focusing on helping students understand how these systems produce information and where they fail:
For me it starts with literacy — [teaching] students how to prompt, and then how to fact-check the information that’s generated to make sure there’s no bias in it.
A middle school teacher from New York uses simple analogies to illustrate how machine learning systems work:
We used an exercise about making the best peanut butter and jelly sandwich. The ingredients were the dataset, the procedure was the algorithm, and the output depended on how it was designed.
These lessons treat AI less as a productivity tool and more as a window into how digital systems generate knowledge.
Hallucinations, Bias and the Question of Trust
Teachers also raised consistent concerns about the reliability of generative AI outputs. An elementary library media specialist from New York said:
You ask ChatGPT to write a paper on something and it makes something up totally imaginary.
To illustrate the risks, some educators point to real-world examples. A high school French teacher shared:
I tried ChatGPT. I think it’s very useful if you know your content very well. If you don’t know your content, it’s hard to tell whether or not it’s accurate.
Others connect these issues to broader discussions about algorithmic bias and worry that students will become reliant on these tools. A high school computer science teacher in New Jersey, who works at a school with large populations of African American, Latino and Black newcomer families from African and Caribbean countries, shared her concerns:
When we talk about bias, we look at hiring data and incarceration data… and facial recognition systems where error rates vary depending on who the system is trying to recognize.
In these contexts, AI becomes less a tool for answering questions and more a case study of how technological systems shape information.
The “Air of Indifference”
Taken together, these conversations reveal a stance that is not often captured in public discussions of AI in schools. What initially seemed like a minor undercurrent in teachers’ discussions about AI turned out to be a prominent theme, one aligned with both existing and emerging research.
By and large, teachers are not rejecting the technology. But they are also not reorganizing their classrooms around AI.
Instead, many are adopting a posture that might be described as pragmatic indifference:
“I use it for lesson planning… but I don’t really use the lessons.”
“I push students not to use it for research.”
In other words, teachers are using AI where it clearly saves time while maintaining boundaries around core learning tasks. This posture reflects professional judgment, rather than resistance to inevitable technological innovation.
Schools exist partly to create conditions in which students practice complex cognitive work, such as deep reading, methodical writing, reasoning through problems and evaluating evidence. If a tool primarily reduces the need to perform that work, teachers have reason to question whether it advances or undermines learning.
And that brings us back to the fourth-grade teacher’s question: What can I actually use this for in math?
If the instructional use case for AI remains unclear, what should students be learning instead?
That question leads to a deeper conversation about the kinds of skills that remain valuable even as technologies change.
