Artificial intelligence has already entered K–12 classrooms, whether schools are ready for it or not. From lesson planning and grading to essay writing and research, AI tools are changing how teachers teach and how students learn. Some educators see endless possibilities for innovation, while others worry that these same tools could weaken students’ ability to think critically, write clearly, and solve problems independently.
As an educator who trains future teachers (Pedro) and one who works with community college students transitioning from high school (Enrique), we find ourselves both hopeful and uneasy about AI’s potential. Uncle and nephew, members of two generations, we are bullish and bearish at the same time, and we believe this tension reflects where K–12 education stands right now.
The Skeptic’s View (Pedro, the uncle)
AI may be impressive, but we risk letting it replace the very cognitive and social skills that schools are meant to cultivate. Many teachers already report that students use ChatGPT to finish assignments without reading the material or developing their own ideas. Thinking, drafting, and revising are essential parts of learning, and these students are skipping them entirely.
New research from the University of Southern California’s Center for Generative AI and Society suggests that convenience-driven technologies like generative AI can erode essential skills. Many people can’t navigate without GPS or do even simple arithmetic in their heads. Why assume AI will be different?
AI promises efficiency, but in education, efficiency can come at a cost. When students rely on algorithms to generate answers, they lose the opportunity to wrestle with ideas, make mistakes, and build understanding through effort. Teachers know that deep learning often happens in moments of confusion or struggle.
That’s why schools should move cautiously before fully integrating AI tools. Instead of banning them outright or embracing them wholesale, educators should design assessments that ensure students can demonstrate original thought. Oral presentations, Socratic seminars, and project-based learning allow teachers to see what students know and how they think. These methods preserve the human elements of curiosity, originality, and critical reasoning—qualities no machine can replicate.
If we’re not intentional, the “efficiency” AI offers could hollow out the learning process itself.
The Optimist’s View (Enrique, the nephew)
I share the concern about shortcuts, but I’ve also seen how AI can enhance learning when used thoughtfully. In my research on community college faculty use of generative AI, I found that teachers who frame AI as a “cognitive companion” rather than a replacement see meaningful gains in student engagement and reflection. I believe that insight applies across grade levels.
For example, some teachers ask students to write an essay draft, use AI to revise it, and then submit a reflection explaining what changed and why. Others have students use AI to brainstorm ideas, generate questions, or fact-check responses. These strategies don’t weaken critical thinking; they strengthen it. Students learn to analyze, critique, and improve their work while developing awareness of how AI tools function.
What makes this moment promising is that many K–12 teachers are already experimenting on their own. They’re discovering ways to integrate AI to personalize instruction, support English learners, and provide real-time feedback. But most are doing this without much institutional guidance or professional development.
That’s why I’ve developed a framework I call AI Pedagogical Literacy, a practical approach that helps educators understand how to integrate AI responsibly. It’s not about teaching students how to use AI tools; it’s about helping teachers design learning experiences where AI amplifies, rather than replaces, human thought. This means knowing when to use AI, how to verify its output, and how to keep human reasoning at the center of every task.
Rather than fearing AI, we should prepare teachers and students to use it wisely. The real danger isn’t the technology itself; it’s a lack of guidance and support.
Finding Common Ground
AI is here to stay, and ignoring it won’t make it go away. But integrating it into the classroom without creating guardrails could lead to a nightmare scenario where teachers defer to technology and students stop thinking for themselves. The path forward lies between fear and blind enthusiasm.
We agree on several steps K–12 schools should take right now:
- Invest in teacher training. Every educator should understand how AI works, its limitations, and its ethical implications. Teachers who are comfortable with AI will be better positioned to guide students in using it responsibly.
- Design authentic assessments. When assignments require oral defense, collaboration, or reflection, students must engage deeply with the material—AI can’t do that for them. Research suggests that assessment designs emphasizing higher-order thinking (analysis, evaluation, creation) are more resistant to AI misuse than traditional recall tasks.
- Teach dual literacies. Students need both traditional skills, like writing and problem-solving, and new literacies, like verifying sources, detecting bias, and prompting AI effectively.
- Foster collaboration between educators. Schools should create spaces for teachers to share AI-integrated lesson plans and discuss what works, avoiding fragmented experimentation.
We may differ in how quickly schools should adopt AI, but we share a conviction that educators, not algorithms, must determine what learning should look like in an AI age.
AI’s promise and peril are two sides of the same coin. Used carelessly, it can erode student effort and grit. Used wisely, it can open new pathways for creativity and deeper understanding. The question for K–12 educators is not whether to use AI but how to ensure it strengthens rather than diminishes what makes learning human.
