Two new reports caution that if schools make missteps implementing AI, the results could haunt them for years, locking them into a future largely written by big tech instead of those closest to kids.
The reports, both the results of small, intensive gatherings of educators, policymakers, researchers, tech officials and students last year, share a common warning: AI in schools must serve human-centered learning that doesn’t simply push for more efficiency. To do anything else risks creating a generation of young people ill-equipped for the future.
The findings come as young people say they’re turning to generative AI more than ever: A Pew Research Center survey released last week found that more than half of teens ages 13 to 17 use chatbots to search for information or get help with schoolwork. About four in ten report using AI to summarize articles, books or videos, or to create or edit images and videos. And about one in five say they use chatbots to get news.
For the first report, a group of 18 people met in July in Phoenix, brought together by AI for Education, a training and policy organization, and Imagine Learning, a digital curriculum company. The resulting report treats the question of how schools should view AI as a literal “Choose-Your-Own-Adventure” story: The authors lay out three possible scenarios in which educators in an imaginary school district make radically different decisions about the technology.
In the first scenario, the district retreats from AI altogether after a data breach, abandoning a previously created “Innovation Lab,” while teachers return to traditional instruction and testing.
The restrictions soon backfire. Students continue using AI at home but, without guidance, take shortcuts on homework, developing a kind of survival mechanism they privately call “school brain.” Seeing how irrelevant most lessons are, they do just enough to get by, offloading their thinking to AI tools. When tested, they show shallow understanding and poor foundational skills.
Test scores plummet, college acceptances drop and 40% of graduates land on academic probation. Employers report that graduates can neither work independently nor collaborate effectively with AI. Teachers begin departing in waves.
Retreating from AI, the authors find, creates “the worst of both worlds” — students who can neither think independently nor use AI effectively.
In the second scenario, the district, facing competition from AI-driven private schools, goes all-in, adopting a comprehensive, district-wide AI platform for automated instruction. The platform promises greater efficiency via AI tutors, automated grading and behavioral monitoring. And while it initially lowers costs and produces higher test scores, teachers find that students are soon gaming the algorithms rather than learning. The auto-grader penalizes valid but unconventional answers, and multilingual learners are unfairly marked down for non-standard phrasing on tests.
Teachers find themselves defending grades they didn’t assign and can’t fully explain, while families that challenge grades are stopped by “proprietary algorithms” that even administrators can’t review. The system delivers “a black box” that removes human judgment: “Students could feel the difference between being evaluated by an algorithm and being understood by a teacher.”
Before long, graduates struggle with collaboration, creativity and adaptability — skills employers and colleges increasingly value.
In the report’s third choice, the district, via its Innovation Lab, redesigns its offerings to prepare students for an AI-driven future while keeping a focus on “human-centered” education. Rather than focusing solely on technology, it develops a “graduate profile” that emphasizes critical thinking, ethical reasoning and human-AI collaboration, among other indicators.
The lab shifts to flexible, project-based learning, and students soon learn to use AI as a tool that supports but doesn’t replace their thinking. While the district continues to satisfy state accountability through testing, it also pursues federal innovation grants to fund portfolio-based assessment systems based on the graduate profile.
All is not rosy, though. The redesign is expensive and hard on teachers, and enrollment suffers as political resistance gains steam. But graduates soon demonstrate an ability to critically evaluate AI tools, adapt quickly to workplace changes and develop a “learn how to learn” mindset that serves them in the long term.
Alumni soon report that their “robust” portfolios of work are a huge advantage in competitive job markets, and employers say they are the only new hires who critically evaluate AI’s recommendations, spotting hallucinations and biases.
Amanda Bickerstaff, AI for Education’s co-founder and CEO, said the first two scenarios are what educators at the July convening said they were seeing most often in schools.
“There was a strong recognition from everyone, including the students, the two high schoolers, that the traditional methods have not worked … for decades,” she said. “But it feels safer.”
As for going “all in” on AI, she said, that approach is inevitable in many places, given the aggressive efforts of tech giants like Google, which are “pushing into schools” and going directly to students.
“There’s this real pressure from both ed tech and AI itself, because it’s such a big market that’s never really been figured out,” she said.
What makes it worse is that few tech firms employ enough teachers to ensure that their products work well for students. “They don’t have hundreds of education people,” Bickerstaff said. Their education teams are “fractions of their headcount, working on tools that are instantly in students’ hands.”
The third path, in which the district redesigns its offerings, is “the most human” of the three, she said, and the most intentional. “The third path is the one that trusts humans and educators and students and families,” Bickerstaff said.
‘Explicitly ambidextrous’ schooling
Another paper by the Center on Reinventing Public Education, a think tank at Arizona State University, also calls for a new approach to schools’ decisions about AI, saying the technology “should be a catalyst for human-centered learning, not a replacement.”
The CRPE report, the result of another gathering in November, asserts that schools are at a pivotal moment: Their AI policies can either entrench outdated educational models or help bring about a fundamental transformation of schooling.
“One of the big things that came out of those discussions was a strong feeling among the group that AI is currently being thought of as a productivity tool for the education system that we have, rather than a tool to radically improve teaching and learning and outcomes for kids,” said Robin Lake, CRPE’s executive director.
During its meeting, the group repeatedly discussed an “efficiency paradox” that could make schools faster and cheaper without addressing students’ actual needs. To protect against it, they call for a more coherent, human-centered approach that is “explicitly ambidextrous,” improving current practices while intentionally building toward new learning models.
The problem with AI, the report argues, is that it could simply make outdated educational models more efficient. It notes that the Scantron, a time-saving testing technology, for decades reinforced low-level standardized assessments, often at the expense of improved learning.
Instead of using AI as a new kind of Scantron, it says, AI could make way for several innovations, including new assessments that capture real-time performance as students work. It could even measure key non-academic indicators such as belonging, confidence, curiosity and relationship quality.
The report’s idea of an “ambidextrous” approach to AI came from an acknowledgement by the group that “we have to attend to the kids who are in our schools right now — and the teachers,” Lake said. “We have to use whatever technologies are available to make things better, but we also have to make investments in big, really different whole-school designs.”
Those could include not just better assessments but ways to help teachers provide “rigorous personalization grounded in the science of learning.”
Districts could create classrooms with multiple adults working in teams based on their expertise. And AI could enable schools to match students to internships and other experiences, handling administrative tasks so humans can focus on relationships.
Lake said the group that met in November kept coming back to one idea: keeping an eye on both the future of school and the reality of the schools we already have.
“A lot of times when we have these conversations about AI and the future of schooling, it feels very floaty and abstract,” she said. “So I really appreciated that the fellows had a vision to connect the here-and-now to what kids need to know and [should] be able to do in the future. That feels really important for us all right now.”
