Schools are suffused with screens, and the backlash has begun. Dozens of states are banning phones even as district leaders champion new action plans for AI. What’s the right way forward? In his new book, The Digital Delusion: How Classroom Technology Harms Our Kids’ Learning—And How to Help Them Thrive Again, neuroscientist Jared Cooney Horvath dives into the heart of the debate. A specialist in brain development, Horvath is a former K-12 teacher who’s worked at Harvard University, Harvard Medical School, and the University of Melbourne. Earlier this year, he testified before the U.S. Senate on the problems with ed tech; his testimony has already been viewed more than 2 million times. I wanted to hear more about his provocative critique. Here’s what he had to say.
—Rick
Rick: Jared, you started out as a teacher. How’d you wind up in neuroscience?
Jared: I started teaching in the early 2000s, during what was often called the “decade of the brain.” At the time, popular science books and educational programs were all talking about “brain-based learning.” To a young teacher, that sounded like the next inevitable step for improving my practice—so I decided to dive deep, learn about the brain, and attempt to bring that back to the classroom. That led me to complete a Ph.D. in cognitive neuroscience at the University of Melbourne in Australia and to spend the next 10 years in academia before writing my book.
Rick: When it comes to neuroscience and ed tech, what’s the one big thing educators need to know?
Jared: That human beings are not just brains. Of course, the brain is important, but it isn’t entirely who we are. You are no more your brain than you are your heart, your lungs, your spleen, or your toes. You are all of these systems working together, but you are not located in any single one of them. Once we recognize that humans are embodied beings and not just central processors, then the more human aspects of learning suddenly make much more sense. Things like relationships, emotions, and context are as fundamental to learning as gray matter.
Rick: In The Digital Delusion, you write, “Our children are less cognitively capable than we were at their age.” That’s one heck of a statement. Can you explain?
Jared: The scientific measurement of cognitive skills—things like memory, attention, and executive function—really took off around the start of the 20th century. From that point forward, performance on these measures steadily increased decade after decade. Psychologists summarized this trend using IQ scores, and, on average, each generation gained about 6 IQ points compared with the one before. That was known as the Flynn effect. In the last 15 years or so, however, the Flynn effect has reversed. Across many measures—from literacy and numeracy to basic working memory and attention—performance has declined among the youngest generation. Although there are likely several contributing factors, when you look closely at the timing of the reversal, one development stands out: the widespread integration of digital technologies into nearly every aspect of young people’s lives, including school, in the early 2010s.
Rick: You report that, on PISA assessments, students who use computers more than six hours a day score 66 points lower than nonusers. How big a deal is that? And what do you think is going on?
Jared: On the PISA scale, 66 points is a very large difference, roughly two-thirds of a standard deviation. In practical terms, that’s equivalent to dropping from the 50th to the 24th percentile, or roughly two letter grades. What’s particularly striking is that this pattern holds across every country tested by PISA. My interpretation is that we’ve gradually outsourced significant parts of thinking and learning to digital tools. When adults rely on Google Maps for navigation, their navigational skills fade. The same principle applies in education: When students rely on software to generate, process, and organize information, their cognitive skills and learning begin to fade as well.
Rick: How much time are American students actually spending on screens in school?
Jared: It depends somewhat on the survey you look at, but most suggest that more than half of students spend between one and four hours per day on computers during class time, and roughly a quarter spend more than four hours per day on screens while at school.
Rick: For your book, you synthesized 398 meta-analyses covering more than 21,000 studies and found that ed tech has an effect size of +0.29 standard deviations on student learning. That means ed tech is good for learning, no?
Jared: In most fields, the answer would be yes. In education research, not really. Across more than 350,000 effect sizes reported in the educational literature prior to 2024, over 95% were positive. In other words, basically every intervention appears to help students learn. To account for this, education researchers don’t simply ask whether something had a positive effect. We compare the effect size to the amount of learning students would typically achieve over time, regardless of the intervention. That benchmark falls somewhere between 0.40 and 0.50. So, when we look at the average impact of ed tech across thousands of studies—about 0.29—it falls below what we would expect from an intervention truly worth scaling across classrooms.
Rick: In the studies you examined, were there any interventions you deemed beneficial?
Jared: There were two contexts in which ed tech appears to show meaningful benefits. The first is intelligent tutoring systems, where students repeatedly practice adaptive questions to strengthen specific skills. The second is remediation for students with learning disorders, where similarly structured practice can help build foundational abilities. In both instances, the benefit comes from focused, repetitive practice. But these findings come with two important caveats. First, these approaches tend to work only in narrow, well-defined domains—usually subjects with clear right and wrong answers. Second, skills developed on these programs do not always transfer easily into the real world. Thus, teachers still need to help students practice them outside the digital environment.
Rick: What do we know about one-to-one devices, in particular? Are they a good investment?
Jared: Not according to the data I’ve seen. In fact, one-to-one laptop programs tend to produce some of the weakest results in education research, with meta-analyses reporting average effect sizes around 0.12. There are many reasons for this, but one of the most obvious is that computers are distracting. When you give someone constant access to the internet, it’s very difficult for them to stay focused on the task at hand. We certainly wouldn’t expect most adults to maintain perfect discipline under these conditions, as anyone who has watched colleagues mindlessly scroll during meetings can attest. So, it’s unrealistic to assume children in school would be able to.
Rick: There’s been much discussion of screen-based vs. paper-based reading and note-taking. Any insight into why it might matter?
Jared: The advantages of paper-based reading, especially for informational or expository text, are incredibly strong, and we see similar patterns with handwritten note-taking. Paper and pencil simply align much better than screens do with the way human cognition processes, organizes, and remembers information. I won’t dive into the neuroscience details here, but this pattern is unlikely to disappear so long as human biology remains the same. From a practical standpoint, educators should consider using physical textbooks and notebooks whenever possible. If schools changed nothing else except shifting reading and note-taking back to paper, they would likely see rapid and meaningful improvements in how students think and learn.
Rick: In the book, you point to several reasons why tech could hinder learning, including the role of attention, empathy, and transfer. Can you explain?
Jared: We don’t have the space here to walk through all the mechanisms, but those three elements—attention, empathy, and transfer—are central to deep human learning. Unfortunately, digital technologies tend to undermine each of them: They fragment attention, remove many of the biological cues for empathy, and narrow learning to manufactured contexts that don’t easily transfer beyond the screen. Human learning systems have evolved over hundreds of thousands of years to learn from other human beings, and that hasn’t fundamentally changed in the last decade or two. We can try to work around these limitations, but the learning that occurs without strong attention, social resonance, and transfer tends to be shallower and less durable.
Rick: There’s been a lot of enthusiasm in ed-tech circles about the promise of gamification. You’re dubious. Why?
Jared: Engagement is not the same thing as learning. Some level of engagement is necessary for learning, but it isn’t the goal itself. Gamification gets this backward. It prioritizes engagement by drawing attention toward elements of the game. Students become focused on points, badges, and game mechanics rather than the underlying concepts. Many readers of a certain age likely remember how to play the game Oregon Trail, yet many of us struggle to remember much about the historical event itself. The game captured attention, but it did so by directing attention toward the mechanics of the game rather than the historical content.
Rick: In some quarters, there’s immense optimism when it comes to AI tutors. What’s your take?
Jared: In very narrow domains, AI tutors might be useful, so long as teachers explicitly help students transfer skills into real-world contexts. But we should be realistic about the comparison. Can “smart” AI tutors outperform simpler, “dumb” digital tutors? Given issues like hallucinations, excessive support, and unreliable training, early evidence suggests not yet. Which leads to a better question: Can “dumb” digital tutors outperform a skilled human teacher? On that front, the evidence isn’t even close. The answer is no. So, if we’re deciding where to invest our time, money, and resources, I know where I’d place my bet.
Rick: We frequently hear that “AI changes everything.” Given that, do you think the findings you cite from past ed tech will still apply in the age of AI?
Jared: The first meta-analysis of digital technology in education, published in 1977, reported an effect size of about 0.29. Nearly fifty years and tens of thousands of additional studies later, that average effect size is still about 0.29. Think about what’s changed since 1977: personal computing, the internet, cloud computing. If all those developments didn’t significantly move the needle, it suggests the limiting factor may be human learning itself. If it’s the case that our cognitive systems have certain biological characteristics that shape how learning works, then simply introducing more advanced versions of the same tools is unlikely to fundamentally change the equation.
Rick: A high school principal might say, “I hear you, but we have to prepare students for the workforce. That means being ready to use AI.” What do you think?
Jared: My undergraduate degree was in film production. We weren’t allowed to touch any film equipment until our junior year. Naturally, we complained. One of our instructors explained, “An idiot with a camera is still an idiot.” What he meant was that tools don’t create expertise. The same principle applies to AI. A better strategy is to focus on helping students think, learn, and build deep knowledge. Once they develop expertise, they can bring something meaningful to tools like AI and use them well. In the end, a tool is only as powerful as the knowledge you bring to it. I know how to swing a hammer, but I don’t understand construction or structural design. For this reason, a skilled carpenter will always do more with that hammer than I can—not because they know the tool better but because they know the craft better.
Rick: If you had one piece of advice for educators and educational leaders when it comes to ed tech, what would it be?
Jared: Buy a printer.
This conversation has been edited for length and clarity.
