Three years ago, schools took a side.
Within weeks of ChatGPT’s release, hard rules appeared. AI tools were banned across entire departments and districts. Teachers watched what seemed like an existential threat materialize in real time, and they responded the way institutions usually do under pressure: They drew a line and told everyone not to cross it.
Three years later, that line is still there. And in many places, nobody ever asked whether it should be, at least not the people most affected by it.
When I looked into how my Austin, Texas, high school’s AI policy was developed, I found that my administrators made the decision internally. There was no student committee, no open forum, no campuswide survey. The rulebook was simply handed down. In K–12 education, only Ohio and Tennessee require districts to develop and publish AI policies; even where policies exist, they’re often written without meaningful input from key stakeholders, including students themselves.
It’s reasonable to counter that students are minors, that institutions need coherent governance and that not all decisions can go to a committee. But AI policy isn’t a routine curriculum adjustment. It governs what tools students are allowed to use to think, draft, research and communicate — tools that increasingly shape how knowledge is produced and evaluated outside school. Getting those rules wrong carries real consequences for students.
Brittany Carr’s situation is a well-known example. In early 2023, as NBC News reported, the Liberty University student and military veteran had three assignments flagged by an AI detector. She provided her revision history and explained her writing process for deeply personal essays about her cancer diagnosis, her depression and her recovery. It wasn’t enough. Fearing that a second accusation could cost her financial aid, she began running every essay through an AI detector herself, rewriting any sentence it marked until her voice felt flattened and unfamiliar. By the end of the semester, she had left the university.
Carr is not alone. The same investigation found that students across the country deliberately simplified their vocabulary and avoided complex sentence patterns — not to write better, but to write less like themselves. Creative writing assignments exist to help students find their voice, which they can’t do while writing in fear of an algorithm. Carr’s case shows a student reshaping her writing, and ultimately her education, around a software system she had no role in approving, under a policy she had no voice in developing.
Student involvement would not have guaranteed a different outcome in Carr’s case. But it might have changed the structure that enabled it. Students could have raised concerns about relying on automated detectors without corroborating evidence. They could have described how fear of false accusations pushes students toward simpler vocabulary, safer syntax and less intellectual risk. They could have asked what procedural protections exist before a software flag becomes an academic charge.
Instead, at many institutions, enforcement architecture was built first. Conversation came later, if at all.
It doesn’t have to work this way. In Los Altos, California, high school students did more than sit in on policy meetings — they designed and ran community workshops, facilitated discussions between sixth graders and administrators, and built an AI chatbot to help other districts draft policies.
A 2024 Harvard report found that students overwhelmingly want to be part of decisions about how AI is used in their education — and that many already hold sophisticated views on its risks and potential. The fact that Los Altos made national news tells you how rarely that invitation is extended.
But there is a deeper reason students belong in these conversations: We know something policymakers don’t.
At my high school, I’ve witnessed — and experienced — a secret loop in the learning process: we use large language models like ChatGPT and Claude to genuinely improve learning by untangling concepts, studying for tests and brainstorming ideas.
A few days ago, a student asked a question about a formula in my AP Physics C class — and nobody knew the answer. Another student opened his laptop and asked Claude, and after a few minutes of back-and-forth, we had worked out the answer, deepening everyone’s understanding of how circuits work. I used an LLM to compile notes from my Multivariable Calculus class, which helped me study and earn a near-perfect score on my test. My friend used ChatGPT to learn Java syntax for a project — not to write code, but to understand the language.
A Pew Research Center survey found that 54% of U.S. teens now use AI chatbots for schoolwork, with the most common uses being research and brainstorming — not copying and pasting answers. But that message hasn’t reached the people writing the rules. Schools disregard this secret loop entirely, simply because it’s easier to ban the technology outright. The generation that grew up with these tools understands their texture in a way no outside committee can replicate.
These AI policies directly affect students’ outcomes and futures. To exclude them from the conversation is simply undemocratic.
If educational institutions are serious about preparing students for democratic citizenship, that commitment must go beyond coursework and into policy-making. The time to invite students into these critical conversations is now. Will schools treat students as subjects of policy, or as participants in it?