New York City’s Education Department unveiled its preliminary guidelines for artificial intelligence use, offering a rough road map for whether and when to incorporate AI tools in schools.
The guidance, released Tuesday, arrives nearly three years after a short-lived school ban on ChatGPT. It also comes in the midst of ongoing debates about student privacy, AI’s effect on student learning and development, and the role of private companies in schools. Some schools had designed their own policies as they awaited citywide guidance.
Hot-button issues, like whether and how students can use AI for homework assignments, or whether students can use personal AI chatbot accounts in addition to tools approved and supervised by the Education Department, are still being hashed out.
City officials are asking families and educators for feedback, which will inform future versions of the guidance. The Education Department released a short survey and will also host webinars and events to answer questions and gather feedback through May 8.
“AI is here, and our responsibility is to put strong systemwide safeguards in place,” schools Chancellor Kamar Samuels wrote in an email to parents.
The early framework is structured in a “traffic light” approach: green light for approved uses, red light for prohibited cases, and yellow light cases for gray areas, which require significant oversight.
For example, brainstorming lesson plans and drafting non-critical communications fall under “green light” cases.
In “yellow light” cases, schools can use AI to find trends in student data, generate translations for bilingual learners, or adapt materials for students with disabilities, but a trained professional must review the outputs before they are used with students.
Using AI to make decisions about students, including grading, development of special education and 504 plans, discipline, counseling and crisis intervention, and other academic placement decisions, is strictly forbidden. These “red light” cases are not expected to change in the final playbook the city aims to release in June.
Pushback has already been fierce among parents and education advocacy groups: A petition asking the city to put a two-year pause on AI use in schools has garnered about 1,500 signatures since October. Several Community Education Councils have also passed resolutions calling for a moratorium on AI in schools.
The guidance was written by the Education Department’s AI Task Force, and informed by the city’s external AI Advisory Council, which includes education technology partners from Google, OpenAI, and other companies hoping to win contracts to serve the city’s roughly 800,000 K-12 students.
Questions remain about student privacy and third-party AI contracts
Before schools can use AI tools in the classroom, each product must go through a data privacy and security vetting process called the Enterprise Request Management Application. The process, created in 2023, applies to all third-party technology vendors.
But AI has become ubiquitous. The Education Department’s contract for Microsoft 365 programs did not originally include AI chatbots but now does, said Naveed Hasan, a member of the Education Department’s Data Privacy Working Group.
“Just like TikTok was unregulated until school networks blocked it, so are these free AI products,” said Hasan, whose group advised on data privacy policies prior to the AI guidance.
Schools can visit the department’s Ed Tech portal to see if a tool has already been approved; otherwise, schools must submit an application for new use.
The process, however, doesn’t yet include guidelines on how to review certain aspects of AI products, such as algorithmic bias or instructional effectiveness. Those are expected to be included in the final June version of the playbook.
The guidelines, which were shaped by federal and local laws, say personal student information can never be entered into unapproved AI tools, and under no circumstances can student information be used to make money or train AI models.
Although the guidance’s general commitment to privacy protection is clear, how to ensure privacy actually stays protected in every use remains an open question, according to some close to the policy’s development.
Hasan said the guidance alone can’t guarantee privacy and relying on third-party products, even approved ones, makes it difficult to know what’s secure and what’s not.
He has called on the Education Department to consider maintaining its own hardware and training its own group of AI experts instead of relying on outside companies.
AI moratorium advocates push back
The Parent Coalition for Student Privacy, one of the groups on the AI moratorium committee, said in a response Tuesday that the guidance does not address the potential long-term effects of AI use on learning and thinking.
The city has already accepted that AI will be a part of school learning before proving its value and safety for students, said Kelly Clancy, founder of Parents for AI Caution, another group on the committee.
“The city needs to have a burden of proof about why this is good,” Clancy said. “It shouldn’t just be about harm reduction, but rather why AI is better for my kids than a human-centered, traditional classroom.”
Education Department officials said proposals for new, AI-focused schools and programs — like Next Generation Technology, an “AI-focused” high school pending approval — must demonstrate how they align with the guidance’s principles.
Chalkbeat is a nonprofit news site covering educational change in public schools.
