Dive Brief:
- House lawmakers shared bipartisan concerns over the risks of students using artificial intelligence — from overreliance on the technology to security of student data — during a House Committee on Education and Workforce hearing on Wednesday.
- Whether and how AI should be regulated and safeguarded in K-12 classrooms at the federal level continues to stir debate. Democrats at the hearing said more guardrails are necessary, but the Trump administration has made adding them harder through executive orders aimed at blocking state-level regulations and through its efforts to dismantle the U.S. Department of Education.
- Republicans, however, cautioned against rushing new regulations on AI to make sure innovation in education and the workforce isn’t stymied.
Dive Insight:
The House hearing — the first in a series to be held by the Education and Workforce Committee — came a month after President Donald Trump signed an executive order calling for the preemption of state laws regulating AI with exceptions for child safety protections.
During the hearing’s opening remarks, the committee’s ranking member, Rep. Bobby Scott, D-Va., said Congress should not stand idly by while the Trump administration “may be ingratiating itself to big tech CEOs and preventing states from protecting Americans against” the dangers of AI.
Instead, Scott said, Congress should take an active role in developing thoughtful regulations to “balance protecting students, workers and families” while also “fostering economic growth.”
The ability to study and regulate AI’s impacts on education has been hindered under the Trump administration, Scott added, through the shuttering of the Education Department’s Office of Educational Technology, federal funding cuts at the Institute of Education Sciences, and attempts to significantly reduce staffing at the Office for Civil Rights.
At the same time, the Trump administration is strongly encouraging schools to integrate AI tools in the classroom. Committee Chair Tim Walberg, R-Mich., praised the administration’s initiatives to support AI innovation in his opening statement.
For Congress, Walberg said, “the goal should not be to rush into sweeping new rules and regulations, but to ensure schools, employers and training providers can keep pace with innovation while maintaining trust and prioritizing safety and privacy.”
Some hearing witnesses also called for more transparency and guardrails for ed tech companies that roll out AI tools for students.
Because many ed tech products lack transparency about their AI models, it’s more difficult for teachers and school administrators to make informed decisions about which AI tools to use in the classroom, said Alexandra Reeve Givens, president and CEO at the Center for Democracy & Technology.
Key questions these companies should answer publicly, but typically won’t disclose, she said, include whether their tools are grounded in learning science and whether the tools have been tested for bias. “Do they have appropriate guardrails for use by young people? What are their security and privacy protections?” Reeve Givens asked.
Adeel Khan, founder and CEO of MagicSchool AI, also said in his testimony that shared standards and guardrails for AI tools in the classroom are necessary to protect students and understand which tools actually work.
While AI education policy is primarily driven by state and local initiatives, Khan said that “the most constructive federal role is to support capacity and protections for children while investing in educator training, evidence building, procurement guidance and funding so districts can adopt responsibly.”
The Brookings Institution also released a report Wednesday on AI in K-12 schools based on its analysis of over 400 research articles and hundreds of interviews with education stakeholders.
The institution’s report warns that AI’s risks currently outweigh its benefits for students. AI can threaten students in cognitive, emotional and social ways, the report said.
To mitigate those risks, Brookings recommends a framework for K-12 schools as they continue to implement AI:
- Teachers and students should be trained on when to instruct and learn with and without AI. The technology should also be used with evidence-based practices that engage students in deeper learning.
- There needs to be holistic AI literacy that develops an understanding about AI’s capabilities, limitations and implications. Educators must also have robust professional development to use AI, and there should be clear plans for ethically using the technology while expanding equitable access to those tools in school communities.
- Technology companies, governments, education systems, teachers and parents need to promote ethical and trustworthy design in AI tools as well as responsible regulatory frameworks to protect students.
