I will never forget the day when I first began thinking in French. I was driving south on Pacific Coast Highway in Long Beach, mentally reviewing the work I needed to complete that afternoon as a research assistant in my university’s French Department.
It took me a moment to realize that, for the first time in the six years I had been studying the language, I was no longer translating into English in my head. I was thinking in French. I celebrated the moment: after six long years, I was finally, and belatedly, fluent.
That moment has particular relevance for me today in my work on the use of AI in education. I write about the topic multiple times per week, consult with various startups in the U.S., advise several education organizations in China, and experiment with AI tools for most of the day.
Surprisingly, despite my immersion in the topic, clarity remains elusive, especially when it comes to coherent policy in schools. Part of the problem is our inability to define what constitutes AI literacy; definitions seem to change monthly as the technology unleashes new functionality. I also struggle to articulate the difference between AI literacy and AI fluency, the label of choice at recent ed tech conferences.
Join me in a brief review of the dominant AI literacy frameworks. We’ll use that analysis to determine whether we need to shift our policy and practice toward developing AI fluency, or whether we can stay the course with AI literacy.
The Dominant AI Literacy Frameworks
Globally, the OECD’s “Empowering Learners for the Age of AI: An AI Literacy Framework for Primary and Secondary Education” dominates this topic. It serves as the basis for PISA 2029’s Media & AI Literacy assessment, which is classified as a standard PISA “innovative domain,” a designation that signals full international implementation rather than a small-scale pilot. Neither the U.S. nor China has confirmed its participation yet.
The OECD organizes AI literacy into four interconnected domains (using, understanding, creating with, and reflecting on AI), with a strong emphasis on ethics and on cross-curricular integration with subject-area links.
In the U.S., Digital Promise’s AI Literacy Framework defines AI literacy in terms of AI Literacy Practices, Core Values, Modes of Engagement, and Types of Use (e.g., Understand, Evaluate, Use; Interact, Create, Problem Solve), centering human judgment and justice.
This framework is designed for practical use in K–12, complete with “look fors”: actionable practices and implementation strategies.
AI4K12 developed the “Five Big Ideas in AI” in partnership with the Association for the Advancement of Artificial Intelligence and the Computer Science Teachers Association. It is not a full “literacy” framework in the OECD sense but is globally influential in K–12. The Five Big Ideas (Perception; Representation & Reasoning; Learning; Natural Interaction; Societal Impact) are in fact explicitly referenced as a source for the OECD framework.
And finally, SRI’s “Promoting AI Literacy in K–12: Components, Challenges, and Opportunities” is also highly regarded. It is a research‑based synthesis that identifies three interrelated areas of knowledge as “pillars” of K–12 AI literacy. SRI offers a developmental learning progression that is widely cited in policy and framework efforts.
Most of the major K–12 AI literacy frameworks I’ve examined use the term “AI literacy” as their core construct and do not formally define or operationalize a separate construct of “AI fluency.”
Fluency vs. Literacy
Researchers studying this topic propose an explicit distinction: AI literacy = understanding and evaluating AI; AI fluency = a higher‑order competency built on literacy that emphasizes creation, innovation, and adaptation with AI.
Proponents of the distinction argue that using generative AI to create new work is the defining characteristic of fluency. This echoes prior conceptions of Information and Communications Technology fluency that focus on production.
Research by Rogers and Carbonaro defines AI fluency as “moving from understanding to creating” with AI, and identifies “creation” as the defining characteristic of fluency. Their work directly parallels language frameworks where fluency is associated with productive, generative skills (speaking, writing) and flexible communicative use.
None of the frameworks above explicitly draws this distinction. However, if you examine the descriptions above, you will notice that both OECD and Digital Promise list creation as a primary component of AI literacy.
This research debate is gaining traction outside the confines of academia. Both the ed policy and instructional practitioner crowds have caught wind of the distinction between AI literacy and fluency. Commentators have begun to describe the relationship as a progression rather than competing concepts.
This discussion reminds me of an earlier take (pre-AI) on the importance of developing a generation of students who become producers of content rather than passive consumers.
MediaSmarts’ Digital Literacy framework (2015) stated that “Making and Remixing skills enable students to make media and use existing content for their own purposes.” NAMLE’s Core Principles of Media Literacy Education (2007) stated that “Media literacy education expands the concept of literacy (i.e., reading and writing) to include all forms of media and integrates multiple literacies in developing mindful media consumers and creators.”
So what is our goal in the development of AI literacy? Do we want to develop a generation of learners who understand how AI works and are able to use the tools? Or, do we want to develop learners who can use their AI knowledge and skills to create ethical and effective content that benefits both themselves and their community?
Final Thoughts
The most recent survey data indicates that around 12% of the U.S. workforce uses AI to complete their duties. I think that statistic points clearly to the idea that we are in the AI literacy stage of usage. Despite eye-popping metrics (800 million weekly users of ChatGPT), it seems that people are still trying to figure out the basics; otherwise, a far higher percentage of workers would be using AI in their jobs. Remember our earlier definition of AI fluency: a higher-order competency that emphasizes creation, innovation, and adaptation.
The usage data is linked, at least in the popular press, with the oft-repeated mantra that you will not lose your job to AI; you will lose it to a human who knows how to use AI. I will extend that thought to a related conclusion: You will certainly lose your job to a competitor who is fluent in AI usage.
Surveys also indicate that the mainstream student pattern of AI usage is still heavily text‑centric — explanations, summaries, brainstorming, and writing support — with more creative, multimodal “producer” uses starting to appear but not yet dominant in the data. Fluency awaits.
I am, as you can tell, placing my money on the folks who view AI literacy as the first stage in a progression to AI fluency. It is a necessary step, but a first step.
Once again, I find myself advocating that we rethink our early approach, as embodied in the AI literacy frameworks cited above. I suggest we adopt a tried-and-true educational model that already governs our thinking about curriculum frameworks: a scope and sequence. The final outcome of that scope and sequence should be AI fluency.
It took me six years to become fluent in French. AI will not be so patient; we have far less time to develop fluency in our learners.
