A baby is sleeping under a camera that monitors her breathing. A toddler is asking Alexa to play a song. A 5-year-old is sounding out words through a tablet app that adjusts to her pace. A 7-year-old’s YouTube queue is curated by an algorithm based on his preferences. AI is not arriving in the lives of young children. It is already there.
The assumption that AI primarily affects older children is wrong. For the youngest children, the consequences may be among the greatest, and the least visible. The early years, from birth to age 8, set the foundation for life: brain development is most rapid during this period, and parents/caregivers play an essential role in ensuring young children get the socio-emotional, cognitive, health, nutrition, and other inputs they need. Relationships and human connection between a child and their parent/caregiver and other children are core to what it is to be human and essential for healthy development. AI is often "invisible," yet it still shapes the way a parent/caregiver raises a child, and the child cannot choose, question, consent to, or even recognize the technologies shaping their earliest experiences. This explainer illustrates the AI-embedded products already in young children's lives and outlines what adults should consider before bringing any of these tools into a child's world, whether at home, in preschool, or in childcare.
Common Sense Media’s 2025 Census indicates that children ages zero to 8 average approximately two and a half hours of screen time daily, a figure that rises to nearly three and a half hours for children ages 5 to 8. Research found that daily use of YouTube, which increasingly embeds generative AI, has risen over the last five years from 24% to 35% among children under 2 and from 38% to 51% among children ages 2 to 4. Additionally, about six in ten parents of children 2 to 8 years old report that their kids interact with a voice assistant such as Siri or Alexa, and half say their child does so at least once a day. Research from the Boston Children’s Digital Wellness Lab found that the youngest children are also the most likely to attribute human-like thoughts and emotions to technology. A booming market of AI-enabled baby monitors, smart toys (including plush stuffed animals), and educational apps means even the youngest infants can encounter AI before they can walk or talk.
What does the early childhood AI landscape look like?
New products appear every day, and it can be difficult for parents/caregivers and the early years workforce to know what is available and what could harm or benefit young children. The table below shows major categories of AI-enabled products that young children currently encounter, organized by context. Age ranges reflect manufacturer guidelines and available research and should be read as approximate. In many cases, particularly for infants, the child has no direct interaction with the technology; it operates in their environment, often without parental awareness of its AI features. This list is not comprehensive.
Key considerations for parents and caregivers
Most AI contact is passive and invisible for young children
Many of the most consequential AI encounters in a young child’s life happen without their knowledge: a monitor analyzing their sleep, an algorithm shaping their YouTube recommendations, a platform logging their reading errors. This differs fundamentally from a child consciously choosing to use a tool and raises distinct questions about what data is being collected, by whom, and to what end.
Technology and the market are moving faster than the research
Technology and the market are moving significantly faster than the research that would establish what is safe or beneficial. In September 2025, researchers issued a formal statement cautioning that current evidence is insufficient to determine the effects of AI companion interactions on babies and toddlers. Adults in children’s lives should remember that technology companies’ primary goals are profit and attention, which can drive design choices that are not grounded in developmental science.
Relationships and human connection could be harmed
Several of the products mentioned might seem to make an adult’s or child’s life easier, but whether that convenience comes at the expense of developing human relationships is not yet clear. A smart bassinet, for example, can be helpful for parents, but the effort involved in settling a baby is itself a bonding experience, and that benefit may outweigh what the AI tool offers. Using AI to track developmental milestones and receive parenting advice may seem innocuous, but not knowing where that advice comes from, or which cultural perspective it reflects, is cause for concern. Likewise, a child who focuses on robot toys may develop skewed expectations of, and weaker skills for, play with other children.
AI toys present distinct safety and privacy concerns
A U.S. PIRG investigation of four leading AI toy products found instances of inappropriate content, including explicit sexual topics and guidance on accessing dangerous household objects. A few U.S. senators subsequently wrote to some toy manufacturers demanding answers on safeguards. As Dr. Dana Suskind of the University of Chicago has noted, young children lack the conceptual tools to understand what an AI companion is, making the relational dynamics of these toys particularly difficult to navigate.
Children cannot consent to data collection
Young children have no meaningful ability to understand or agree to the data practices of the products they interact with. U.S. law (COPPA) provides some protections for children under thirteen, but enforcement has been uneven, and AI-specific guidance for the earliest years remains limited.
AI raises equity and access concerns
Most AI products for young children carry subscription costs that place them out of reach for most families globally. As a result, these tools remain accessible mainly to a small share of families, concentrated in urban areas of North America and Europe and available largely in English, creating a landscape where AI shapes child development unequally and raising serious equity concerns.
Why this matters
The tools described in this explainer are not hypothetical. They are in babies’ bedrooms, in preschool classrooms, and on family tablets right now. Understanding the landscape—what these products do, what data they collect, and what the evidence does and does not support—equips parents/caregivers to make decisions suited to their child’s development and well-being and gives the early childhood workforce the foundation to guide the families they support. For policymakers, it points to where guardrails are most urgently needed—from stronger data collection limits and age-appropriate design standards to guidelines for AI tools in early childhood settings—to protect young children’s safety, privacy, and development.
In my work, I will continue tracking new technologies, research, and policy developments in this space, building on existing Brookings research intended for parents/caregivers, the early childhood workforce, and policymakers navigating the fast-moving AI landscape. Our youngest children cannot advocate for themselves—the adults around them must.
