There is a photograph of a three-year-old girl, taken at her birthday party in 2019, that her mother posted to Instagram with the caption "my whole world 🎂." In the background of that image, barely visible, is an Amazon Echo on the kitchen counter. There is a smart TV in the living room behind the party guests. The girl's pediatrician had recently begun recommending an app to track her developmental milestones. The toys in the corner of the frame included two that connected to Wi-Fi.
By the time that child is old enough to hold a smartphone, she will have spent most of her conscious life in the presence of systems that were recording, inferring, and responding to her. She will not know what those systems decided about her. Neither will her mother.
We are raising the first generation of children who have never known a world without artificial intelligence — and most parents are only beginning to reckon with what that means.
The Third Parent
For much of human history, raising a child meant managing a relatively legible set of influences: family, school, neighbors, friends, the occasional television set. Parents could observe, intervene, redirect. The transmission of values was slow and visible enough to be contested.
The last fifteen years have broken this model. AI systems — in devices, platforms, apps, and services — have inserted themselves into every stage of a child's development with a depth and speed that traditional institutions have not begun to match. They are present in the classroom, at the dinner table, in the bedroom at midnight. They tailor the information a child receives, the entertainment they consume, the social comparisons they make, and where their attention goes. And they do all of this invisibly, by design, in ways that even the engineers who built them struggle to fully explain.
This is not a metaphor. Parenting researchers now use the phrase "algorithmic parenting" with clinical seriousness to describe the degree to which AI systems functionally co-raise children without holding any of the accountability that the role implies.
What Is Already in Your Home
The presence of AI in family life is not an event that is coming. It is a condition that has already arrived.
If you have a child in school, they almost certainly interact daily with AI-powered learning tools — platforms that track their reading pace, flag their errors, and adapt what they see based on predicted difficulty. A 2024 study by Internet Safety Labs found that the average educational app with trackers forwards data to 6.7 different data broker companies every time a student logs in — not to improve the child's education, but to feed a commercial ecosystem that has determined children's behavioral data is worth billions.
If your child has a smartphone, they are spending an average of eight hours per day in the presence of recommendation engines designed by teams of engineers whose explicit goal is to maximize the time your child spends on their platforms. Not the quality of that time. The quantity of it. At TikTok, internal research identified the precise threshold at which a user "is likely to become addicted to the platform": 260 videos viewed. In typical usage, that threshold is crossed in under 35 minutes.
If your child has ever spoken near a smart speaker, played with a connected toy, or used a gaming console linked to the internet, behavioral data about them — their interests, their patterns, their vulnerabilities — has been collected, stored, and in most cases sold.
What makes this moment historically unusual is not that children face new dangers. Every generation of parents has worried about new dangers. What is unusual is the gap — wide and widening — between the sophistication of the systems observing children and the awareness of the families those children belong to.
The Consent Problem
In August 2024, the U.S. Department of Justice and the Federal Trade Commission sued TikTok and its parent company ByteDance for "flagrantly" violating children's privacy law. The complaint included a detail that deserves to be read slowly: TikTok's human reviewers spent an average of five to seven seconds reviewing each account to determine whether it belonged to a child. Five to seven seconds, for a decision about whether to apply even the baseline protections that federal law requires.
This is not a story about rogue actors. It is a story about incentives. The data of children — their behavioral profiles, their attention, their developmental vulnerabilities — is commercially valuable in ways that have created a structural incentive to collect as much of it as possible, as early as possible, with as little disclosure as the law requires.
By the time a child is old enough to read a terms of service agreement, they have already been profiled for years. Research estimates that 92 percent of American toddlers have a digital footprint before their second birthday. Nearly a quarter of children in the developed world have an online presence before they are born — through ultrasound images posted to social media platforms whose facial recognition systems can match those early photographs to the child's face across decades of future images.
The concept of consent — the bedrock of every privacy framework — becomes almost absurd against this timeline. A child cannot consent to data collection that begins in the womb. A parent cannot meaningfully consent to systems they cannot see.
Why This Series Exists
The ten pieces that follow this one are an attempt at something specific: not alarm, but orientation. Parents who are frightened but confused make poor decisions. So do parents who are dismissive because the subject seems too technical or too large.
What the research actually shows — and we will go deep into the research — is that the risks facing children in AI-saturated environments are real, serious, and unevenly understood. They include the invisible data collection that begins before a child's first birthday and builds a commercial profile that will follow them into adulthood. They include recommendation engines that can steer vulnerable teenagers toward self-harm content with the precision of a targeting system. They include a new wave of AI-enabled predation — deepfakes, AI grooming chatbots, voice cloning scams — that would have been science fiction a decade ago.
But the research also shows that these risks are not inevitable. The families that navigate this environment most successfully are not the ones that have banished technology from their homes. They are the ones that have learned how the systems work, talk about them honestly with their children, and make deliberate rather than default choices about what they allow in.
The goal of this series is to give parents the knowledge they need to do exactly that.
In the next piece, we examine the most invisible part of the story: what AI systems already know about your child, how they built that knowledge, and what they do with it. It begins — as so many of these stories do — earlier than you would expect.
This is Part 1 of "Raising Children in the Age of Intelligent Machines," a 10-part series from PeopleSafetyLab on the intersection of AI and family safety.