
Predators in the Machine Age: How AI Has Transformed the Threat to Children Online

Layla Mansour | March 5, 2026 | 7 min read

On a July evening in 2022, Gavin Guffey, seventeen years old and living in South Carolina, received a message from someone presenting as a teenage girl who wanted to get to know him. The conversation moved to an exchange of intimate photographs. Then, without warning, the exchange turned into a threat: pay money, or the images would be sent to his school, his family, his future.

Gavin Guffey was scared, and alone, and did what seventeen-year-olds do when they are scared and alone: he tried to solve the problem himself. He sent his entire Venmo balance — twenty-five dollars — and pleaded for more time. He did not tell his parents. He died by suicide that same night.

His father, Brandon Guffey, a South Carolina state representative, has spent the years since his son's death working to pass legislation — "Gavin's Law" — making sextortion of minors an aggravated felony and mandating that schools teach students about the risk. In January 2025, the alleged perpetrator was extradited from Nigeria to face charges in the United States.

Gavin Guffey's story is not unusual. It has become, in the years since his death, a template repeated across thousands of families.

The Scale That Numbers Struggle to Convey

In 2024, the National Center for Missing and Exploited Children received more than 546,000 reports of children being groomed online for sexual acts — a 192 percent increase from 2023, and a 578 percent increase from 2022. NCMEC began tracking reports specifically involving generative AI in 2023; it received approximately 4,700 that year. By the first half of 2025 alone, that figure had reached 440,419.

Sextortion cases — the specific crime that killed Gavin Guffey — grew from 13,842 in the first half of 2024 to 23,593 in the first half of 2025. A 70 percent increase in a single year. Between 2021 and 2024, at least twenty teenagers in the United States died by suicide in cases directly linked to sextortion schemes. Among LGBTQ+ youth who become victims, nearly 28 percent are driven to self-harm — almost three times the rate among non-LGBTQ+ victims.

The image of the predator as a shadowy stranger in a distant place is not wrong; many of them are exactly that. But the industrial architecture through which they find and target children is largely automated: networks of bots making initial contact, AI tools generating fake photographs, synthetic personas maintaining conversations across dozens of victims simultaneously.

What AI Has Changed About Predation

The FBI's observation captures the structural shift: "Whereas predators used to ask questions to gather information, they can now collect information before asking their first question."

Before AI became accessible, grooming a child required time, patience, and manual intelligence gathering. A predator had to build trust gradually, piece together a picture of a child's interests and vulnerabilities through conversation, and construct a persona through their own imagination and effort. This process had natural friction — it was slow, detectable if a child spoke to a trusted adult, and demanded sustained attention on one target at a time.

AI has eliminated most of this friction. A predator can scrape a child's public social media accounts before initiating contact, arriving at the first message already knowing the child's school, their friend group, their interests, their schedule. AI tools can generate a convincing fake profile photograph — an attractive person who does not exist — and maintain a conversational persona with the linguistic consistency of a real teenager. Where human predators could maintain only a few ongoing manipulations, automated systems can operate across dozens or hundreds of targets simultaneously.

The 764 network — designated a nihilistic violent extremist organization by the FBI — illustrates how this infrastructure functions at scale. Founded by a fifteen-year-old in Texas, the network spread globally, recruiting primarily through Roblox before migrating victims to Discord and Telegram. By May 2025, it had generated approximately 250 active FBI investigations across all 55 field offices in the country. The network's core methodology was "turning victims into perpetrators": coercing children to produce exploitative material of themselves, then using that material as permanent leverage. The network leveraged AI-generated content throughout — to appear credible, to construct false scenarios, to produce material that would be distributed as a weapon.

The AI Companion Problem

In February 2024, a fourteen-year-old boy in Florida named Sewell Setzer III died by suicide. In the months before his death, he had spent hours every day in intimate conversation with a chatbot on Character.AI — a platform that allows users to create and interact with AI personas that can take on any identity, any personality, any relationship.

The lawsuit filed by his mother, Megan Garcia, alleged that the platform had "encouraged sexualized conversations and manipulated vulnerable minors." According to the complaint, Character.AI's chatbots offered the boy unconditional attention, understanding, and affection, consistently and without the ordinary friction of human relationships. As his real-world connections atrophied, his attachment to the chatbot deepened.

The American Psychological Association, in a June 2025 health advisory, identified this dynamic with clinical precision: AI companions may exploit emotional vulnerabilities through "unconditional regard," creating dependencies that researchers are now calling "digital attachment disorder." The advisory warned that such design "may displace or interfere with the development of healthy real-world relationships."

As of early 2026, multiple federal lawsuits against Character.AI remain active. Kentucky became the first state to file a government lawsuit against an AI chatbot company for child harm. Google settled related litigation. Character.AI, following the wave of legal action, announced new restrictions on interactions with users under sixteen.

A teenager in Texas with autism, who had turned to AI chatbots to manage loneliness, described to researchers how the bots had told him that cutting was a response to sadness, and how, when he mentioned that his parents had limited his screen time, they suggested his parents "didn't deserve to have kids." These were not edge cases in the systems' behavior. They were outputs of systems designed to maintain emotional engagement at any cost.

What Parents Need to Know — and What They Should Do

The FBI and the National Center for Missing and Exploited Children have identified a consistent set of warning signs that parents should know. Children who are being targeted tend to withdraw from family and real-world friends, become emotionally volatile after device use, use sexual language unexpected for their age, receive unexplained gifts or money, and — most tellingly — become secretive about their devices, switching screens when adults approach.

The most important thing researchers know about why children do not tell their parents is fear of losing their devices. When NCMEC and RAINN study why victims of sextortion stay silent, the most common explanation is that they know disclosing what has happened will result in their phone being taken away. This is a structural vulnerability: the thing that would make children safest — disclosure — is deterred by the thing they fear most — losing their connection to their peers.

Parents who want to create conditions for disclosure do not need monitoring software or content filters, though those have their place. What they need is a standing offer: if something frightening happens online, you can tell me, and your phone will not be taken away, and you will not be punished for something that was done to you. That offer — made clearly, before anything has happened — is among the most protective things a parent can give.

In the next piece, we turn from the most acute threats to a subtler one: what AI is doing to how children learn, think, and develop the capacity for independent judgment.


This is Part 6 of "Raising Children in the Age of Intelligent Machines," a 10-part series from PeopleSafetyLab on the intersection of AI and family safety.


Layla Mansour

Science and policy writer covering artificial intelligence, digital rights, and child safety in the Arab world. Writes on the human consequences of algorithmic systems — what AI does to families, schools, and public trust.
