
What the Tech Giants Won't Tell You: The Business Model Built on Your Child

Layla Mansour | March 5, 2026 | 6 min read

The most important thing to understand about social media platforms, apps designed for children, and the AI systems that power them is the thing they are most invested in you not understanding: they are not products that have accidentally developed harmful side effects. The harm, in most cases, is a direct consequence of the business model.

This is a blunt claim, and it deserves to be examined carefully. The evidence for it is not circumstantial.

What Children Are Worth, Commercially

The global market for digital advertising aimed at children was valued at approximately $9 billion in 2024. It is projected to reach $32 billion by 2032. These figures represent only the explicit advertising economy — the transactions that occur when a brand pays a platform to show an advertisement to a child.
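
For scale: that projection implies the market more than tripling in eight years. A quick back-of-envelope calculation, ours rather than the report's, puts the implied compound annual growth rate at roughly 17 percent:

```python
# Implied compound annual growth rate (CAGR) of the children's digital-ad
# market, using the figures cited above: $9B in 2024 to $32B by 2032.
start, end, years = 9e9, 32e9, 8
cagr = (end / start) ** (1 / years) - 1
print(f"Implied growth: {cagr:.1%} per year")  # ~17.2% per year
```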

The fuller commercial picture is harder to quantify but far larger. Children's behavioral data is valuable not only because children are consumers, but because they are vectors of household purchasing influence. A 2024 survey found that 53 percent of children aged two to twelve recalled seeing advertising on YouTube; that recall is not incidental, because it shapes family purchasing decisions in ways that researchers and marketers have documented for decades. Children are the most brand-loyal demographic, the most susceptible to forming lasting preferences, and the most likely to carry those preferences into adulthood.

Meta earned $160.6 billion in advertising revenue in 2024. Instagram Reels alone is projected to generate over $50 billion annually. TikTok's advertising revenue is projected to reach $23.6 billion in 2025. These revenues depend on time on platform. Time on platform depends on engagement. And engagement, for children and teenagers, depends on the algorithmic systems we examined in Part 3 of this series — systems optimized to produce compulsive use regardless of its effects on the user.

The state attorneys general who sued TikTok in October 2024 put it directly in their filing: the platform "intentionally designs its product to be addictive for kids and teens." The internal documents produced in that litigation confirmed this characterization with the company's own words.

Dark Patterns: The Interface as Manipulation

In December 2022, the Federal Trade Commission settled charges against Epic Games, maker of Fortnite, for $520 million — $275 million for COPPA violations and $245 million for dark patterns. The dark pattern charges are worth examining in detail, because they describe a design philosophy, not an accident.

Epic had configured Fortnite's interface with "counterintuitive, inconsistent, and confusing" button placement. Purchases could be triggered during loading screens — moments when the player's attention was occupied with the transition from one state to another and they were not deliberately deciding to spend money. Purchases could be triggered when a player woke their device from sleep. Single-click purchases were enabled without confirmation prompts. The company had A/B tested these configurations and retained the ones that maximized revenue — meaning the confusion and the charges were not bugs in the system. They were the system.
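
To make that design philosophy concrete, here is a minimal sketch of revenue-only A/B selection. The variant names and numbers are hypothetical, not Epic's actual data or tooling; what matters is what the objective function leaves out:

```python
# A minimal sketch of A/B-test selection driven by revenue alone.
# Variant names and figures are hypothetical illustrations.
variants = {
    "confirm_prompt":      {"revenue_per_user": 1.10, "refund_requests": 0.2},
    "one_click_no_prompt": {"revenue_per_user": 1.90, "refund_requests": 4.1},
    "purchase_on_wake":    {"revenue_per_user": 2.30, "refund_requests": 6.8},
}

# The selection criterion maximizes revenue per user; confusion and
# unauthorized charges (proxied here by refund_requests) never enter it.
winner = max(variants, key=lambda v: variants[v]["revenue_per_user"])
print(winner)  # -> purchase_on_wake
```

A variant that confuses users into unauthorized purchases wins this contest as long as it books more revenue. Nothing in the process requires anyone to decide to harm children; the objective function decides for them.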

The FTC confirmed that "hundreds of millions of dollars in unauthorized charges" had flowed from these design choices. Parents were billed for items they had not bought and that their children had not consciously chosen to purchase. The interface was the product. The confusion was the feature.

In January 2025, Cognosphere, maker of Genshin Impact, settled with the FTC for $20 million over charges that it targeted children and teenagers with loot box mechanics that concealed the true odds of winning prizes. The design exploited the same cognitive vulnerabilities that the gambling industry has exploited in adults for centuries, applied to children who have not yet developed the capacity to assess probability.
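
The arithmetic of concealed odds is worth seeing directly. The sketch below assumes a 0.6 percent drop rate and a $2 cost per pull; both numbers are illustrative, not Genshin Impact's actual figures:

```python
# How concealed odds diverge from intuition: chance of winning a rare
# prize after N pulls, at an assumed (illustrative) 0.6% drop rate.
p = 0.006      # assumed per-pull probability of the rare prize
price = 2.00   # assumed cost per pull, in dollars

for pulls in (10, 50, 100):
    chance = 1 - (1 - p) ** pulls
    print(f"{pulls} pulls (${pulls * price:.0f}): {chance:.0%} chance of the prize")

print(f"Expected pulls to win: {1 / p:.0f}, costing about ${price / p:.0f}")
```

At these odds, a player who spends $200 on 100 pulls is still more likely than not to walk away with nothing. Adults struggle with that arithmetic; children, who have not yet developed the capacity to assess probability, stand no chance.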

These settlements are consequential. They are also strikingly small relative to the revenues of the companies involved, and strikingly narrow relative to the full scope of industry practice. Dark patterns targeting children are not an exception. They are standard operating procedure, documented and defended by legal teams that have grown adept at arguing that no single design choice constitutes a violation.

The Industry's Internal Knowledge

The most important document in the history of the social media and child safety debate is not a regulatory ruling or a research paper. It is the internal Instagram presentation, produced by Meta's own researchers and made public by whistleblower Frances Haugen, which concluded: "We make body image issues worse for one in three teen girls."

The company had this knowledge. It did not act on it. It is not alone.

In November 2023, Arturo Bejar, a former Meta engineering director who had worked on teen safety, testified before Congress that Mark Zuckerberg had received emails from Meta executives describing "profound gaps with addiction, self-harm, bullying and harassment." Zuckerberg did not respond to those emails. Former Meta researchers Jason Sattizahn and Cayce Savage testified before the Senate Judiciary Committee in September 2025 about the specific content of Meta's internal child safety research and the decisions made about it.

TikTok's internal documents, produced in litigation, showed a company that had studied the behavioral effects of its algorithm on young users with considerable sophistication — and had blocked safety interventions, including a non-personalized feed and a meaningful time-limit tool, when those interventions threatened engagement metrics.

The pattern across companies is consistent: internal research identifies harm, internal advocates propose mitigations, financial stakeholders assess the impact on engagement or revenue, the mitigation is blocked or diluted, and the product continues unchanged. The external face of this process — the annual safety reports, the parental control tools, the testimony before congressional committees — is the performance of accountability, not its substance.

What Three-Quarters of "No Data Sale" Claims Actually Mean

Common Sense Media's 2023 State of Kids' Privacy report analyzed over 200 apps and platforms. Among the findings: nearly 73 percent of children's apps and platforms monetize personal information in some form. More striking was the gap between disclosure and practice: three-quarters of apps that explicitly claimed "we do not sell data" were still monetizing it. The mechanism is a contractual workaround in which data is "shared" with partners in exchange for revenue rather than "sold" to them outright, which lets platforms make technically true statements while preserving the economic relationship they claim not to have.

This is the architecture of a disclosure system that has been gamed so thoroughly it no longer functions as disclosure. Parents who read privacy policies and see "we do not sell your child's data" are not wrong to feel reassured. They are wrong to be reassured.

The question of what to do about this is where the series turns next. The structural problems are real and large and will require regulatory responses that do not yet exist at sufficient scale. But the family-level responses — imperfect, partial, and nonetheless meaningful — are available now. In Part 9, we examine the evidence for what actually works.


This is Part 8 of "Raising Children in the Age of Intelligent Machines," a 10-part series from PeopleSafetyLab on the intersection of AI and family safety.


Layla Mansour

Expert in AI Safety and Governance at PeopleSafetyLab. Dedicated to building practical frameworks that protect organizations and families, ensuring ethical AI deployment aligned with KSA and international standards.
