
A photo of a lit candle with stars in the dark.

A few weeks ago, a photo on LinkedIn from the TikTok For You(th) Summit in London caught my eye. Baby pink background, cheerful branding, adults discussing safety and well-being for youth. It was optimistic. 

This was a stark contrast to Amnesty Tech’s report on TikTok’s treatment of teens, which I had stumbled upon only hours before. The findings focused on how accounts coded as belonging to children were pulled into self-harm content spirals within hours of signup. Those two moments, the glossy summit and the grim findings, sat side by side on my screen like two realities that cannot coexist.

One is the safety story platforms tell us. The other is what research continues to show.

And this isn’t unique to TikTok. Instagram recently received its own reckoning in Teen Accounts: Broken Promises, a report co-authored by the Meta whistleblower who has been warning the company internally for years. The headline number is devastating:

64% of Instagram’s safety mechanisms for teens were “red” — meaning they didn’t work or were easily bypassed. Only 17% worked as promised.

Meta says children are protected. The numbers suggest children are navigating these systems mostly alone.

As a Hungarian, design-focused digital rights researcher — and a parent — I’ve come to recognise the pattern: big tech platforms, like governments, often prioritize the performance of care instead of delivering it. 

We’ve also witnessed children’s safety being compromised, and the compromise justified, not only by social platforms but by generative AI chatbot firms. Look no further than the recent OpenAI suicide case, where a child used ChatGPT to assist them in taking their own life. The company argues that the 16-year-old violated its terms by bypassing safety measures, and that its FAQ warned users not to rely on outputs without independent verification.

Expecting a teenager to read, understand, and comply with the terms of service is not child protection. It is abdication. Terms of use and FAQs are not safety measures; they are legal shields. They exist to manage corporate liability, not to protect a child in crisis.

Offloading responsibility onto minors by asking them to decipher legal language, interpret abstract warnings, and regulate their behavior while interacting with a persuasive, human-like system is the opposite of safety by design. It is a calculated transfer of risk from companies to children.

And let’s be clear: if a “safeguard” can be breached by a child through ordinary, foreseeable use, then it was never a safeguard at all. A protection a child cannot understand, access, or use in the moment it matters is not protection. It is a fiction, and a dangerous one.

Might 2026 signal a long-awaited shift?

The EU’s Digital Services Act (DSA), passed in 2022, was designed to finally give regulators the power to intervene when digital platforms harm users. Article 28 of the DSA requires platforms to make their environments safe and age-appropriate for minors by design. One part of the Article 28 guidelines deals specifically with children’s user experience: the buttons they see, the defaults they’re nudged into, the friction they meet when trying to protect themselves.

Confronting this is vital because children are not harmed only by content; they’re harmed by the design choices that surround them. Children encounter everything from autoplay and infinite scroll to night-time notifications, default-open DMs, and perhaps even a “safety” toggle buried under six cheerful menus. These harms are not distributed evenly: marginalized children, including racialized, disabled, queer, migrant, or low-income youth, often face compounded risks and fewer accessible protections.

None of this is random. All of it is measurable if we measure the right things. These design choices don’t appear in a vacuum. They are embedded in business models that depend on maximizing attention, data extraction, and engagement. Any meaningful safety effort should confront not only interface issues but also the incentives beneath them.

And now the European Parliament is pushing even harder, proposing a 16+ minimum age for social media, a ban on engagement-based recommender systems for minors, and default-off settings for addictive features like infinite scroll and autoplay. While the EU currently leads on regulatory frameworks, children’s digital experiences and the harms they face vary widely around the world. Any approach to platform safety must avoid exporting a single regional standard as universal, and must include voices and realities from the Global Majority.

So, we face a crucial question: how do we study manipulative platform design without exposing children to harm, and in a way that produces evidence regulators can act on? Most research still focuses on what children see, and that is important. We are interested in how they get there. In our work, we don’t chase virality metrics or moderation logs. We look at flows. Because harm is rarely a single feature. It’s the journey: the clicks it takes to report abuse, the invisibility of the “off” switch, how fast a child can exit once something feels wrong.

How might we avoid exposing children to harmful feeds, while still learning what matters—namely, whether safety features are truly usable?

Can they turn off tracking? Can they delete an account? Can they limit recommendations, set time boundaries, and exit when things don’t feel right?

If reporting a harmful interaction takes seven taps and a trip outside the app, then, as Meta’s own data shows, only 0.02% of teens will get help. UX is not superficial: it is the mechanism through which harm travels. If a safety control is unusable or inaccessible to a child, it is effectively not there. And that is a problem for children, parents, and society as a whole.
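To make the idea of auditing flows concrete, here is a minimal sketch of how a single safety journey could be recorded during a moderated walkthrough and flagged for friction. It is illustrative only: the field names, step labels, and the three-tap threshold are assumptions, not a published standard or any platform’s actual metric.

```python
from dataclasses import dataclass, field

@dataclass
class Step:
    """One observed action in a walkthrough (hypothetical fields)."""
    label: str                # e.g. "open conversation options"
    taps: int = 1             # taps/clicks the action required
    leaves_app: bool = False  # did the participant have to leave the app?

@dataclass
class SafetyJourney:
    """A recorded path from 'something feels wrong' to a safety outcome."""
    task: str
    steps: list[Step] = field(default_factory=list)
    completed: bool = False   # did the participant reach the outcome at all?

    def total_taps(self) -> int:
        return sum(s.taps for s in self.steps)

    def friction_flags(self, max_taps: int = 3) -> list[str]:
        """Plain-language flags an auditor or regulator could read."""
        flags = []
        if not self.completed:
            flags.append("participant never reached the safety outcome")
        if self.total_taps() > max_taps:
            flags.append(f"{self.total_taps()} taps needed (threshold: {max_taps})")
        if any(s.leaves_app for s in self.steps):
            flags.append("journey requires leaving the app")
        return flags

# Example: a reporting flow observed in one session
journey = SafetyJourney(
    task="report an abusive DM",
    steps=[
        Step("open conversation options"),
        Step("find 'report' under three nested menus", taps=3),
        Step("fill in a web form outside the app", taps=3, leaves_app=True),
    ],
    completed=True,
)
print(journey.friction_flags())
# ['7 taps needed (threshold: 3)', 'journey requires leaving the app']
```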

Traditional oversight models rely on content analysis, algorithmic audits, or risk-assessment documentation provided by the platforms themselves. All are valuable, but none is sufficient for measuring whether a platform’s design actually complies.

Civil society faces two barriers:

  1. We cannot ethically expose minors to harmful environments in order to study them. And when we involve young people in research, that participation must be ethical, compensated, consent-driven, and grounded in long-term relationships rather than extraction.
  2. This research must be replicable and affordable, not just something a well-resourced organisation can do once.

The good news is that in user research, when a feature is broken, it breaks fast. We don’t need thousands of minors, just small, diverse cohorts of users. Patterns reveal themselves quickly. Children and teens are not only vulnerable; they are experts in their own experiences. Effective safety systems must be shaped with them, not merely for them, especially across lines of race, disability, gender, and socioeconomic background.

The part that needs to scale isn’t the children. It’s the structure around them:

  • interface maps of entire safety journeys
  • shared taxonomies of manipulative patterns
  • standardized walkthrough scripts for regulators
  • usability & accessibility heuristics tied to Article 28

Heuristics, in plain language, are rule-of-thumb tests for whether a design works. Did the child understand the setting? Could they find it? Did it behave as expected? If not, it fails, even if it “exists” on paper.
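To picture what such heuristics might look like inside shared audit tooling, here is a minimal sketch that reduces one walkthrough to pass/fail answers on exactly those questions. The record format and the three checks are illustrative assumptions, not an existing standard or toolkit.

```python
# A sketch: turn the three plain-language questions into pass/fail
# heuristic checks over a single walkthrough record.
# Field names and heuristics are illustrative assumptions.

walkthrough = {
    "feature": "limit who can send me DMs",
    "found_without_help": False,      # Could they find it?
    "explained_in_own_words": True,   # Did they understand it?
    "behaved_as_expected": True,      # Did it do what they expected?
}

HEURISTICS = {
    "findable": lambda w: w["found_without_help"],
    "understandable": lambda w: w["explained_in_own_words"],
    "predictable": lambda w: w["behaved_as_expected"],
}

results = {name: check(walkthrough) for name, check in HEURISTICS.items()}
print(results)  # {'findable': False, 'understandable': True, 'predictable': True}

if not all(results.values()):
    print("FAIL: the feature exists on paper but fails in use")
```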

This matters because compliance has too often meant that the safety feature exists somewhere. What we need is a safety feature that works for, and actually protects, the users it is intended to serve.

And if recent revelations about Meta have shown us anything (lawsuits alleging intentionally ineffective youth safety tools, internal reports in which researchers were allegedly told not to collect data on under-13s “due to regulatory concerns”), it’s that platforms cannot be relied on to evaluate their own design risks. Someone else has to test the interface.

So, what should change look like in 2026?

Here’s the hopeful version, and I think it’s still possible:

  • Default-safe design, not default-addiction.
  • Safety features that work, not safety features that just exist.
  • Safety features co-designed with children, not just built for them.
  • Shared interface design audit tooling, so every investigation doesn’t have to start from zero.

Regulators don’t just need data. They need methods. And they are starting to say so out loud. Parents across the political spectrum want change, and their patience is gone. 2026 won’t be the year we fix everything. But it could be the year we stop pretending and start acting.

No single team, researcher, regulator, or NGO can solve this. Durable change requires shared accountability across youth, parents, educators, civil society, platform workers, and regulators. Our work is one piece of a much larger ecosystem.