

In April, Meta quietly reversed itself after removing an Instagram post honoring older lesbian relationships in Brazil. The post was not sexual and contained no material harmful to minors. It documented a moment in history when lesbians were forced to hide their relationships as “roommates” or “gal pals” and their love was scrubbed from the public record. Nevertheless, Meta removed the content.
Meta cited its hate speech rules. The Oversight Board later acknowledged what should have been obvious from the start: The Brazil case was an instance of over‑enforcement against a marginalized community, driven by automated systems that could not read context, reclaimed language, or even the full post itself. The content was restored only after outside intervention and advocacy from the LGBTQ+ community.
This case is now being treated as a narrow content moderation error, but policymakers should recognize it as a clear warning about what happens when lawmakers push platforms to police content instead of fixing design. Across the country, states are rushing to “protect kids online” by restricting access to social media or pressuring companies to remove vaguely defined “harmful” content. But what happened in Brazil shows the human cost of that approach.
When platforms are incentivized to remove speech quickly and at scale, they do not become better judges of nuance. Moderation becomes a blunt instrument, and the first people hit are those whose stories require human context and radical empathy to be understood.
If lawmakers actually want to protect kids, they should stop asking platforms to decide which stories are acceptable and start regulating core design choices that cause harm in the first place, like endless scroll, engagement‑based recommendations, and surveillance‑driven feeds.
Here’s why that distinction matters, especially for LGBTQ+ kids and other marginalized young people, such as neurodivergent kids. LGBTQ+ young people are far more likely than their peers to rely on online spaces to find community, information, and support, often because those things are unavailable or unsafe at home or school. But they are also significantly more likely to end up in unsafe online interactions: harassment, grooming, doxxing, or being pushed into high‑risk spaces they didn’t seek out.
In Australia, after a social media ban on anyone under 16 was enacted, disability rights advocates noted that autistic youth were cut off from some of the only support and peer networks available to them.
Recommendation systems don’t understand vulnerability, but they understand engagement. When a queer kid searches for community, platforms often respond by aggressively amplifying whatever keeps them clicking. Usually, this means increasingly sexualized content, adult strangers, extremist rhetoric, or predatory accounts that know exactly how to exploit isolation.
Infinite scroll makes disengagement much harder for adolescents, according to the Electronic Privacy Information Center, even more so for those in vulnerable communities. Algorithmic “friend” or “account” suggestions collapse the boundary between teens and adults. Weak defaults make it difficult to block, mute, or disappear.
Young people, not just LGBTQ+ young people, are exposed to harm online because platforms are built to extract attention, not protect users. Parents are right to be worried and to advocate for change. But a content-based framing misses the real problem.
The greatest risks kids face online don’t come from a single bad post slipping through moderation, but from automated systems that push content at kids they didn’t ask for, connect them to people they don’t know, and keep them scrolling long after warning signs appear.
Policymakers at both the state and federal levels need to design regulations that address those risks directly. Age‑appropriate design codes don’t tell platforms what speech to allow, but they can tell platforms how to behave. Design codes require safer defaults, like limits on behavioral profiling, stronger blocking tools, reduced amplification of unsolicited recommendations, and guardrails that slow down virality and compulsive use.
The public should advocate for refining these products rather than for policies that infringe First and Fourth Amendment rights. Design codes reduce the chance that a curious or lonely kid is algorithmically funneled into danger, like I was, searching for community and nudged toward risk by systems that did not care who I was.
Age‑appropriate design codes offer a way out of this mess. By regulating how platforms are built rather than what people are allowed to say, design code laws reduce harm without turning companies into cultural censors. They don’t require platforms to interpret reclaimed slurs, queer history, or political speech; they require companies to stop engineering addiction and risk.
We don’t need more content or platform bans. We need fewer harmful systems. If we’re serious about protecting kids online, especially the ones already most at risk, this case reminds us exactly where to start.
This article reflects the opinion of the writer.
Lennon Torres is a former Dance Moms performer now fighting for young people’s safety online. A trans activist and University of Southern California alum, she uses her pop‑culture fluency and lived experience to power her work at the Heat Initiative, taking on tech giants and demanding that platforms protect and empower the next generation.



