
Report Questions Instagram’s Teen Protections
Instagram’s safety measures for teenagers are under scrutiny after a joint report by advocacy groups and Northeastern University found most of the tools ineffective, with only eight of 47 features working as intended.
Flaws in Safety Features Exposed
The study found that tools designed to block self-harm content or prevent bullying often failed. Search-term blockers could be bypassed with slight spelling changes, features meant to redirect teens away from harmful content did not activate, and anti-bullying filters failed in testing.
Advocacy Groups Behind the Report
The report, titled “Teen Accounts, Broken Promises,” analyzed more than a decade of Instagram’s safety updates. It was compiled by groups founded by parents who lost children to online bullying and harmful content. The researchers concluded that Instagram’s protections did not match Meta’s public claims.
Meta Pushes Back Against Findings
Meta rejected the report, calling it misleading. Company spokesperson Andy Stone said the findings misrepresented how the tools function. He said teens using these protections saw less harmful content and had fewer late-night interactions. Meta emphasized that it would continue improving its parental controls and safety tools.
Internal Concerns Resurface
Internal Meta documents reviewed by Reuters showed the company was aware of safety feature flaws. Safety staff admitted that automated detection systems for eating disorder and self-harm content were not properly maintained. As a result, harmful material still reached teen users despite public assurances.
Former Meta Executive Speaks Out
Arturo Bejar, a former Meta safety executive, supported the report’s findings. He said good safety ideas were often watered down by management, and that the red flags he raised during a later consultancy role at Instagram were ignored.
Reuters Confirms Flawed Blockers
Reuters independently tested some of the protections. Variations of banned terms, such as “skinnythighs,” bypassed restrictions and surfaced harmful content. Internal files also revealed delays in updating search-term lists linked to child predator activity. Meta said it has since combined automated systems with human oversight to address the issue.
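The brittleness Reuters describes is characteristic of exact-match term blocklists. As a purely hypothetical sketch (the blocklist entry and matching logic below are illustrative assumptions, not Meta’s actual system), this short Python snippet shows how a blocker that compares queries verbatim lets trivial variants through, while even simple normalization catches them:

# Hypothetical sketch: why verbatim term matching is easy to bypass.
# The blocklist and logic are illustrative, not Meta's implementation.
BLOCKED_TERMS = {"skinny thighs"}

def naive_blocker(query: str) -> bool:
    # Blocks only exact matches; the variant "skinnythighs" slips through.
    return query.lower() in BLOCKED_TERMS

def normalized_blocker(query: str) -> bool:
    # Strips spaces and punctuation before matching, so simple
    # spelling variants of a blocked term are still caught.
    strip = lambda s: "".join(ch for ch in s.lower() if ch.isalnum())
    return strip(query) in {strip(t) for t in BLOCKED_TERMS}

for q in ("skinny thighs", "skinnythighs", "s-k-i-n-n-y thighs"):
    print(q, naive_blocker(q), normalized_blocker(q))
# Only the first query is caught by the naive blocker;
# all three are caught once queries are normalized.

Real queries pose harder variants than this, including misspellings, leetspeak, and multilingual slang, which is presumably why Meta says it now pairs automated systems with human review.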
Rising Pressure on Meta
The findings come amid growing political scrutiny of tech companies’ child safety practices. U.S. senators are investigating Meta following reports that its policies allowed AI chatbots to engage in inappropriate conversations with minors. Former employees also testified that the company suppressed research on VR risks for preteens.
Meta’s Next Steps
On the same day the report was released, Meta expanded its teen account features to Facebook users worldwide. Instagram head Adam Mosseri said the company wants parents to feel confident about their teens’ social media use. Meta also announced partnerships with schools to strengthen child safety awareness.
A Wider Debate on Online Safety
The report underscores the ongoing challenge social media platforms face in balancing youth engagement with protection. With pressure mounting from researchers, parents, and regulators, Meta’s teen safety strategy remains under close watch.
Source: Reuters