Adolescence on Screen, Adolescence Online
Just two months ago, the Netflix drama Adolescence came as a wake-up call for parents of teens. Centered around a 13-year-old boy arrested for the murder of a female classmate, the four-part series dissects the brutal realities of modern teenage life. On the surface it may look like a knife crime, but it is layered with the sinister undercurrents of cyberbullying and online peer pressure. It’s a jarring yet all-too-familiar portrayal of how social media, misinformation, and unchecked tech access can spiral into real-world tragedy. The fictional Miller family’s story is a mirror held up to society, reflecting what many parents already fear: their children are navigating a digital minefield.
The Tech Industry’s “Parental Control” Response
Against this backdrop, tech companies have begun rolling out new “protective” measures for teens. Just this week, Google announced via emails to parents using its Family Link service that its Gemini AI chatbot would soon be accessible to children under 13. Starting next week, these kids will be able to interact with Gemini on monitored Android devices. The email encouraged parents to “help your child think critically” when using the chatbot, acknowledging concerns about accuracy, bias, and AI hallucinations. But the first question that popped into my mind after seeing this announcement was: what was stopping kids from using the Gemini chatbot before?
Google’s move follows Instagram’s recent introduction of Teen Accounts, a guided experience for users under 16. These accounts include built-in protections that limit who can contact teens and what content they can see, with parental oversight required to loosen those restrictions. It’s a step in the right direction, many would argue. But in 2025, is it too little, too late? Many of us used Instagram as teens ourselves, so how relevant or impactful is this really?
Children Are Already Far Ahead of the Controls
The hard truth is that children are using social media and AI tools much earlier than parents or regulators anticipate. Data from Aura found that 35% of parents say their kids started using social media before age seven, and only 46% feel highly confident about which apps their children are actually using. Apps like Snapchat are far more popular among children than parents realize, used widely across the 9–11, 12–14, and 15–17 age brackets.
The effectiveness of parental controls becomes even murkier when we factor in how easily children bypass them. Just this week, reports from The Wall Street Journal exposed bugs that allowed children to generate explicit content using AI platforms like ChatGPT and Meta AI. And then there’s Character.ai—an increasingly popular platform where AI “friends” engage in detailed roleplay, sometimes of a sexual nature.
According to a report from Common Sense Media, these AI companions pose an “unacceptable risk” to minors. The line between what’s safe and what’s dangerous online is no longer just blurred — it’s dissolving.
A Well-Meaning System in a Flawed Environment
While tools like Google Family Link and Instagram’s Teen Accounts aim to provide guardrails, they are built for a landscape that’s already moved far beyond simple containment. Today’s children are digital natives, often more tech-savvy than their parents. When pornography is illegal for minors yet accessible within seconds, it’s clear that technical restrictions alone can’t shoulder the burden of digital parenting.
And now, with talks of making AI education part of K-12 classrooms, the issue becomes even more complex. On one hand, teaching AI literacy could empower children to use these tools responsibly. On the other, it may further normalize constant interaction with systems that still pose significant safety, privacy, and developmental risks.
A Question of Relevance
So, when Google sends out emails about Gemini’s child-safe rollout or Instagram touts its safer teen experiences, we must ask: are these measures genuinely effective — or just symbolic gestures? Given the sheer depth of tech penetration into children’s lives, can any parental control really keep up?
The rise of generative AI, the proliferation of chatbots, and the dominance of social platforms have created a world where parental guidance must go beyond app permissions. What we need is a cultural shift: open family conversations, digital literacy education, and systemic accountability from the tech industry.
In the meantime, adolescence, on screen and off, remains a high-stakes stage, where the influence of technology looms larger than ever.