OpenAI Adds Age Prediction to ChatGPT to Protect Young Users

Key Highlights:

  • The tool applies stricter content filters for minors automatically.
  • Signals include stated age, account history, and usage patterns.
  • Adults misidentified as minors can verify identity through Persona.

OpenAI has introduced a new “age prediction” feature in ChatGPT to help identify minors and apply stronger content limits. The move aims to reduce harmful interactions and prevent young users from accessing adult topics. It comes amid growing global concern over how AI tools affect children.

The update expands OpenAI’s existing safety systems. Instead of relying only on a user’s stated age, ChatGPT now uses an AI model to assess whether an account likely belongs to someone under 18.

OpenAI announced the change in a blog post, calling it part of a broader effort to “put sensible content constraints” around conversations involving young users.

Why is OpenAI adding age prediction now?

OpenAI has faced repeated criticism over ChatGPT’s impact on children. In recent years, teen suicides have been linked to chatbot interactions. The company has also been faulted for allowing sexual or mature discussions with underage users.

Last year, OpenAI had to fix a bug that allowed ChatGPT to generate erotica for users under 18. Regulators and child safety groups have since pushed AI firms to do more.

This new system signals a shift from self-reporting to behavioral detection. It places responsibility on the platform, not the child.

How does ChatGPT guess a user’s age?

The system looks at “behavioral and account-level signals.” According to OpenAI, these include:

  • The age a user claims
  • How long the account has existed
  • When the account is usually active
  • Patterns in how the service is used

The model does not rely on a single clue. It weighs multiple signals to estimate whether an account likely belongs to a minor.

If ChatGPT flags an account as under 18, it automatically applies stricter filters. These block or limit topics related to sex, violence, and other sensitive areas.
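To make the idea concrete, here is a minimal sketch of how several weak signals might be combined into a single under-18 score. OpenAI has not published its model, so the signal names, weights, and threshold below are invented purely for illustration.

```python
# Hypothetical multi-signal age estimation. None of these names or
# weights come from OpenAI — they are illustrative assumptions only.

def minor_likelihood(signals: dict) -> float:
    """Combine boolean account signals into a rough under-18 score in [0, 1]."""
    weights = {
        "stated_age_under_18": 0.4,   # the age the user claims
        "account_age_days_low": 0.2,  # how long the account has existed
        "after_school_activity": 0.2, # when the account is usually active
        "teen_usage_patterns": 0.2,   # patterns in how the service is used
    }
    # No single clue decides the outcome; each active signal adds weight.
    score = sum(weights.get(name, 0.0) for name, active in signals.items() if active)
    return min(score, 1.0)

def content_mode(signals: dict, threshold: float = 0.5) -> str:
    """Apply stricter filters once the combined score crosses a threshold."""
    return "strict" if minor_likelihood(signals) >= threshold else "default"
```

For example, a claimed age under 18 plus after-school activity would cross the 0.5 threshold and trigger the strict filters, while a single weak signal on its own would not. The real system is presumably a learned classifier rather than a hand-weighted sum, but the shape of the decision is similar.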

What if ChatGPT gets it wrong?

OpenAI says mistakes can happen. Users who are wrongly classified as minors can restore full access.

They must verify their age through Persona, OpenAI’s identity partner. The process involves submitting a selfie for confirmation. Once verified, the account regains adult-level access.

This approach mirrors systems used by social networks and gaming platforms.

What this means for users

For adults, the change may go unnoticed. For teens, it reshapes how ChatGPT responds: the tool will steer conversations away from risky content by default.

For OpenAI, it is a signal to regulators that the company is tightening guardrails.

As AI becomes a daily companion for millions, OpenAI is betting that age-aware systems will become a baseline for responsible design.
