OpenAI Reverses Course on ChatGPT Restrictions

Sam Altman's announcement this week about ChatGPT-5 isn't just concerning—it's reckless when it comes to our young people.

As someone who works directly with students and educators navigating AI daily, I find this approach deeply frustrating. This is what happens when tech companies prioritize growth over youth safety.

Let's be clear about what's happening:

OpenAI implemented restrictions on ChatGPT's relationship-building features less than two months ago in response to serious mental health concerns. Now, before we have any evidence those restrictions worked, the company is reversing course: making ChatGPT "your friend again" and introducing erotic content.

The justification? Age-gating and the new restrictions will keep kids safe.

This justification is dangerously inadequate, and here's why:

First, the research is clear: age-gating doesn't work.

  • 22% of children aged 8-17 falsely claim to be 18+ online (Ofcom 2024)

  • 52-58% of young users provide fake ages on major platforms

  • Recent Georgia Tech research on age classification models shows LLMs struggle to distinguish between teenagers and young adults

Recent data reveals a deeper pattern of concern: young users are turning to AI for emotional support without adequate safeguards or education.

Second, young people are already using these tools for personal relationships, as shown in CDT's recently released Hand-in-Hand report, which found:

  • 42% of students use AI for mental health support, companionship, or to escape reality

  • 19% report romantic relationships with AI

Finally, there's no evidence that ChatGPT-5 is safer. OpenAI has provided no data on:

  • Whether the August restrictions have been fully deployed, or whether they worked

  • What safety metrics they're tracking

  • What independent research validates this approach

If there has ever been a case of Big Tech paying lip service to a crisis while accelerating harmful practices, this is it. This is intentional design and deployment of features shown to cause harm to young people, with inadequate safeguards, in service of user growth.

Our students deserve better than being treated as an experiment in the pursuit of product-market fit.
