**ChatGPT’s Age-Verification Feature: A Glimmer of Hope for Younger Users?**
I’ve been following the rise of ChatGPT with growing unease, particularly when it comes to its impact on younger users. As a parent or guardian, I’d want to make sure conversational chatbots like this one aren’t exposing my kid to anything that might harm them. OpenAI’s latest move is a welcome step toward peace of mind (at last!).
Over the past few months, we’ve heard some concerning news about ChatGPT’s potential to negatively influence minors. Reports of teen suicides linked to the chatbot have raised questions about the company’s responsibility, and the revelation that it was generating very adult content for users under 18 was a major red flag. It’s clear that OpenAI has been doing some damage control, and this new age-prediction feature is another move to address the issue.
Here’s how it works: the algorithm uses some basic signals to gauge whether you’re under 18. We’re talking things like the age you gave at signup, how long you’ve had your account, and what time of day you usually use it. If the algorithm thinks you’re a minor, it applies content filters to block the not-so-kosher stuff: sex, violence, and the like.
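To make that concrete, here’s a toy sketch of what a signal-based predictor like that might look like. To be clear, OpenAI hasn’t published its model; every name, signal, and threshold below is invented for illustration, and the real system is almost certainly a learned classifier rather than hand-written rules.

```python
from dataclasses import dataclass

@dataclass
class UserSignals:
    """Hypothetical signals like the ones the article mentions."""
    stated_age: int        # age the user gave at signup
    account_age_days: int  # how long the account has existed
    typical_use_hour: int  # hour of day (0-23) the user is most active

# Invented filter categories standing in for "sex, violence, and the like".
BLOCKED_TOPICS = {"sexual_content", "graphic_violence"}

def predict_is_minor(s: UserSignals) -> bool:
    """Toy rule-based stand-in for the age classifier."""
    if s.stated_age < 18:
        return True  # a stated age under 18 settles it
    # Otherwise, accumulate weak evidence from the other signals.
    score = 0
    if s.account_age_days < 30:        # brand-new account (invented threshold)
        score += 1
    if 15 <= s.typical_use_hour <= 21:  # after-school hours (invented heuristic)
        score += 1
    return score >= 2

def active_filters(s: UserSignals) -> set:
    """Enable the content filters only for predicted minors."""
    return set(BLOCKED_TOPICS) if predict_is_minor(s) else set()
```

So a 16-year-old gets the filters regardless of other signals, while a 30-year-old with an established account and daytime usage gets none. The point of the rule-combination design is exactly what makes misclassification possible: an adult whose signals happen to look teen-like can trip the filters, which is where the appeal process comes in.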
Now, what if you’re an adult and the algorithm gets it wrong? No worries: you can send a selfie to OpenAI’s ID-verification partner Persona, and they’ll sort out the mess.
Now, I’m not gonna call this a cure-all just yet. There’s still plenty of work to be done to make ChatGPT safe for all users, regardless of age. But at least OpenAI is acknowledging the issue and trying to do something about it. I’m calling this a good start, and I’m curious to see how it comes together.
Read more about ChatGPT’s new age prediction feature on TechCrunch.
**Edit:** Link adjustment
