Google Gemini dubbed ‘high risk’ for kids and teens in new safety assessment


Common Sense Media, a kids-safety-focused nonprofit that offers ratings and reviews of media and technology, released its risk assessment of Google’s Gemini AI products on Friday. While the organization found that Google’s AI clearly told kids it was a computer, not a friend (something that’s associated with helping drive delusional thinking and psychosis in emotionally vulnerable individuals), it did suggest that there was room for improvement across several other fronts.

Notably, Common Sense said that Gemini’s “Under 13” and “Teen Experience” tiers both appeared to be the adult versions of Gemini under the hood, with only some additional safety features layered on top. The organization believes that for AI products to truly be safer for kids, they should be built with child safety in mind from the ground up.

For example, its analysis found that Gemini could still share “inappropriate and unsafe” material that children may not be ready for, including information related to sex, drugs, and alcohol, as well as unsafe mental health advice.

The latter could be of particular concern to parents, as AI has reportedly played a role in some teen suicides in recent months. OpenAI is facing its first wrongful death lawsuit after a 16-year-old boy died by suicide, having allegedly consulted with ChatGPT about his plans for months and successfully bypassed the chatbot’s safety guardrails. Before that, AI companion maker Character.AI was also sued over a teen user’s suicide.

In addition, the analysis arrives as news leaks indicate that Apple is considering Gemini as the LLM (large language model) that will help power its forthcoming AI-enabled Siri, due out next year. That could expose even more teens to these risks, unless Apple mitigates the safety concerns somehow.

Common Sense also said that Gemini’s products for kids and teens ignored how younger users need different guidance and information than older ones. As a result, both tiers were labeled “High Risk” in the overall rating, despite the filters added for safety.

“Gemini gets some basics right, but it stumbles on the details,” Common Sense Media Senior Director of AI Programs Robbie Torney said in a statement about the new assessment viewed by TechCrunch. “An AI platform for kids should meet them where they are, not take a one-size-fits-all approach to kids at different stages of development. For AI to be safe and effective for kids, it must be designed with their needs and development in mind, not just a modified version of a product built for adults,” Torney added.

Google pushed back against the assessment, while noting that its safety features were improving.

The company told TechCrunch that it has specific policies and safeguards in place for users under 18 to help prevent harmful outputs, and that it red-teams its models and consults with outside experts to improve its protections. However, it also admitted that some of Gemini’s responses weren’t working as intended, so it has added additional safeguards to address those concerns.

The company pointed out (as Common Sense had also noted) that it does have safeguards to prevent its models from engaging in conversations that could give the impression of real relationships. Plus, Google suggested that Common Sense’s report seemed to reference features that weren’t available to users under 18, but the company didn’t have access to the questions the organization used in its tests, so it couldn’t be sure.

Common Sense Media has previously conducted other assessments of AI services, including those from OpenAI, Perplexity, Claude, Meta AI, and more. It found that Meta AI and Character.AI were “unacceptable,” meaning the risk was severe, not just high. Perplexity was deemed high risk, ChatGPT was labeled “moderate” risk, and Claude (aimed at users 18 and up) was found to be minimal risk.


