**The Controversy of ChatGPT’s Wellbeing Feature: A Double-Edged Sword?**
OpenAI has just launched a new feature in ChatGPT, called Wellbeing, which aims to help users better understand and manage their own health by connecting medical records and health apps such as Apple Wellbeing, MyFitnessPal, and others. While it may sound intriguing, many doctors are skeptical about sharing medical records with AI chatbots. They worry that AI may give incorrect answers, miss dangerous symptoms, or offer advice that sounds reassuring but is actually harmful.
**How Does ChatGPT Wellbeing Work?**
With ChatGPT Wellbeing, you can upload files directly or connect apps via the + button or Settings. The feature launches with a range of partner health apps, including some that can analyze blood test results and provide personalized advice.
**But Doctors Warn of Deadly Consequences**
One of the doctors warning against ChatGPT Wellbeing is Julia Borg, a physician and chair of the Digitalization Council of the Medical Association. She points out that AI can "hallucinate", that is, fabricate false information, and that it is dangerous to rely solely on what an AI claims to be true. Others, such as Sofie Zetterström, head of a business area at Inera, echo her concerns, emphasizing that we cannot blindly trust an AI service.
**A Stanford Study Reveals Alarming Consequences**
A study from Stanford also shows that AI therapists can reinforce distorted views and encourage self-harm, a troubling example of how unreliable AI can be.
**Safe Advertisements in ChatGPT?**
OpenAI is now testing advertisements in ChatGPT for free users and those on the new Go plan. These ads will be placed at the bottom of responses and marked clearly as sponsored content.
**Read More:**
If you want to know more about ChatGPT Wellbeing and the new controversy, check out the article from nyheter.aitool.se.
So, is ChatGPT Wellbeing a double-edged sword? Unfortunately, I don't have the answer. But what I do know is that it's crucial to be cautious and think twice before taking an AI's answers at face value.
