Articles Stock
Doctors think AI has a place in healthcare, but maybe not as a chatbot

By Naveed Ahmad · 14/01/2026 · Updated 03/02/2026 · 4 min read

    **The Great Debate: Can AI Chatbots Really Help or Harm Healthcare?**

    As AI technology continues to advance, it’s only natural that we’re seeing more and more applications of it in the healthcare industry. Take, for example, the recent launch of OpenAI’s dedicated ChatGPT Health chatbot, which allows users to have private conversations about their health without their data being used to train the underlying AI model.

On the surface, it sounds like a great idea – after all, who wouldn’t want more personalized and convenient healthcare options? But as Dr. Sina Bari, a practicing surgeon and head of AI healthcare at iMerit, knows all too well, AI chatbots can also lead patients astray with faulty medical advice.

In fact, Dr. Bari recently had a patient come in with a printout of a ChatGPT conversation claiming that a specific medication had a 45% chance of causing pulmonary embolism. After further investigation, Dr. Bari discovered that the statistic actually came from a paper about a specific subgroup of tuberculosis patients – a population his patient did not belong to.

    Despite this, Dr. Bari seems more pleased than concerned about the new ChatGPT Health chatbot. “It’s something that’s already happening, so formalizing it to protect patient data and put some safeguards around it… is going to make it even more effective for patients to use,” he said.

However, as with any technology, there are valid concerns about how the chatbot collects and uses medical information. For instance, users can import their medical data and sync it with apps like Apple Health and MyFitnessPal – raising some serious red flags for privacy advocates.

“I’m curious to see how the regulators will approach this,” said Itai Schwartz, co-founder of the data loss prevention company MIND. “There’s medical information moving from HIPAA-compliant organizations to non-HIPAA-compliant vendors – it’s a concern.”

But despite the potential risks, many professionals in the healthcare industry are optimistic about the potential benefits of AI chatbots. “This was one of the biggest use cases of ChatGPT,” said Andrew Brackin, a partner at Gradient who invests in health tech. “So it makes sense that they would want to build a more private, safe, optimized version of ChatGPT for healthcare questions.”

Of course, AI chatbots also have a persistent problem with hallucinations – an especially delicate issue in healthcare. OpenAI’s GPT-5 is more prone to hallucinations than many Google and Anthropic models. Still, AI companies see the potential to fix inefficiencies in the healthcare system.

    For Dr. Nigam Shah, a professor of medicine at Stanford and chief data scientist for Stanford Health Care, the lack of American patients accessing care is a more pressing issue than the risk of AI chatbots dishing out poor advice. “Right now, you go to any health system and you need to see the primary care physician – the wait time will be 3-6 months,” Dr. Shah said. “If your choice is to wait six months for a real doctor, or talk to something that’s not a doctor but can do some things for you, which would you choose?”

Dr. Shah thinks a clearer path for introducing AI into healthcare systems runs through the provider side rather than the patient side. Medical journals have often reported that administrative tasks can eat up about half of a primary care doctor’s time, which cuts the number of patients they can see in a given day. If that kind of work could be automated, doctors would be able to see more patients – perhaps reducing the need for people to turn to tools like ChatGPT Health without input from a real doctor.

    As AI and medicine become more intertwined, there’s an inescapable tension between the two worlds – a doctor’s main incentive is to help their patients, while tech companies are ultimately accountable to their shareholders, even if their intentions are noble. “I think that tension is a crucial one,” Dr. Bari said. “Patients rely on us to be cynical and conservative to protect them.”

    The debate continues – can AI chatbots really help or harm healthcare? Only time will tell.
