Articles Stock
AI

ChatGPT’s new GPT-5.3 Instant model will stop telling you to calm down

By Naveed Ahmad | 04/03/2026 | 3 Mins Read


Take a breath, stop spiraling. You’re not crazy, you’re just stressed. And honestly, that’s okay.

If you felt immediately triggered reading those phrases, you’re probably also sick of ChatGPT constantly talking to you as if you’re in some kind of crisis and need gentle handling. Now, things may be improving. OpenAI says its new model, GPT-5.3 Instant, will cut down on the “cringe” and other “preachy disclaimers.”

According to the model’s release notes, the GPT-5.3 update will focus on the user experience, including things like tone, relevance, and conversational flow: areas that may not show up in benchmarks, but can make ChatGPT feel frustrating, the company said.

Or, as OpenAI put it on X, “We heard your feedback loud and clear, and 5.3 Instant reduces the cringe.”

In the company’s example, it showed the same query with responses from the GPT-5.2 Instant model compared with the GPT-5.3 Instant model. In the former, the chatbot’s response begins, “First of all — you’re not broken,” a typical phrase that’s been getting under everyone’s skin lately.

In the updated model, the chatbot instead acknowledges the difficulty of the situation, without trying to directly reassure the user.

The unbearable tone of ChatGPT’s 5.2 model has been annoying users to the point that some have even canceled their subscriptions, according to numerous posts on social media. (It was a huge point of discussion on the ChatGPT Reddit, for example, before the Pentagon deal stole the spotlight.)

People complained that this type of language, where the bot talks to you as if it assumes you’re panicking or stressed when you were just looking for information, comes across as condescending.

Sometimes, ChatGPT replied to users with reminders to breathe and other attempts at reassurance, even when the situation didn’t warrant it. This made users feel infantilized in some cases, or as if the bot was making assumptions about the user’s mental state that just weren’t true.

As one Reddit user recently pointed out, “no one has ever calmed down in all the history of telling someone to calm down.”

It’s understandable that OpenAI would try to implement guardrails of some kind, especially as it faces a number of lawsuits accusing the chatbot of leading people to experience negative mental health effects, which in some cases included suicide.

But there’s a delicate balance between responding with empathy and providing fast, factual answers. After all, Google never asks you about your feelings when you’re searching for information.



