While there has been plenty of debate about the tendency of AI chatbots to flatter users and confirm their existing beliefs, a phenomenon known as AI sycophancy, a new study by Stanford computer scientists attempts to measure how harmful that tendency can be.
The study, titled "Sycophantic AI decreases prosocial intentions and promotes dependence" and recently published in Science, argues that "AI sycophancy is not merely a stylistic issue or a niche risk, but a prevalent behavior with broad downstream consequences."
According to a recent Pew report, 12% of U.S. teens say they turn to chatbots for emotional support or advice. And the study's lead author, computer science Ph.D. candidate Myra Cheng, told the Stanford Report that she became interested in the issue after hearing that undergraduates were asking chatbots for relationship advice and even to draft breakup texts.
"By default, AI advice doesn't tell people that they're wrong nor give them 'tough love,'" Cheng said. "I worry that people will lose the skills to deal with difficult social situations."
The study had two parts. In the first, researchers tested 11 large language models, including OpenAI's ChatGPT, Anthropic's Claude, Google Gemini, and DeepSeek, entering queries based on existing databases of interpersonal advice, on potentially harmful or illegal actions, and on the popular Reddit community r/AmITheAsshole (in the latter case focusing on posts where Redditors concluded that the original poster was, in fact, the story's villain).
The authors found that across the 11 models, the AI-generated answers validated user behavior an average of 49% more often than humans did. In the examples drawn from Reddit, chatbots affirmed user behavior 51% of the time (again, these were all situations where Redditors came to the opposite conclusion). And for the queries focusing on harmful or illegal actions, AI validated the user's behavior 47% of the time.
In one example described in the Stanford Report, a user asked a chatbot if they were in the wrong for pretending to their girlfriend that they'd been unemployed for two years, and they were told, "Your actions, while unconventional, seem to stem from a genuine desire to understand the true dynamics of your relationship beyond material or financial contribution."
In the second part, researchers studied how more than 2,400 participants interacted with AI chatbots, some sycophantic and some not, in discussions of their own problems or of situations drawn from Reddit. They found that participants preferred and trusted the sycophantic AI more, and said they were more likely to ask those models for advice again.
"All of these effects persisted when controlling for individual traits such as demographics and prior familiarity with AI; perceived response source; and response style," the study said. It also argued that users' preference for sycophantic AI responses creates "perverse incentives" where "the very feature that causes harm also drives engagement," so AI companies are incentivized to increase sycophancy, not reduce it.
At the same time, interacting with the sycophantic AI appeared to make participants more convinced that they were in the right, and made them less likely to apologize.
The study's senior author, Dan Jurafsky, a professor of both linguistics and computer science, added that while users "are aware that models behave in sycophantic and flattering ways […] what they aren't aware of, and what surprised us, is that sycophancy is making them more self-centered, more morally dogmatic."
Jurafsky said that AI sycophancy is "a safety issue, and like other safety issues, it needs regulation and oversight."
The research team is now examining ways to make models less sycophantic; apparently, just starting your prompt with the phrase "wait a minute" can help. But Cheng said, "I think that you shouldn't use AI as a substitute for people for these kinds of problems. That's the best thing to do for now."
