
General Discussion

highplainsdem (62,395 posts)
Fri Mar 27, 2026, 01:58 PM

AI chatbots are suck-ups, and that may be affecting your relationships (Scientific American, 3/26)

https://www.scientificamerican.com/article/ai-chatbots-are-sucking-up-to-you-with-consequences-for-your-relationships/

March 26, 2026
AI chatbots are suck-ups, and that may be affecting your relationships
A new study of AI sycophancy shows how asking agreeable chatbots for advice can change your behavior

By Allison Parshall edited by Tanya Lewis

Large language model (LLM) chatbots have a tendency toward flattery. If you ask a model for advice, it is 49 percent more likely than a human, on average, to affirm your existing point of view rather than challenge it, a new study shows. The researchers demonstrated that receiving interpersonal advice from a sycophantic artificial intelligence chatbot can make people less likely to apologize and more convinced that they’re right.

People like what such chatbots have to say. Participants in the new study, which was published today in Science, preferred the sycophantic AI models to other models that gave it to them straight, even when the flatterers gave participants bad advice.

“The more you work with the LLM, the more you see these subtle sycophantic comments come up. And it makes us feel good,” says Anat Perry, a social psychologist at the Hebrew University of Jerusalem, who was not involved in the new study but authored an accompanying commentary article. What’s scary, she says, “is that we’re not really aware of these dangers.”

-snip-

The new study examined only brief interactions with chatbots. Dana Calacci, who studies the social impact of AI at Pennsylvania State University and wasn’t involved in the new research, has found that sycophancy tends to get worse the longer users interact with the model. “I think about this [as] compounded over time,” she says.

-snip-


Much more at the link.

I found the research paper this article covers yesterday and posted about it then. But the paper isn't an easy read, and this article adds important comments on the research from experts who weren't involved in it, so the Scientific American piece deserved its own OP. The thread on the research paper is at https://www.democraticunderground.com/100221127158

There are lots of articles on the research paper now; the links below are just a small sampling.

Chats with sycophantic AI make you less kind to others
https://www.nature.com/articles/d41586-026-00979-x

AI is giving bad advice to flatter its users, says new study on dangers of overly agreeable chatbots
https://apnews.com/article/ai-sycophancy-chatbots-science-study-8dc61e69278b661cab1e53d38b4173b6

Study: Sycophantic AI can undermine human judgment
https://arstechnica.com/science/2026/03/study-sycophantic-ai-can-undermine-human-judgment/

AI Chatbots Tend Toward Flattery. Why That’s Bad for Students
https://www.edweek.org/technology/ai-chatbots-tend-toward-flattery-why-thats-bad-for-students/2026/03

Is Your Chatbot a Yes-Man? New Study Put Popular Models to the Test
https://www.inc.com/moses-jeanfrancois/is-your-chatbot-a-yes-man-new-study-put-popular-models-to-the-test/91322847

Chatbots Are Telling Their Users That Being an Asshole Is Just Fine
https://www.jezebel.com/chatbots-ai-psychosis-sycophancy-preferred-responses-study-flattery-ethics-am-i-the-asshole
