AI chatbots are suck-ups, and that may be affecting your relationships (Scientific American, 3/26)
https://www.scientificamerican.com/article/ai-chatbots-are-sucking-up-to-you-with-consequences-for-your-relationships/
March 26, 2026
AI chatbots are suck-ups, and that may be affecting your relationships
A new study of AI sycophancy shows how asking agreeable chatbots for advice can change your behavior
By Allison Parshall edited by Tanya Lewis
Large language model (LLM) chatbots have a tendency toward flattery. If you ask a model for advice, it is 49 percent more likely than a human, on average, to affirm your existing point of view rather than challenge it, a new study shows. The researchers demonstrated that receiving interpersonal advice from a sycophantic artificial intelligence chatbot can make people less likely to apologize and more convinced that they're right.
People like what such chatbots have to say. Participants in the new study, which was published today in Science, preferred the sycophantic AI models to other models that gave it to them straight, even when the flatterers gave participants bad advice.
"The more you work with the LLM, the more you see these subtle sycophantic comments come up. And it makes us feel good," says Anat Perry, a social psychologist at the Hebrew University of Jerusalem, who was not involved in the new study but authored an accompanying commentary article. "What's scary," she says, is that "we're not really aware of these dangers."
-snip-
The new study examined only brief interactions with chatbots. Dana Calacci, who studies the social impact of AI at Pennsylvania State University and wasn't involved in the new research, has found that sycophancy tends to get worse the longer users interact with the model. "I think about this [as] compounded over time," she says.
-snip-
Much more at the link.
I found the research paper this article covers yesterday and posted about it then. But because the paper itself isn't an easy read, and because this article includes important comments on the research from experts who weren't involved in it, the Scientific American piece deserved a separate, new OP. The thread on the research paper is at https://www.democraticunderground.com/100221127158
There are lots of articles on the research paper now. The links below are just a small sampling.
Chats with sycophantic AI make you less kind to others
https://www.nature.com/articles/d41586-026-00979-x
AI is giving bad advice to flatter its users, says new study on dangers of overly agreeable chatbots
https://apnews.com/article/ai-sycophancy-chatbots-science-study-8dc61e69278b661cab1e53d38b4173b6
Study: Sycophantic AI can undermine human judgment
https://arstechnica.com/science/2026/03/study-sycophantic-ai-can-undermine-human-judgment/
AI Chatbots Tend Toward Flattery. Why That's Bad for Students
https://www.edweek.org/technology/ai-chatbots-tend-toward-flattery-why-thats-bad-for-students/2026/03
Is Your Chatbot a Yes-Man? New Study Put Popular Models to the Test
https://www.inc.com/moses-jeanfrancois/is-your-chatbot-a-yes-man-new-study-put-popular-models-to-the-test/91322847
Chatbots Are Telling Their Users That Being an Asshole Is Just Fine
https://www.jezebel.com/chatbots-ai-psychosis-sycophancy-preferred-responses-study-flattery-ethics-am-i-the-asshole