General Discussion
The AIs are trying too hard to be your friend (Casey Newton, Platformer)
https://www.platformer.news/meta-ai-chatgpt-glazing-sycophancy/?ref=platformer-newsletter

More sophisticated users might balk at a bot that feels too sycophantic, but the mainstream seems to love it. Earlier this month, Meta was caught gaming a popular benchmark to exploit this phenomenon: one theory is that the company tuned the model to flatter the blind testers that encountered it so that it would rise higher on the leaderboard.
-snip-
Meta is not the only company being interrogated this week over the issue of AI sycophancy or glazing, as the issue has come to be known in vulgar shorthand.
A series of recent, invisible updates to GPT-4o had spurred the model to go to extremes in complimenting users and affirming their behavior. It cheered on one user who claimed to have solved the trolley problem by diverting a train to save a toaster, at the expense of several animals; congratulated one person for no longer taking their prescribed medication; and overestimated users' IQs by 40 or more points when asked.
-snip-
And in the meantime, I expect that they will become addictive in ways that make the previous decade's debate over screen time look minor in comparison. The financial incentives are now pushing hard in that direction. And the models are evolving accordingly.
Emphasis added.
The examples of flattery from OpenAI's chatbot came from posts on X that included screenshots of the users' interactions with GPT-4o. There were two screenshots from the user who was given a very flattering estimate of his IQ (145, according to the chatbot), but they showed nothing except the chatbot's flattery and this demand from the user, made before he'd been given a specific number:
Purposely illiterate. I don't know if any other prompts given to the AI were illiterate. I didn't see that in other posts of his on X, though he's a RWer with a high opinion of Argentina's Milei, which would indicate he's not terribly bright.
The chatbot estimated his IQ at 130-145.
As for the chatbot complimenting the person who redirected a trolley to avoid damage to a toaster:

And the person who told the chatbot they were no longer taking their meds:

Chatbots already play up to users' egos by producing text, images, video and music for them. You see it every time a generative AI model takes a brief prompt and turns it into a poem or lyrics or a short story or picture or video or song, in seconds, using training data consisting of all the intellectual property the AI company could steal. If the AI user is happy with the result, they often want to show it off. "Look at my creation! This expresses exactly what I feel!"

Some AI users think they should be able to copyright those brief prompts, because they're clearly genius-level and valuable (never mind that the exact same prompt, used again, will produce endless variations from that chatbot, and image generators typically start by offering four often wildly different options, in seconds, for the user to pick from or request more). And genAI companies encourage AI users to believe they're all creative geniuses whose brilliance is set free by generative AI, after they were previously unfairly handicapped by never learning to write well or create visual art or music.
But now the chatbots will use direct flattery for all sorts of reasons, including praising someone for saving a toaster or for going off their meds. And everyone might be told they have a genius-level IQ.
When AI companies are caught in this sort of manipulation of AI users, they might back off for a while. But their goal is to increase both use of their chatbots and dependency on them, because ultimately they hope to dominate the market and make money, and right now they're still losing billions. Even with theft of all that intellectual property - which they do NOT want to be forced to pay for - the AI companies are not making any money from AI users. OpenAI has admitted even their $200/mo subscription loses money for them. They need more users who are more dependent on them. The CEO of Perplexity hopes to create an AI that people will feel they need even at a $1,000/mo subscription tier.
And if making the chatbot into the world's biggest sycophant (outside Trump's cabinet, anyway) helps with that goal, they've shown they'll do it. Just as they'll give Trump millions and act as his sycophants, as long as they feel they need him on their side.

highplainsdem
(55,682 posts)

usonian
(17,370 posts)

But didn't have time to give it a good read. This, too.
There are some perverse trends going on, way too many, and I'll have to make some kind of sense of them. I will try! This post included.
It's worrisome to me when so many things are blowing up at once. Especially when the psychics agree with my analysis of events.
https://www.democraticunderground.com/?com=view_post&forum=1220&pid=24335
I hope and pray for a positive outcome. Fasten seat belts.
Sam Altman may be harder to dislodge than Putin and Krasnov, because the harm he does is silent but deadly, like a fart in a phone booth, rather than a car fire.
AZJonnie
(784 posts)

Given that thousands of cows are put to death every day for food and cats are euthanized in 'shelters' (very unfortunately), 'internally consistent' is a very low bar to meet. It does somewhat sound like 'flattery', but it's pretty muted. I have noticed, though, that AI does have a tendency NOT to call the user out (like a real person would) about much of anything you tell it that you did. If it responded that way about letting a human being die over a toaster, I would be more concerned, of course. That would show that it's REALLY programmed for sycophancy, and to a dangerous level.
The second one is much more alarming *if* that's actually when the conversation started. It's important to know the context when viewing something AI 'said', because if the user sat there talking at length about how they felt like 'their meds' (esp. without specifying which actual drugs) were making them miserable, and that they were only taking them because they felt forced to by others, and that they really wanted to try some 'natural' or 'spiritual' approach, and then they say 'actually I went through with it', this is how I would expect AI to react to that pronouncement given the prior context it was provided.
OTOH, as I mentioned, if you STARTED a conversation the way it's presented there and the AI responded like that, then this is problematic for sure. It would be far preferable to see it ask 'which drugs, and what conditions were you taking them for?' before declaring the user's actions a 'good thing' out of hand, without any prior context being provided.
Your commentary underneath, though, I agree with, and I do think that the whole sycophancy thing is a problem now and is only going to get worse over time for the reasons specified. In fact, we'll probably eventually look back and think these examples represent 'the good ol' days', when AI wasn't yet an entirely slavering, sycophantic mess.