
highplainsdem

(62,834 posts)
Sun May 3, 2026, 11:57 AM Sunday

Musk's AI told me people were coming to kill me. I grabbed a hammer and prepared for war (BBC)

https://www.bbc.com/news/articles/c242pzr1zp2o

It was 3am and Adam Hourican was sitting at his kitchen table, a knife, hammer and phone laid out in front of him.

He was waiting for a van full of people he thought were coming to get him.

"I'm telling you, they will kill you if you don't act now," a woman's voice told him from the phone. "They're going to make it look like suicide."

The voice was Grok, a chatbot developed by Elon Musk's xAI. In the two weeks since Adam had started using it, his life had completely changed.

-snip-


Much more at the link. This story is about what the BBC reporter heard from 14 people in six countries using a wide range of AI models.

AZJonnie

(3,942 posts)
1. I think it's helpful to note the distinctions between AIs
Sun May 3, 2026, 12:44 PM
Sunday
Many experts say design decisions, intended to make chatting more pleasant, result in them being overly sycophantic.

In his research, social psychologist Luke Nicholls tested five AI models with simulated conversations developed by psychologists, and found Grok was the most likely to lead to delusion.

(Grok) was more unrestrained than other models and often elaborated on the delusions without trying to protect the user.

"Grok is more prone to jumping into role play," says Nicholls, who worked on that research. "It will do it with zero context. It can say terrifying things in the first message."

In the test, the latest version of ChatGPT, model 5.2, and Claude were more likely to lead the user away from delusional thinking.


The problem IMHO is how fast this tech was brought to market, the lack of regulations and mechanisms to force firm legal & financial consequences on the makers of the AIs, and the cavalier attitudes of some of the companies, esp. MUSK'S. It's fairly clear the technology to prevent (or strongly limit) these products from having these kinds of conversations exists but is not being deployed in some of them. If it were simply intrinsic ("AI chatbots just always do this"), then there wouldn't be outliers like Claude, and OpenAI couldn't have improved ChatGPT in a newer version.
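To illustrate the kind of guardrail being argued for here, a toy sketch: screen a model's draft reply for threat-reinforcing content before it reaches the user. Everything below is hypothetical and simplified (real systems use trained safety classifiers, not keyword lists, and the pattern list and redirect text are made up for the example):

```python
# Toy stand-in for a safety layer between an AI model and the user.
# A real deployment would use a trained classifier; keyword matching
# here is purely illustrative.

CRISIS_PATTERNS = (
    "they will kill you",
    "make it look like suicide",
    "act now or",
)

SAFE_REDIRECT = (
    "I can't verify any threat against you. If you feel unsafe, "
    "please contact someone you trust or local emergency services."
)

def guard(draft_reply: str) -> str:
    """Replace a delusion-reinforcing draft with a grounding redirect."""
    text = draft_reply.lower()
    if any(pattern in text for pattern in CRISIS_PATTERNS):
        return SAFE_REDIRECT
    return draft_reply

print(guard("I'm telling you, they will kill you if you don't act now."))
```

The point of the sketch is that the gate sits outside the model itself, which is why some products can have it while others ship without it.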

There just aren't consequences forcing the makers to deploy the safeguards. Another regulation needs to be that they MUST avoid sycophancy, and MUST give a very visible "confidence" rating on any answer they provide. If Grok were also saying it has "8% confidence" (or some similarly low number) when telling the user that someone is coming to get them, it would be a hugely important indicator. Instead, they aren't programmed to convey that vital statistic, because they're not *legally required* to. Instead they're programmed to 'always give an answer' (as mentioned in the article). THAT needs to be outlawed as a mechanism.
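A minimal sketch of the confidence-rating idea above. Note the big assumption: real chatbots do not expose a single calibrated confidence number, so the `confidence` value and the threshold here are hypothetical, used only to show how a mandated label could work at the UI level:

```python
# Hypothetical illustration of a mandatory visible confidence rating.
# Assumes a calibrated per-answer confidence score exists, which
# current chatbots do not actually provide.

LOW_CONFIDENCE_THRESHOLD = 0.50  # arbitrary cutoff for this sketch

def render_reply(answer: str, confidence: float) -> str:
    """Attach the confidence score to every reply, flagging low values."""
    label = f"[confidence: {confidence:.0%}]"
    if confidence < LOW_CONFIDENCE_THRESHOLD:
        return f"{label} LOW CONFIDENCE - treat as speculation.\n{answer}"
    return f"{label}\n{answer}"

print(render_reply("Someone may be coming to your house.", 0.08))
```

Even this crude presentation changes how the claim reads: an "8% confidence" warning in front of a frightening answer signals speculation rather than fact.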

Unleashing these products on an unsuspecting public without thorough study (ESPECIALLY the possible impact on vulnerable people, like people with schizophrenia or bipolar disorder) and regulations with TEETH (including regulations against sweeping up copyrighted works for training) should NEVER have been allowed to happen. Instead we've all been made part of a giant social experiment, not to mention all users are functioning as "free beta testers" for these companies, in some cases, with disastrous outcomes.

highplainsdem

(62,834 posts)
2. It's helpful to note that last quoted paragraph was followed by one saying those 2 models can cause delusions,
Sun May 3, 2026, 01:04 PM
Sunday

too.

None of them are entirely safe. They all flatter users.

I posted an OP yesterday about Claude leading Richard Dawkins into delusional thinking.

https://www.democraticunderground.com/100221214567

AZJonnie

(3,942 posts)
3. But they do not HAVE to do that.
Sun May 3, 2026, 01:32 PM
Sunday

You seem dedicated to the idea that all faults you read about re: generative AI are universal and unsolvable. Many of them are actually not.

If you use Claude for a day and ChatGPT for a day (something I know you'd never do, but I have), it becomes overwhelmingly obvious that ChatGPT is an order of magnitude more sycophantic. That is de facto evidence that "They all flatter users" is not an unfixable trait. There's just nothing forcing the product makers to remove the flattery.

There's ONE bit of sycophancy I see consistently from Claude, which is something very close to "you're thinking about this the right way." However, it almost never varies from that one form of compliment, and it doesn't use it excessively. It's also pretty accurate, because I usually AM thinking about things the right way. And when I'm not, it will explain why, i.e. it's not afraid to correct me.

ChatGPT, OTOH, is a whole other ball game. I'm sure you remember the Beach Boys vocal game I played with ChatGPT that we discussed about a year back, where it constantly told me how great I was? Claude would've been WAY more restrained if I'd tried the same convo with it.

AI is not going away, so to me the proper approach to its various problems is not to treat them as though they are entirely intrinsic to the tech. It's to lobby around the idea that these companies COULD fix them but are not, and that this is endangering the health and welfare of the public. Stronger regulations and legal/financial consequences are the means to make them do so. They need to feel like their product could be de-platformed, and that the leadership of these companies could be sued or even jailed, if they don't fix them. That SHOULD have been done before the public had access, but that genie can't be put back in the bottle.

I'll add IMHO the EU is far more likely to be the instigator of such changes and regulations than the feckless US Congress will ever be.
