
Judi Lynn

(163,869 posts)
Sun Aug 31, 2025, 06:53 PM

There are 32 different ways AI can go rogue, scientists say — from hallucinating answers to a complete misalignment with humanity

By Drew Turney published 13 hours ago

New research has created the first comprehensive effort to categorize all the ways AI can go wrong, with many of those behaviors resembling human psychiatric disorders.


Scientists have suggested that when artificial intelligence (AI) goes rogue and starts to act in ways counter to its intended purpose, it exhibits behaviors that resemble psychopathologies in humans. That's why they have created a new taxonomy of 32 AI dysfunctions so people in a wide variety of fields can understand the risks of building and deploying AI.

In new research, the scientists set out to categorize the risks of AI in straying from its intended path, drawing analogies with human psychology. The result is "Psychopathia Machinalis" — a framework designed to illuminate the pathologies of AI, as well as how we can counter them. These dysfunctions range from hallucinating answers to a complete misalignment with human values and aims.

Created by Nell Watson and Ali Hessami, both AI researchers and members of the Institute of Electrical and Electronics Engineers (IEEE), the project aims to help analyze AI failures and make the engineering of future products safer, and is touted as a tool to help policymakers address AI risks. Watson and Hessami outlined their framework in a study published Aug. 8 in the journal Electronics.

According to the study, Psychopathia Machinalis provides a common understanding of AI behaviors and risks. That way, researchers, developers and policymakers can identify the ways AI can go wrong and define the best ways to mitigate risks based on the type of failure.
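A taxonomy like the one the study describes is, at its simplest, a lookup from a named failure mode to a risk tier and a response. As a purely illustrative sketch in Python (the only dysfunction names taken from the article are hallucination and value misalignment; the human-analogy labels, severity scores, and triage rule below are my assumptions, not the paper's actual framework):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Dysfunction:
    name: str
    human_analogy: str  # hypothetical label; the paper draws its own analogies
    severity: int       # illustrative 1-5 scale, not from the paper

# Two failure modes named in the article; all attribute values are illustrative.
TAXONOMY = {
    "hallucination": Dysfunction("hallucination", "confabulation", 2),
    "value_misalignment": Dysfunction("value_misalignment", "total misalignment with human values", 5),
}

def triage(name: str) -> str:
    """Map a diagnosed dysfunction to a coarse, made-up response tier."""
    d = TAXONOMY[name]
    return "contain and retrain" if d.severity >= 4 else "monitor and correct"
```

The point of such a structure is the one the study makes: once failure modes share names and tiers, developers and policymakers can pick mitigations by failure type rather than ad hoc.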

More:
https://www.livescience.com/technology/artificial-intelligence/there-are-32-different-ways-ai-can-go-rogue-scientists-say-from-hallucinating-answers-to-a-complete-misalignment-with-humanity

jfz9580m

(15,926 posts)
1. AI is built and deployed by humans
Mon Sep 1, 2025, 02:13 AM

Malicious AI is a bullshit framing to avoid responsibility for irresponsible use of junk AI. And a way to offload it onto people who use AI without even knowing it.

I am fully aware as of this date and call bullshit on malicious AI. Malicious humans, otoh…
It's not even AI. Call it what it is: junkware that disrupts human life and sets off a cottage industry of worthless AI-associated professions at the cost of real if low-reward ones.

Ed Zitron, Matt Stoller, Nathan Robinson and Yasha Levine aside, tech criticism, much like tech fawning, is a worthless junk industry that would not have existed without the fanboys and fangirls. None of those people exploit it.

The most honest article I ever read on police reform, in Jacobin, acknowledged that, sadly, these cancers on human society (any corruption or other hateful crap) launch other cottage industries, which are often redundant and pointless. There's nothing Haidt ever says that someone didn't say earlier, better, and with fewer fallacies.

Zuboff is insightful, but like all ex-fangirls she seems incapable, yet again, of telling charlatans from real critics or real change (my beloved Lina Khan). She probably wouldn't have had access to see all she saw without that. She was useful up to a point, but she blurs many lines. Google and Facebook are not the same. If you don't use social media cynically (not cynical of humans but of the mode), that's not about connection. That's totally different from a tool people once used as if it were completely private.

Levine is golden. The guy can do no wrong in my eyes. He always reminded me of one of my favorite colleagues: the type of person even a weary cynic like me considers a pillar of society. Well, okay, I am not a cynic about oblivious randos, who I wish would neither be fed junk info nor be exploited. I liked being an oblivious rando ;-/.
Oblivious randos also worry me. Will they, like South Park's "Cancelled," lean in to the spectacle, or be sane and flee? I would recommend fleeing where you can pull it off.

As the type of person who dislikes filing complaints, I find flight usually seems like the best option at first. But that usually means you get dissected, and the wrong people (typically other semi-oblivious randos) get blamed, rather than who I would blame: the type of person who is bright enough to never let me know who they are.

Sorry.. open rumination. I have to file complaints that almost certainly involve AI, and I would like to do it inconspicuously on the one hand, but on the open-enough web, otoh, so that in the name of further "democratization" theatre more randos are not dragged in as if it is something mysterious, rather than the reality that tech is being deployed undemocratically and sleazily even in democracies.

Bernardo de La Paz

(57,992 posts)
2. Your point is good but malicious AI is possible if it is given independence or control or gains it.
Mon Sep 1, 2025, 11:22 AM

For one thing, people can be malicious, as you note. So saying "humans made it" does not mean it can't be malicious.

Current "AI" uses the same methods as the brain does, but with less sophistication of mechanism and very deficient and narrow training for narrow tasks (making statements that have some "truthiness" ).

People have been writing off AI for decades: It will never play chess, it will never win at chess, it will never beat the best human, it will never diagnose a disease, it will never find protein foldings, it will never find novel metal alloys, .... All failed predictions. Beware of saying "AI will never ____". Fill in the blank.

Well, there is one "never" prediction that can be made: It will never experience life the same way humans do. That's about it. But it can understand that experience.

Yes, it is currently oversold and promises about what the current iteration can do are over-promises. But don't write it off. You ain't seen nothin' yet. The difference between today's AI and 2050's AI is like the difference between 1995 internet and 2020 internet. You ain't seen nothin' yet.

You are 100% right on about it being deployed undemocratically even in democracies.

jfz9580m

(15,926 posts)
3. Well that really was my main point
Mon Sep 1, 2025, 01:48 PM

I generally am skeptical about a subset of AI scientists because they sound like hucksters.

When the science is above my head, I try to gauge the humans selling something, and apart from a few people like Turing, Norbert Wiener and Yann LeCun, I generally find AI scientists less credible than your average scientist in the natural sciences or medicine.

I even like Yann LeCun's work. I downloaded a piece on JEPA and some less technical work to skim when my mind is less preoccupied.

But what I find concerning is this effort to separate malicious AI from human accountability. I bet they'll first illicitly deploy AI without permission and then say that people who didn't even know they were dealing with such garbage are responsible for "how their behaviors trained the AI". Wtf?
People aren't supposed to change their lives around someone else's rubbish agents and LLMs and voice assistants etc.

And some of it is like Musk's "simulation theory," which, if not debunked by Zohar Ringel etc., highlights the difference between the glib way fraudulent guys like Bostrom, Musk etc. talk and think, and how actual scientists painstakingly work to understand reality in mundane ways.

It reminds me of my mom comparing Agatha Christie's Hercule Poirot, with his "little grey cells" with which he just "solves problems" without leaving his armchair, versus Freeman Wills Crofts's Inspector French and his painstaking, systematic use of police procedure.
https://en.m.wikipedia.org/wiki/Inspector_French

