The AI industry has a big Chicken Little problem (Mashable - and yes, this is about Matt "Chicken Little" Shumer)
https://mashable.com/article/viral-something-big-is-coming-essay-artificial-intelligence-warning
-snip-
At the same time, it's a running joke in the tech world that you can already find an app for everything. ("There's an app for that.") That means coding models can base their work on tens of thousands of existing applications. Is the world really going to be irrevocably changed because we now have the ability to create new apps more quickly?
Let's look at the legal claim, where Shumer says that AI is "like having a team of [lawyers] available instantly." There's just one problem: Lawyers all over the country are getting censured for actually using AI. A lawyer tracking AI hallucinations in the legal profession has documented 912 such cases so far.
It's hard to swallow warnings about AGI when even the most advanced LLMs are still completely incapable of fact-checking. According to OpenAI's own documentation, its latest model, GPT-5.2, has a hallucination rate of 10.9 percent. Even when given access to the internet to check its work, it still hallucinates 5.8 percent of the time. Would you trust a person who hallucinated six percent of the time?
Yes, it's possible that a rapid leap forward is imminent. But it's also possible that the AI industry will rapidly reach a point of diminishing returns. And there are good reasons to believe the latter is likely. This week, OpenAI introduced ads into ChatGPT, a tactic it previously called a "last resort." OpenAI is also rolling out a new "ChatGPT adult" mode to let people engage in erotic roleplay with Chat. That's hardly the behavior of a company that's about to unleash AI super-intelligence onto an unsuspecting world.
-snip-