You seem dedicated to the idea that all faults you read about re: generative AI are universal and unsolvable. Many of them are actually not.
If you use Claude for a day and ChatGPT for a day (something I know you'd never do, but I have), it becomes overwhelmingly obvious that ChatGPT is an order of magnitude more sycophantic. That is de facto evidence that "they all flatter users" is not an unfixable trait. There's just nothing forcing the product makers to remove the flattery.
There's ONE bit of sycophancy I see consistently from Claude, which is something very close to "you're thinking about this the right way". But it almost never varies from that one form of "compliment", and it doesn't use it excessively. It's also usually accurate, because I usually AM thinking about things the right way.
And when I'm not, it will explain why I'm not, i.e., it's not afraid to correct me.
ChatGPT, OTOH, is a whole other ball game. I'm sure you remember the Beach Boys vocal game I played with ChatGPT, which we discussed about a year back, where it constantly told me how great I was? Claude would've been WAY more restrained if I'd tried the same convo with it.
AI is not going away, so to me the proper way to deal with its various problems is not to treat them as though they are intrinsic to the tech. It's to lobby on the premise that these companies COULD fix them but aren't, and that this endangers the health and welfare of the public. Stronger regulations and legal/financial consequences are the means to make them do so. They need to feel that their product could be de-platformed, and that company leadership could be sued or even jailed, if they don't fix these problems. That SHOULD have been done before the public had access, but that genie can't be put back in the bottle.
I'll add that, IMHO, the EU is far more likely to be the instigator of such regulations than the feckless US Congress will ever be.