
AZJonnie

(3,946 posts)
1. My immediate reaction is OMG, the solution is monitoring.
Mon May 4, 2026, 01:07 AM

But this needs to be done a certain way. Otherwise, we could end up with full-time big-brother government surveillance on AI. That would be bad.

The software needs to cut people the hell off (and way earlier than in this present case). Then it reports them to a human, an employee of the company. The human is then responsible for making a call to authorities (or not), AND their name goes ON IT. Refused or passed along, someONE is responsible, and the company is liable as well for the decision.

But, we can't allow full-time government surveillance to "keep everyone safe". It needs to be kept simple and not controlled by the Feds.

