
Yo_Mama_Been_Loggin

(136,859 posts)
Mon Apr 27, 2026, 02:08 PM Monday

AI is making it very easy for the government to spy on you. Some lawmakers are worried.

The long-running fight to rein in the government’s power to search Americans’ phone calls, emails and text messages without a warrant has gained new urgency on Capitol Hill over concerns that AI will supercharge state surveillance.

Privacy advocates warn that if the law enabling warrantless monitoring of Americans is not meaningfully reformed, many citizens could be subject to increasingly invasive AI-powered analysis of communications swept up by foreign intelligence programs as well as commercially available location and behavioral data.

“Imagine instead of doing a query with one person that you turned AI loose on these databases,” Rep. Thomas Massie, R-Ky., said Thursday at a press conference announcing a new bill to close data-collection loopholes. “There’s virtually nothing the government can’t know about you.”

Section 702 of the Foreign Intelligence Surveillance Act (FISA) allows the government to collect the communications of foreigners abroad, but it also enables the government to collect messages, emails and other transmissions from Americans when they contact foreigners. The government can then perform warrantless searches on those emails, messages and other communications. Though the provision was originally passed in 2008, lawmakers must renew it every few years.

https://www.yahoo.com/news/articles/ai-making-very-easy-government-111500084.html


AZJonnie

(3,938 posts)
1. Even simpler, is everything anyone ever asks an AI going to be subject to surveillance?
Mon Apr 27, 2026, 02:29 PM Monday

Are questions about one's own health private? Or can OpenAI and Anthropic sell the fact that you, personally, asked XYZ question to insurance companies? Employers? Is something you ask about how you're feeling after you took some cocaine going to be forwarded to LEOs and/or insurance companies and employers? If you express that you distrust capitalism (or hate DJT), will it be forwarded to the FBI so they can monitor you for 'seditious' thoughts? Et cetera.

What's protected WRT what someone says to or asks an AI still seems to be very much an open question, and the topic is potentially VERY important.

highplainsdem

(62,823 posts)
4. Good chance it will be. The genAI companies started out by stealing the world's intellectual property.
Mon Apr 27, 2026, 03:29 PM Monday

Why in the world would you expect the tech bros behind that NOT to use every bit of data you give them in every way that's convenient and/or profitable for them?

AZJonnie

(3,938 posts)
6. Oh I completely agree!
Mon Apr 27, 2026, 08:33 PM Monday

I never say anything to an AI that could positively peg me as a hater of Trump or capitalism, or give it any health secrets about myself, or anything that would reflect badly on me as an employee. I absolutely ASSUME it's a giant surveillance apparatus and that everything I say or ask could one day be used against me in some way.

I think it's foolish to make any other presumption about them, because, as I'm pointing out once again here, the biggest problem with AI is the lack of proper regulations laid down before it was widely rolled out. Those would've included, as you say, safeguards against them consuming non-public works in their training. Which YES, they could've done. Not necessarily 100% reliably (that much is true), but a hell of a lot more reliably than doing nothing of the sort. And their products may have been less full-featured than what the Sam Altmans of the world felt was "critical", but TOO FREAKING BAD.

highplainsdem

(62,823 posts)
10. The biggest problem with generative AI was training it on stolen intellectual property. The next biggest
Mon Apr 27, 2026, 10:12 PM Monday

was releasing badly flawed tech that couldn't be stopped from hallucinating and could make mistakes at any time in almost any way - and then shifting responsibility for the errors to users. The next biggest was hyping badly flawed technology. The industry is built on theft and fraud.

WarGamer

(18,814 posts)
7. This is Gemini responding to your claim re: stealing the World's IP
Mon Apr 27, 2026, 08:50 PM Monday
1. The "Learning vs. Copying" Distinction
The most common misconception is that AI is a giant database of images or text that it "remixes." It isn't.
The Argument: When an AI is trained, it doesn't "save" the images or text. It converts them into mathematical weights (numerical values representing patterns).
The Analogy: If you read a thousand mystery novels and learn that "the butler did it" is a common trope, you haven't stolen those books; you've learned the statistical probability of a plot point.
The "K.O." Line: "The model doesn't contain a single pixel or word of the original data. It contains the logic of how those things are formed. If learning from a public work is 'theft,' then every art student in a museum with a sketchbook is a shoplifter."

2. Transformative Use (The Legal Shield)
In the U.S., the Fair Use doctrine is the biggest hurdle for the "theft" argument.
The Argument: For something to be a copyright violation, it usually has to act as a market substitute (i.e., people buy the copy instead of the original).
The Fact: In recent 2025/2026 rulings (like Kadrey v. Meta), courts have found that training is "spectacularly transformative." The purpose of the data isn't to be "re-stated" but to be used as a "biological" input to create something entirely new.
The "K.O." Line: "Copyright protects the expression of an idea, not the facts of its existence. AI analyzes the data for functional patterns, not to reproduce the art. Use is only 'theft' if the output is a 1:1 clone, which is already illegal under existing laws."

3. The "Style is Not Copyrightable" Reality
Many people are angry because AI can mimic a specific artist's "vibe."
The Argument: Legally, you cannot copyright a style, a technique, or a genre. You can only copyright a specific, finished work.
The Reality: If a human paints "in the style of Van Gogh," we call it an "homage." If an AI does it, people call it "theft." This is a double standard.
The "K.O." Line: "If you could copyright a 'style,' then every Impressionist after Monet would owe his estate royalties. You’re mad at the efficiency of the tool, not the legality of the process."
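For what the "weights, not text" claim in section 1 actually means mechanically, here's a toy sketch: a bigram counter that keeps word-pair statistics from a corpus rather than the corpus itself. This is a deliberately simplified analogy, not how production LLMs work, and it doesn't settle whether large models can memorize passages (the reply below argues they can).

```python
from collections import Counter

# Toy corpus standing in for training data.
corpus = "the butler did it . the butler was innocent".split()

# "Training" here just tallies adjacent word pairs.
bigram_counts = Counter(zip(corpus, corpus[1:]))

# The resulting "model" is numbers attached to patterns,
# e.g. how often "the" is followed by "butler":
print(bigram_counts[("the", "butler")])   # 2
print(bigram_counts[("butler", "did")])   # 1
```

Once the corpus is discarded, only these counts remain; whether a vastly larger model with billions of weights stays on the "statistics" side of that line is exactly what the litigation is about.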

highplainsdem

(62,823 posts)
9. I don't think chatbot responses belong on a message board at all. It's an insult to the entire board,
Mon Apr 27, 2026, 09:58 PM Monday

especially with that stupid bot offering what are laughably described as "K.O. lines." Did you suggest that wording?

1. AI does save text, as was proven conclusively by research at Stanford that got plenty of media attention late last year, as large chunks of text and nearly complete books were extracted from various genAI models. Your chatbot not mentioning that is evidence of pro-AI company propaganda.

2. The court battle is far from over, with no consensus in the rulings. Again, your bot spat out pro-AI propaganda. And even if there was any consensus in court that what the AI companies did was not IP theft, the legal battle to overturn that wrong decision will continue because there's ample evidence from what those in the AI industry have said themselves that they were aware it was theft.

3. I've never suggested style is copyrightable. I don't know of any AI opponent who ever did.

highplainsdem

(62,823 posts)
12. How amusing - a chatbot suggesting it had knockout lines in those arguments. Or maybe that was
Tue Apr 28, 2026, 12:17 AM Tuesday

just another aspect of its sycophancy and it wanted you to think those would be knockout lines.

I was watching a video last night with an expert on guitar pedals, who's written books and has his own business, showing how unreliable ChatGPT was, how much it would get wrong, how it would hallucinate lengthy descriptions of nonexistent pedals if given a fake name of a pedal. He was wondering if there'd be any correct information left online for future generations if genAI bots keep running amok.

hunter

(40,808 posts)
2. I'm certain our government will bail out this industry when the boom goes bust...
Mon Apr 27, 2026, 03:00 PM Monday

... and put all these data centers to good use.
