
General Discussion


hunter

(40,638 posts)
Thu Mar 12, 2026, 12:25 AM Thursday

AI auto-complete may subtly shape views on social issues

Using AI to auto-complete written communications may be tempting. But large language models may also auto-complete thoughts, researchers report March 11 in Science Advances.

Few people realize that generative AI chatbots are pushing them to think a certain way, says information scientist Mor Naaman of Cornell University. “It’s the subtlest of manipulations.”

Such manipulation may not matter much when letting AI agents such as ChatGPT and Claude auto-complete a banal email. But when people use an AI’s auto-complete function to opine on weightier societal matters, such as whether or not standardized testing should be used in education, the death penalty should be illegal or felons should be allowed to vote — three issues explored in the study — then the model’s bias can have significant societal impact. Large swaths of people using the same biased model could sway an entire population’s position on a given policy or politician. To flip a single election’s outcome, “you only need 20,000 people in Pennsylvania,” Naaman says.

-- more --

https://www.sciencenews.org/article/ai-autocomplete-social-issues-views


The language we use is important. Even when the words "sound right," that doesn't mean they are true. It's bad enough when we let our own language do our thinking for us; it's worse when we let an AI do it, especially when that AI does not represent our own best interests.

This is an even greater issue with logographic written languages, where autocomplete is an intrinsic part of the electronic text-entry process.

It's always bad news when we let machines do our thinking for us. It's especially bad when someone with ulterior motives controls those machines.