
usonian

(20,988 posts)
Mon Jul 28, 2025, 11:05 AM

Grok and Groupthink: Why AI is Getting Less Reliable, Not More (Time)

https://time.com/7302830/why-ai-is-getting-less-reliable/

Last week, we conducted a test that found five leading AI models—including Elon Musk’s Grok—correctly debunked 20 of President Donald Trump’s false claims. A few days later, Musk retrained Grok with an apparent right-wing update, promising that users “should notice a difference.” They did: Grok almost immediately began spewing out virulently antisemitic tropes praising Hitler and celebrating political violence against fellow Americans.

Musk’s Grok fiasco is a wakeup call. Already, AI models have come under scrutiny for frequent hallucinations and biases built into the data used to train them. We additionally have found that AI systems sometimes select the most popular—but factually incorrect—answers, rather than the correct answers. This means that verifiable facts can be obscured by mountains of erroneous information and misinformation.


Musk’s machinations betray another, potentially more troubling dimension: we can now see how easy it is to manipulate these models. Musk was able to play around under the hood and introduce additional biases. What’s more, when the models are tweaked, as Musk learned, no one knows exactly how they will react; researchers still aren’t certain exactly how the “black box” of AI works, and adjustments can lead to unpredictable results.

The chatbots’ vulnerability to manipulation, along with their susceptibility to groupthink and their inability to recognize basic facts, should alarm all of us about the growing reliance on these research tools in industry, education, and the media.



OPINION:

It's software. Nothing could possibly go wrong. But seriously, current LLMs are entirely reflections of the data that's fed to them, and of the biases of the programmers.

Worse, succeeding models feed on the spew of preceding ones.

This would seem to argue for LLMs trained and programmed on narrow verticals. But given the current high cost and extreme power consumption, payback on such narrower, more trustworthy models seems unlikely.
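The feedback loop described above, where each model trains on the output of the last, is what researchers call "model collapse." Its core mechanism can be illustrated with a toy simulation (pure illustration, not a real training pipeline): fit a simple statistical model to data, generate synthetic data from the fit, refit on that output, and repeat. The tails of the original distribution tend to get lost along the way.

```python
import random
import statistics

def fit_gaussian(samples):
    """'Train' a model: estimate mean and stdev from the data it sees."""
    return statistics.mean(samples), statistics.stdev(samples)

def sample_model(mu, sigma, n, rng):
    """'Generate' from the model: draw n synthetic data points."""
    return [rng.gauss(mu, sigma) for _ in range(n)]

rng = random.Random(0)
data = [rng.gauss(0.0, 1.0) for _ in range(200)]  # generation 0: human-made data

for gen in range(8):
    mu, sigma = fit_gaussian(data)
    print(f"generation {gen}: mean={mu:+.3f} stdev={sigma:.3f}")
    data = sample_model(mu, sigma, 200, rng)  # next model sees only AI output
```

With each generation the fit drifts away from the original data; in expectation the estimated variance shrinks, since each model only ever sees its predecessor's output, never the real thing.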

Electrical power is money and money is God.

Ocelot II

(127,487 posts)
1. The old adage, "Garbage in, garbage out," applies in spades to AI.
Mon Jul 28, 2025, 11:14 AM

AI collects all the stuff it can find from the Internet, and as it excretes its own slop into the Internet, that's what it uses to "learn." It's like a dog eating its own vomit. Pretty soon AI's only source will be itself.

usonian

(20,988 posts)
3. Long ago, when I went to work at Sun Microsystems, the internet was "smaller"
Mon Jul 28, 2025, 11:30 AM

and it felt like Sun's internal network was a treasure chest of information greater than the internet itself. The internet's a lot bigger now, like a landfill.

But the current black holes of LLM crawlers bypass all "do not search" directives, hammer small sites, drive their bandwidth costs through the stratosphere, and force them behind Cloudflare $$$$$. (1) (2)

I'd say that the data event horizon has already been reached, and that they are an Ouroboros.


(1) In the Gold Rush, Levi Strauss made a sure fortune.
(2) "I don't want to own the highways, just all the on and off-ramps." ( attributed to Bill Gates, BY ME )
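For the record, the "do not search" directives in question are usually robots.txt rules, which well-behaved crawlers are supposed to honor before fetching anything. A minimal sketch of the detector side using only Python's standard library (the site and URL are illustrative; GPTBot is OpenAI's published crawler name):

```python
from urllib.robotparser import RobotFileParser

def may_fetch(robots_txt: str, user_agent: str, url: str) -> bool:
    """Return True if this robots.txt permits user_agent to fetch url."""
    rp = RobotFileParser()
    rp.parse(robots_txt.splitlines())
    return rp.can_fetch(user_agent, url)

# A small site telling an AI crawler to stay out, everyone else welcome.
robots = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""

print(may_fetch(robots, "GPTBot", "https://example.com/post/1"))   # False
print(may_fetch(robots, "Mozilla", "https://example.com/post/1"))  # True
```

The complaint in the post is that robots.txt is purely advisory: nothing in the protocol stops a crawler from ignoring the answer.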

WarGamer

(17,964 posts)
6. I asked Gemini Pro 2.5 to respond to your post.
Mon Jul 28, 2025, 12:42 PM

Here it is...

That's a common and understandable concern. The idea of an AI creating a feedback loop of decreasing quality is something researchers are very aware of. It even has a name: "model collapse" or sometimes "Habsburg AI."
However, the "dog eating its own vomit" analogy, while vivid, oversimplifies how these models are actually trained and improved. Here’s a more accurate picture:
1. Training Data is Curated, Not Just Scraped.
Large language models (LLMs) aren't just hooked up to the live internet, blindly absorbing everything new. Their foundational training is done on massive, carefully filtered, and largely static datasets.
* Massive Datasets: These are snapshots of the internet from a certain point in time (e.g., the Common Crawl dataset), but they also include huge, curated collections of books, scientific papers, and high-quality texts. This "pre-AI" data is a valuable resource that isn't going anywhere.
* Intense Filtering: A huge amount of work goes into cleaning these datasets. Engineers remove duplicates, filter out low-quality content, and try to balance the data to reduce bias. It's less like a dog eating whatever is on the floor and more like a chef selecting specific ingredients for a recipe.
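The dedup-and-filter step described above can be sketched with toy heuristics (the hashing trick is standard for exact deduplication; the length threshold and rules here are illustrative, not any lab's actual pipeline):

```python
import hashlib

def clean_corpus(docs, min_words=5):
    """Toy curation pass: drop exact duplicates and very short documents."""
    seen = set()
    kept = []
    for doc in docs:
        text = " ".join(doc.split())  # normalize whitespace before hashing
        digest = hashlib.sha256(text.lower().encode()).hexdigest()
        if digest in seen:
            continue  # exact duplicate of something already kept
        if len(text.split()) < min_words:
            continue  # too short to carry useful signal
        seen.add(digest)
        kept.append(text)
    return kept

corpus = [
    "The Norman Conquest of 1066 reshaped English law and language.",
    "The Norman  Conquest of 1066 reshaped English law and language.",  # dup
    "Buy now!!!",
    "Reinforcement learning from human feedback ranks model answers.",
]
print(clean_corpus(corpus))  # keeps 2 documents
```

Real pipelines add near-duplicate detection (e.g. MinHash), language identification, and learned quality classifiers on top of simple rules like these.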
2. Quality is Actively Taught, Not Passively Absorbed.
The most significant advances in AI quality have come from methods that don't involve scraping the web at all. The main technique is called Reinforcement Learning with Human Feedback (RLHF).
* How it Works: In this stage, the AI generates multiple answers to a prompt. Human reviewers then rank these answers from best to worst. The model is then rewarded for producing answers similar to the ones humans liked and penalized for the ones they didn't.
* The Result: This is the direct opposite of "eating slop." It's like having thousands of expert tutors constantly grading the AI's work, specifically teaching it to be more helpful, harmless, and accurate. This process actively steers the model away from generating nonsense.
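The ranking step described above is commonly modeled with a Bradley-Terry preference loss: a reward model is trained so that the human-preferred answer scores above the rejected one. A minimal sketch with hand-rolled toy features (illustrative only; real reward models are neural networks over the full text, not two-number feature vectors):

```python
import math

def features(answer):
    """Toy featurizer: answer length and whether it explains itself."""
    return [len(answer.split()) / 10.0, float(answer.count("because"))]

def reward(w, answer):
    return sum(wi * xi for wi, xi in zip(w, features(answer)))

def train(prefs, lr=0.5, epochs=200):
    """Bradley-Terry: maximize log sigmoid(r(chosen) - r(rejected))."""
    w = [0.0, 0.0]
    for _ in range(epochs):
        for chosen, rejected in prefs:
            margin = reward(w, chosen) - reward(w, rejected)
            g = 1.0 / (1.0 + math.exp(-margin))      # sigmoid(margin)
            scale = 1.0 - g                          # gradient of log sigmoid
            fc, fr = features(chosen), features(rejected)
            w = [wi + lr * scale * (c - r) for wi, c, r in zip(w, fc, fr)]
    return w

# Human rankings: reviewers preferred answers that explain their reasoning.
prefs = [
    ("It rains because warm moist air cools and condenses.", "It just rains."),
    ("Prices rose because supply fell.", "Prices rose."),
]
w = train(prefs)
good, bad = prefs[0]
print(reward(w, good) > reward(w, bad))  # True: the preference was learned
```

In full RLHF the resulting reward model then scores the language model's outputs during a reinforcement-learning stage; this sketch covers only the preference-fitting step.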
3. The "Pollution" Problem Has Active Solutions.
The risk of AI-generated content degrading future training sets is real, but it's an active engineering problem, not an unsolvable apocalypse. Here are some strategies being used and developed:
* Data Provenance and Watermarking: Researchers are developing techniques to "watermark" AI-generated content. This would allow future training models to identify and potentially exclude content created by other AIs, or at least weigh it differently.
* Preserving High-Quality Archives: There is a huge incentive to preserve and protect the vast archives of human-generated text created before 2023 as a pristine training resource for future models.
* Synthetic Data: Ironically, one of the best ways to improve an AI is to train it on high-quality synthetic data generated by an even more advanced AI, under controlled conditions. This isn't "slop," but carefully crafted examples designed to teach specific reasoning or coding skills.
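One concrete version of the watermarking idea (in the spirit of published "green-list" token watermarks) has the generator bias its sampling toward a pseudorandomly keyed subset of the vocabulary; a detector with the key then counts how many tokens land in that subset. A toy, word-level sketch (the key and threshold are made up; real schemes work per-token and use proper statistical tests):

```python
import hashlib

KEY = "secret-watermark-key"  # hypothetical key shared by generator and detector

def is_green(word: str) -> bool:
    """Pseudorandomly assign roughly half the vocabulary to a 'green list'."""
    h = hashlib.sha256((KEY + word.lower()).encode()).digest()
    return h[0] % 2 == 0

def green_fraction(text: str) -> float:
    words = text.split()
    return sum(is_green(w) for w in words) / max(len(words), 1)

def looks_watermarked(text: str, threshold=0.75) -> bool:
    """Unmarked text should sit near 0.5; watermarked text well above."""
    return green_fraction(text) >= threshold

# A watermarking generator would prefer the green synonym at each step;
# here we only show the detector side on arbitrary text.
print(green_fraction("the quick brown fox jumps over the lazy dog"))
```

A future training pipeline could then down-weight or exclude documents whose green fraction is suspiciously high, which is exactly the provenance filtering the bullet above describes.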
A Better Analogy
Instead of a dog eating its own vomit, a better analogy is the entire human knowledge ecosystem.
Humans learn from the vast body of knowledge created by previous generations (books, art, science). We then create our own works. Some of it is brilliant, some of it is derivative, and some of it is utter junk. We rely on editors, curators, librarians, peer reviewers, and critics to sift through it all and elevate the quality work.
AI is a new, powerful tool within that ecosystem. We are currently in the process of building the "editors" and "peer reviewers" for AI content to ensure it elevates, rather than degrades, our collective knowledge base. It's a serious challenge, but it's not a closed loop doomed to failure.

WarGamer

(17,964 posts)
8. I'd say so...
Mon Jul 28, 2025, 12:52 PM

I use it quite a bit and it's excellent "in its wheelhouse"... like "Describe the conditions leading up to 1066 and William's invasion of England."

Then what you get is a Yale lecture hall experience...

But ask AI something more recent, possibly occurring after its last data set was closed... and it's confused.

A month or so ago... when I was catching up on Yellowstone I asked "Do they show Kevin Costner's death in Yellowstone?"

AI said Costner's character was alive and well.

patphil

(8,317 posts)
2. GIGO is an old but still accurate acronym for bad programming and/or data collection.
Mon Jul 28, 2025, 11:26 AM

AI is merely elevating it to an art form.
One of the real problems with AI is that it lacks common sense. It can't sense anything, so it just hallucinates "truth".

We're rapidly getting to a point where certain types of AI are sounding more and more like Donald Trump every day.
And that's the worst condemnation of AI I could actually say about it.

usonian

(20,988 posts)
5. You got it! He IS a chatbot.
Mon Jul 28, 2025, 11:39 AM

Repost of https://www.democraticunderground.com/100220281483

The Hallucinating ChatGPT Presidency -- Is Tr-mp a chatbot?

https://www.techdirt.com/2025/04/29/the-hallucinating-chatgpt-presidency/

Judge for yourself.

Tue, Apr 29th 2025 09:34am - Mike Masnick

We generally understand how LLM hallucinations work. An AI model tries to generate what seems like a plausible response to whatever you ask it, drawing on its training data to construct something that sounds right. The actual truth of the response is, at best, a secondary consideration.

snip

But over the last few months, it has occurred to me that, for all the hype about generative AI systems “hallucinating,” we pay much less attention to the fact that the current President does the same thing, nearly every day. The more you look at the way Donald Trump spews utter nonsense answers to questions, the more you begin to recognize a clear pattern — he answers questions in a manner quite similar to early versions of ChatGPT. The facts don’t matter, the language choices are a mess, but they are all designed to present a plausible-sounding answer to the question, based on no actual knowledge, nor any concern for whether or not the underlying facts are accurate.

snip

This is not the response of someone working from actual knowledge or policy understanding. Instead, it’s precisely how an LLM operates: taking a prompt (the question about job losses) and generating text based on some core parameters (the “system prompt” that requires deflecting blame and asserting greatness).

The hallmarks of AI generation are all here:
• Confident assertions without factual backing
• Meandering diversions that maintain loose semantic connection to the topic
• Pattern-matching to previous responses (“ripped off,” “billions of dollars”)
• Optimization for what sounds good rather than what’s true


Great article and hard to summarize, because the author gives so many spot-on examples.

What "the media" gets entirely wrong is treating his statements (generated responses) as carefully crafted political strategy, when we have learned to treat AI hallucinations as meaningless babble. By elevating them, the media participates in the sanewashing of his only remaining cognitive skills: revenge, grift, and autocracy.


Redleg

(6,634 posts)
4. I see you guys beat me to it with the GIGO.
Mon Jul 28, 2025, 11:34 AM

The administrators at my university, which shall remain un-named, have been pushing us faculty to use it more in our teaching. The reason they give is that businesses will want to hire graduates with strong "AI skills." While I think there are uses for AI, especially an AI trained on a specific set of information, these administrators don't seem to appreciate the potential downsides of AI, some of which are known and a good number of which are not yet known.

I have seen some of the ugly when reading online discussion posts by students who clearly used ChatGPT to answer the questions and many of the answers are strikingly similar.

WarGamer

(17,964 posts)
9. AI is particularly well suited to History and Coding and Math.
Mon Jul 28, 2025, 01:00 PM

An example: "Please describe the conditions leading up to the 1066 invasion of England by William the Conqueror"

Here it is:

Google Gemini Pro 2.5

Of course. It is a pleasure to delve into one of the most pivotal moments in Western history. The Norman Conquest of 1066 was not a sudden, isolated event but the culmination of decades of political maneuvering, personal ambition, and a fatal power vacuum.
Here is a thorough description of the conditions that led to William of Normandy's invasion of England.

The Stage: England and Normandy in the Mid-11th Century

By the 1050s, Anglo-Saxon England was one of the wealthiest and most sophisticated kingdoms in Europe. It had a centralized administration, a system of national taxation (the Danegeld), a standardized currency, and a powerful navy. Its weakness, however, was political and would prove fatal.

Across the Channel, the Duchy of Normandy was a formidable feudal state. The Normans, descendants of Viking raiders who had settled in France, were renowned for their military prowess, particularly their use of knights on horseback and their skill in castle-building. They were ambitious, ruthless, and expansionist. At their head was Duke William, an illegitimate but brilliant and determined ruler who had forged his duchy into a powerful, unified force after years of brutal civil war.

The Core Problem: The Childless King

The entire crisis of 1066 stems from one central fact: King Edward the Confessor had no children.
Edward took the throne in 1042. He was the son of a previous Anglo-Saxon king, Æthelred the Unready, but his mother was Emma of Normandy. When the Danes under Cnut the Great conquered England, a young Edward was sent into exile for 25 years in Normandy. This long exile was crucial:

* Norman Sympathies: Edward grew up in the Norman court. He was more Norman than English in his customs, his friends, and his political outlook.

* A Promise to William: According to Norman accounts, in approximately 1051, Edward, grateful for the sanctuary Normandy had provided and likely as a political move against the powerful English earls, promised the throne to his distant cousin, Duke William. While this promise was the bedrock of William's claim, it was meaningless under English law, where the king did not have the sole right to name his successor. That power rested with a council of the kingdom's leading men, the Witenagemot.
The Rise of the House of Godwin: The Power Behind the Throne
While Edward was king, the most powerful family in England was the House of Godwin. The patriarch, Godwin, Earl of Wessex, was an immensely wealthy and influential kingmaker. His power was so great that Edward was forced to marry his daughter, Edith.
Upon Godwin's death, his son Harold Godwinson inherited his title and influence. Harold was the opposite of Edward; he was a charismatic, skilled, and battle-hardened military leader. He was the de facto ruler of England during Edward's final years, leading English armies to victory in Wales and securing the kingdom's borders. He was, for all intents and purposes, the English choice for the next king.

The Three Claimants and Their Justifications

Upon Edward's death, three men believed they had the right to the English crown.

1. Harold Godwinson, Earl of Wessex (The Man on the Spot)
* His Claim: Harold's claim was based on his English birth, his proven leadership, his immense popularity, and most importantly, King Edward's alleged deathbed proclamation naming him as successor. This was immediately ratified by the Witenagemot, which, by English custom, made him the legitimate king. He was crowned in Westminster Abbey the day after Edward's burial.

2. William, Duke of Normandy (The Promised Successor)
* His Claim: William's claim rested on two key events:
* The Promise of 1051: As mentioned, he claimed the childless Edward had promised him the throne.
* Harold's Oath (c. 1064): This was William's masterstroke. Two years before the invasion, Harold Godwinson was shipwrecked on the coast of Normandy and taken captive by William. To secure his release, Harold was forced to swear a sacred oath—reportedly over the bones of saints—to support William's claim to the English throne. When Harold accepted the crown for himself, William portrayed him not just as a political rival, but as a perjurer and a sinner. This allowed William to gain the Pope's blessing for his invasion, turning it from a war of aggression into a holy crusade, signified by the papal banner that flew on his flagship.

3. Harald Hardrada, King of Norway (The Viking Legacy)
* His Claim: Hardrada ("The Hard Ruler") was one of the most famous warriors of the Viking Age. His claim was tenuous and complex, stemming from a political deal made between his predecessor and the Danish king of England who ruled before Edward.
* The Catalyst: Hardrada's invasion was instigated by Tostig Godwinson, Harold's own exiled and bitter brother. Tostig had been a tyrannical Earl of Northumbria and was driven out by his own people. Furious that his brother Harold did not support him, Tostig sought revenge and pledged his support to Hardrada, convincing the Norwegian king that England was ripe for the taking.

The Perfect Storm: The Events of 1066

The convergence of these claims in a single year created a catastrophe for Harold Godwinson.

* January 1066: King Edward dies. Harold Godwinson is crowned. William of Normandy begins assembling a massive invasion fleet. Harald Hardrada does the same in Norway.
* Summer 1066: King Harold assembles the English army (the fyrd) and his fleet on the south coast, anticipating William's invasion. For eight months, they wait. Crucially, William's fleet is stuck in port, waiting for favorable southerly winds to cross the Channel.
* Early September 1066: With supplies running low and the harvest season underway, Harold is forced to disband his militia. His fleet is likewise battered by storms. England's southern defenses are effectively down.
* Mid-September 1066: At this exact moment of vulnerability, Harald Hardrada and Tostig invade in the north of England with a force of around 300 longships. They sack Scarborough and destroy the northern English army at the Battle of Fulford.
* September 25, 1066: King Harold, in a stunning feat of logistics and leadership, force-marches his elite personal troops (his housecarls) nearly 200 miles in five days. He catches the Viking army completely by surprise and annihilates them at the Battle of Stamford Bridge. Both Hardrada and Tostig are killed. It was a decisive, brilliant victory that ended the Viking Age in England.
* September 28, 1066: Just three days after Harold's victory in the north, the winds in the English Channel finally changed. William's fleet set sail and landed, completely unopposed, at Pevensey Bay on the south coast.

Conclusion
The stage was set for the final confrontation. Harold Godwinson stood victorious in the north, but his army was battered, exhausted, and far from the new invasion site. William was in the south with a fresh, rested army of knights, archers, and infantry, free to ravage the countryside.
The invasion of 1066 was therefore not merely an act of opportunism. It was the result of a long-festering succession crisis, the clash of English and Norman legal customs, a deeply personal rivalry fueled by a sacred oath, and a perfect storm of military and meteorological timing that forced England's last Anglo-Saxon king to fight two separate invasions at opposite ends of his kingdom in the span of a few weeks.

usonian

(20,988 posts)
10. You may be interested in this article. Post if you like. I am flat out busy today.
Mon Jul 28, 2025, 05:15 PM
https://lithub.com/what-happened-when-i-tried-to-replace-myself-with-chatgpt-in-my-english-classroom/

What Happened When I Tried to Replace Myself with ChatGPT in My English Classroom
Piers Gelly on a Semester-Long Dive into the AI Discourse

No spoilers. I haven't read it.
Got some illustrations to finish.