Millions of AI-generated texts are overwhelming courts, city councils, and businesses, and the problem is not the technology but the unmanageable volume

Published On: March 9, 2026 at 10:15 AM
Researcher reviewing large volumes of AI-generated documents on a computer screen as institutions struggle with the surge of automated writing.

What happens when writing becomes almost effortless? Not just for a few people, but for millions. Courts, academic journals, city councils, and even social media platforms are now grappling with a tidal wave of documents produced by generative artificial intelligence. And the people on the receiving end simply cannot keep up.

In 2023, the respected science fiction magazine Clarkesworld temporarily stopped accepting submissions after being inundated with AI-generated stories. Editors realized that many authors had pasted the magazine’s detailed guidelines into an AI system and submitted the results. They were not alone. Other literary and academic publications have reported similar surges.

This is no small glitch. It is a structural shift. For decades, the difficulty of writing and research acted as a natural filter. Drafting a legal brief, a scientific article, or even a letter to the editor required time and cognitive effort.

Now, AI tools can generate thousands of polished documents in minutes. The result is a bottleneck. Judges, reviewers, professors, and public officials face more material than they can reasonably process.

And it is happening everywhere. Newspapers report floods of AI-written opinion letters. Academic conferences receive research papers generated partly or entirely by machines. Legislators are overwhelmed by public comments that may or may not reflect real constituents. Courts, especially in cases involving self-represented litigants, are seeing a rise in AI-drafted filings.

Employers use AI to screen resumes that were themselves polished by AI. The energy world faces the same arms race behind data-hungry systems like the ones discussed in “First AI wind turbine triple power yield”: more models mean more servers, more power, and more pressure to automate.

When AI helps science and when it harms

On one hand, AI has clear benefits. In scientific research, it already plays a major role in reviewing literature, analyzing data, and even writing code. For researchers whose first language is not English, AI writing tools can level the playing field. In the past, well-funded teams could hire professional editors. Now, that assistance is widely accessible.

The same acceleration is visible in other fields, including materials science, where AI helped researchers move faster in work like “Not of this world — Over 2 million are discovered on U.S. soil and counting”.

That sounds like progress. And to a large extent, it is. A scientist who uses AI carefully, with transparency, to improve clarity is not necessarily undermining science.

The trouble begins when AI fabricates references, invents data, or produces meaningless phrases that slip into peer-reviewed journals. Reviewers increasingly rely on AI tools to detect or evaluate submissions that may have been written by AI in the first place. One machine checking another.

It is efficient. But also fragile. In fiction publishing, some editors may choose to accept AI-assisted work under clear guidelines. Others may try to ban it entirely. Yet distinguishing human from machine writing is becoming harder by the month.

Publications that want only human-created content may end up limiting submissions to trusted authors. That changes who gets a voice. And readers may never know.

Democracy, job markets, and the power imbalance

The issue extends beyond academia. Using AI to polish a resume or draft a cover letter is not fundamentally different from hiring a career coach. Wealthy applicants have long paid for professional editing services.

In that sense, AI can democratize access. The bigger question is scale, especially as some researchers and companies chase the kind of imminent breakthrough hyped in stories like “Just 12 months away from AI singularity?”.

But there is a line. If AI tools are used to fabricate credentials or to cheat during interviews, that crosses into fraud.

The same dynamic plays out in democratic systems. Citizens have the right to communicate with their representatives. If AI helps someone express complex policy concerns more clearly, that can strengthen participation.

Yet large interest groups can also use AI to generate thousands of fake grassroots messages, creating the illusion of widespread public support. This tactic, sometimes called “astroturfing,” existed before AI. Now it is cheaper and faster.

What separates the good from the bad is not the technology itself. It is power.

AI can distribute cognitive tools more broadly. Or it can amplify the reach of those who already dominate the conversation. The difference depends on how institutions respond. In practical terms, that pressure shows up in infrastructure too, including the energy and water footprint of big digital projects, as covered in “50 GW of hydrogen to power the Internet”.

Institutions under pressure

The core problem is volume. More submissions. More applications. More comments. More filings. The humans reviewing them do not multiply at the same speed.

Some institutions are deploying defensive AI systems to filter, sort, and prioritize incoming material. Courts use software to manage case loads. Social media platforms rely on automated moderation. Academic reviewers experiment with AI-based screening tools.

These solutions may help. But they also deepen the technological loop. AI generates the content. AI filters the content. Humans supervise, if they can. In the background, the same AI boom keeps driving demand for energy, which is why tech and climate keep colliding in pieces like “Google finds energy for millennia”.

There is no realistic way to “turn off” this technology. Advanced models can run on personal computers. Ethical guidelines and professional standards will matter for those acting in good faith. They will not stop determined fraud.

So the challenge is balance. How do we preserve the benefits of wider access to writing and analytical tools while limiting deception and overload?

At the end of the day, this is about institutional resilience. Courts must remain functional. Scientific publishing must reward genuine discovery. Democratic systems must reflect real voices.

The flood is here. The question is whether our systems can adapt without losing their foundations. The original analysis was published on Schneier on Security.



ECONEWS

The editorial team at ECOticias.com (El Periódico Verde) is made up of journalists specializing in environmental issues: nature and biodiversity, renewable energy, CO₂ emissions, climate change, sustainability, waste management and recycling, organic food, and healthy lifestyles.
