AI-Generated Junk Science Is Flooding Academic Journals — and Peer Review Can't Keep Up

Peer Review Breaking Under AI Paper Flood
A new peer-reviewed study in the journal Science, by researchers at UC Berkeley Haas and Cornell University, examined more than 2 million papers uploaded to arXiv, bioRxiv, and the Social Science Research Network between January 2018 and June 2024.
AI tools are helping researchers publish more papers, faster. The problem lies in what happens next.
UC Berkeley Haas associate professor Mathijs De Vaan — one of the study's co-authors — described the effect bluntly.
"The use of AI by scientists is stressing the system. It's creating a giant bottleneck and making it really hard for evaluators to keep up," De Vaan told UC Berkeley Haas News. "This could affect decisions about what science we should support and fund."
Funding and policy decisions are increasingly built on research that may contain significant errors. The study's co-authors include Cornell's Paul Ginsparg — the founder of arXiv — and data scientists Keigo Kusumegi, Xinyu Yang, and Yian Yin.
The Paper Mill Operation
The Verge's Joshua Dzieza reported on a case that illustrates the scale of the problem.
University of Zurich postdoctoral researcher Peter Degen investigated why a 2017 epidemiology paper by his supervisor had suddenly accumulated hundreds of citations, arriving at a rate of several per week.
Degen traced the citations to papers all pulling from the same public dataset — the Global Burden of Disease study compiled by the Institute for Health Metrics and Evaluation at the University of Washington. The citing papers used it to mass-produce predictive studies on stroke risk, testicular cancer, elderly falls in China, and dozens of other topics.
GitHub links in the papers led to a company in Guangzhou, China, that markets tutorials on Bilibili — China's version of YouTube — showing users how to produce "publishable research" in under two hours using AI writing tools.
Detection Getting Harder
AI-generated papers are becoming harder to detect, not easier. Researchers analyzing a subset of AI-assisted studies about headaches found them "rife with errors and misrepresentations" — yet the errors were subtle enough to slip past standard peer review.
"It's a huge burden on the peer-review system, which is already at the limit," Degen said. "There's just too many papers being published and there's not enough peer reviewers, and if the LLMs make it so much easier to mass produce papers, then this will reach a breaking point."
The peer-review system was already strained before ChatGPT launched in late 2022. It was built for a world where producing a research paper took months.
The Funding Problem
Federal science agencies — the NIH, the NSF, and others — spend tens of billions of taxpayer dollars annually funding research. Peer review is the gatekeeper that's supposed to ensure that money goes to legitimate science.
If peer review is breaking down, funding decisions get made based on contaminated literature. Bad science gets cited. Bad citations shape policy. Policy shapes lives.
Journals, funding agencies, and universities need hard rules, not voluntary guidelines. The Berkeley/Cornell study used sophisticated detection algorithms to identify AI-assisted papers, but detection tools are already in an arms race with generation tools — and generation is winning.
Without action, the scientific record gets worse every single day.