The Real Reason Science Is Broken
AI productivity tools can’t fix bad incentives.
A study published last month in Nature analyzed 41 million research papers across the natural sciences and found something that should unsettle anyone who believes AI will revolutionize scientific discovery. Yes, scientists who adopt AI tools publish three times more papers and receive nearly five times more citations. Their careers accelerate. But the collective range of scientific topics under investigation shrinks by nearly 5 percent, and researchers’ engagement with one another’s work drops by 22 percent. The tools that turbocharge individual scientists appear to be narrowing science as a whole.
This finding doesn’t stand alone. A study published in December in Science examined over two million preprints and found that large language model use is associated with posting 36 to 60 percent more manuscripts. But for LLM-assisted papers, writing complexity is correlated with lower publication probability—the opposite of what’s historically been true. One interpretation: researchers are churning out work of questionable depth dressed up in polished prose. Meanwhile, studies of AI-assisted writing in other domains have documented a “homogenizing” effect. People using AI produce work that may be individually polished, but it is also more similar to the work of other authors. A recent study of over 2,000 college admissions essays found that each additional human-written essay contributed more new ideas to the collective pool than each additional AI-generated essay, and this gap widened as more essays were analyzed.
Together, these findings paint a troubling picture. AI is accelerating the markers of scientific production while potentially degrading both the quality and diversity of what gets produced. More papers, and faster, but fewer breakthroughs.
This paradox crystallizes something that has been building for years within science. AI leaders have promised that these tools will cure cancer, double the human lifespan, compress a century of biology into a decade. These claims have justified massive investment and positioned AI as the savior of science itself. The reality is turning out to be more complicated. AI isn’t accelerating science so much as optimizing scientists to thrive in an already-broken reward system.
I’m a neuroscientist at a biomedical research university, where I research AI and cognition and work with scientists across disciplines on how to communicate their findings. I’ve spent years inside the scientific enterprise and am deeply familiar with its dynamics. The short-term pressures are relentless. Researchers are responsible for raising their own funds through grants with success rates of around 10 percent. That creates enormous pressure to keep producing—to always have the next application in the pipeline, the next paper ready to publish. Institutions reinforce this by judging researchers on what’s easy to measure: publication counts, grant dollars, citation metrics. These are markers of production, not progress. Genuine scientific progress is hard to quantify on the timescales of tenure reviews. Production isn’t.
The optimized response is risk aversion. If you want to keep your position, you have one shot to prove your research program is worthwhile before tenure review arrives. You can’t afford to spend years chasing uncertain ideas. Under these pressures, scientists inevitably pursue safe, incremental projects that will reliably yield papers, even if they never significantly advance understanding. In a Pew survey of AAAS scientists, 69 percent said that a “focus on projects expected to yield quick results has too much influence on the direction of research.”
Now add AI to the mix. These tools excel at processing data and finding patterns in existing datasets. They’re exceptionally good at doing more of what’s already being done, faster. But science doesn’t progress primarily through optimized efficiency. As the Nature authors note, the history of major discoveries has been linked with new views on nature, not with optimized analysis of standing data.
The concentration of AI-assisted research on data-rich topics isn’t mysterious. AI tools need large datasets to function. The questions that lack abundant data—which may very well be some of the most important ones—are precisely the ones being left behind. Science under AI influence is beginning to look like the lamppost problem: searching where the light is brightest rather than where answers might actually be found.
Some scholars, as in a separate Nature article, have warned about “scientific monocultures,” where reliance on the same AI tools trained on the same data causes scientists’ questions and methods to converge. When AI is positioned as an objective collaborator that can overcome human bias, researchers may grant it unwarranted trust, believing they understand more than they do because the tools produce confident-seeming outputs from data the scientists never fully engaged with themselves.
None of this means AI tools are useless for science. They’re clearly useful to individual scientists. But zooming out, the bottleneck to scientific progress isn’t primarily technological but organizational. It’s about how we structure the scientific enterprise. In a candid interview, Mike Lauer, a former NIH official, laid out some startling facts about what he called a “fundamentally broken” system: scientists spend roughly 45 percent of their time on administrative requirements rather than doing science; grant applications have ballooned from four pages in the 1950s to over one hundred today; and, worst of all, the average age at which a scientist receives their first major independent grant is now 45. Think about that: someone can be trusted to perform brain surgery a full decade before they’re considered ready to run their own research program.
How did it get this way? In the biomedical sciences, one reason is that the NIH adopted a Depression-era funding model built around small, short-term project grants—a model one early critic warned would reduce science funding to “a dispensary of chicken feed.” This model isn’t just old; it’s categorically wrong for science. The short-term competitive proposal model treats researchers like vendors vying for a construction contract, requiring scientists to predict what they’ll discover over five years and then stick to the plan. But science can’t be planned like a construction project. Hypotheses fail, experiments surprise, and the most important discoveries often emerge from scientists following their curiosity to unexpected places.
These are unsexy problems of bureaucratic dysfunction, misaligned incentives, and institutional inertia. They will not be solved by giving scientists faster tools to navigate a fundamentally broken system. If anything, AI threatens to make things worse by accelerating production without touching the underlying reward structures, burying novel ideas in an even larger flood of incremental papers.
This is not an argument that AI can’t advance science. It already has, in fields from protein biology to nuclear fusion. But those breakthroughs involved AI systems designed to solve specific scientific problems. The broad adoption captured in the Nature study is something different: researchers using data-processing and language tools to do more of what they’re already doing, faster. And as the Science study suggests, when it comes to large language models in particular, “faster” may mean more manuscripts of lower quality dressed up in more polished prose. Science advances not only by solving well-defined problems but by generating new ones, and the institutional problems shaping how most scientists actually use AI are upstream of any technology.
AI companies, science funders, and policymakers seem to be treating AI as a magic accelerant—something to sprinkle into the scientific enterprise to make it go faster. But as one recent analysis put it, this is like adding lanes to a highway when the slowdown is actually caused by a tollbooth. The question isn’t how to build more lanes. It’s why the tollbooth is there in the first place.
Tim Requarth is director of graduate science writing and research assistant professor of neuroscience at the NYU Grossman School of Medicine, where he studies how artificial intelligence is changing the way scientists think, learn, and write. He writes “The Third Hemisphere,” a newsletter that explores AI’s effects on cognition from a neuroscientist’s perspective.