Growth Is All You Need
How development economics forgot the most important thing.
There used to be an academic discipline centered on a straightforward question: what helps poor countries get richer? It was called development economics, and it was the intellectual engine behind sprawling government bureaucracies: USAID, Britain’s Department for International Development (DfID), the World Bank, and many others.
Across the rich world, left and right understood these entities to be in the national interest. Then, starting in the early 2000s, that discipline began to morph into something different—something so narrowly rooted in progressive pieties that, when the political winds shifted, the government programs built on its insights could be gutted without anyone much caring.
What happened?
The transformation of development economics can be traced back to 2003, when Abhijit Banerjee, Esther Duflo, and Sendhil Mullainathan founded the Abdul Latif Jameel Poverty Action Lab (J-PAL) at MIT.
J-PAL pioneered the use of randomized controlled trials (RCTs), a methodology explicitly cribbed from the world of clinical drug testing, to evaluate anti-poverty interventions. If you remember reading some eye-popping results about the impact of deworming pills in Kenya or microcredit in Bangladesh, you’ve already come within the orbit of J-PAL’s influence.
The RCT boom soon grew into a cottage industry. Bed nets, textbooks, fertilizer subsidies—for idealistic young students around the turn of the 21st century, doing research into things like these felt virtuous. Just as importantly, the path to prestige publication was straightforward: each intervention was narrow enough to randomize, and each answer crisp enough to get into a great journal.
The J-PAL approach swallowed development economics virtually whole: by 2015, J-PAL affiliates had run over 1,000 randomized evaluations across 90 countries. In 2015, Banerjee, Duflo, and seven co-authors published “A Multifaceted Program Causes Lasting Progress for the Very Poor” in Science, reporting results from six RCTs across Ethiopia, Ghana, Honduras, India, Pakistan, and Peru. Their results showed that bundling asset transfers with coaching, savings support, and health services could durably improve consumption and employment rates for the extreme poor.
Then, in 2019, the Nobel Committee awarded its economics prize to Banerjee, Duflo, and their longtime collaborator Michael Kremer “for their experimental approach to alleviating global poverty.” A generation of graduate students got the message. If you wanted to publish in the top journals, if you wanted grants, tenure, attention, you pitched an RCT.
This isn’t at all what development economics used to be like. An older tradition in the profession still hangs on, exemplified by, say, Michele Peruzzi and Alessio Terzi’s 2021 study on the determinants of economic growth across countries. It’s an excellent, methodologically sophisticated paper… and it picked up 22 citations.
Figuring out why some countries suddenly start growing and others don’t is not, apparently, a question worthy of the profession’s best minds.
The problem with RCTs is that they only work for specific types of research question: narrowly tailored anti-poverty interventions that can be compared against each other. RCTs do not work for broader questions related to economic growth. You can randomize who gets deworming pills. You cannot randomize whether Indonesia adopts a better trade policy.
Listen, nobody seriously disputes that deworming pills work and mosquito nets improve health for the world’s poorest people. The question is whether narrow, RCT-friendly interventions like these actually help poor people get a foothold in the middle class.
That’s the very fraught territory that Lant Pritchett, the famously prickly Harvard development economist, recently tackled in a gloriously satisfying Substack post, soon to be a new paper. Pritchett has plainly had it up to here with RCTs. Which is why he’s chosen to take a rhetorical bulldozer to the house of J-PAL, pointing out something the profession ought to have known, but somehow forgot.
Pritchett begins by constructing a composite index of what he calls the “basics” of human material wellbeing—not income, but physical indicators: access to sanitation, child mortality rates, malnutrition, years of schooling, environmental quality. He then plots these basics against GDP per capita, and finds a correlation of around 0.9. That’s… the technical term is “insane.” Nothing researchers care about gets a correlation coefficient that high. 0.9 is basically an identity.
As in, Pritchett thinks GDP per capita is far and away the best indicator of the wellbeing of a country’s population.
Pritchett is a contrarian’s contrarian—early in his career, he made everyone mad by demonstrating with mathematical precision that rising educational attainment across countries had no measurable association with economic growth—and that its estimated effect on productivity was actually negative. The feel-good formulas that launched a thousand grant proposals are a red cape to this guy. He will charge.
Faced with this latest contrarian take, you might think “yeah ok, so Lant is just cherry-picking his index to flatter his thesis.” But no. The heart of his paper is the opposite of that. Pritchett systematically searches across all plausible combinations of indicators and weights to find the composite measure of basics that has the weakest relationship with GDP per capita. And even that worst-case measure still shows a powerful association.
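The logic of that adversarial search can be illustrated with a toy sketch. Everything below is my own illustration—synthetic data, made-up noise levels, and a simple grid search—not Pritchett's actual indicators or procedure. The point it demonstrates: when every "basics" indicator individually tracks income, even the weighting chosen specifically to minimize the correlation with GDP per capita still yields a strong relationship.

```python
# Toy sketch of the "worst-case weighting" idea, on SYNTHETIC data
# (indicators, noise levels, and the grid search are illustrative
# assumptions, not Pritchett's actual data or method).
import math
import random

random.seed(0)

def corr(xs, ys):
    """Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Synthetic "countries": log GDP per capita, plus three noisy "basics"
# indicators (think sanitation, schooling, child survival) that each
# track income imperfectly.
n = 150
log_gdp = [random.uniform(6, 11) for _ in range(n)]
indicators = [
    [g + random.gauss(0, sigma) for g in log_gdp]
    for sigma in (0.3, 0.5, 0.7)
]

def composite(weights):
    """Weighted average of the indicators for each country."""
    return [sum(w * ind[i] for w, ind in zip(weights, indicators))
            for i in range(n)]

# Adversarial grid search: among all weightings summing to 1 (step 0.1),
# find the composite index LEAST correlated with log GDP per capita.
grid = [(a / 10, b / 10, (10 - a - b) / 10)
        for a in range(11) for b in range(11 - a)]
worst = min(grid, key=lambda ws: abs(corr(composite(ws), log_gdp)))

print("least-flattering weights:", worst)
print("correlation with log GDP:", round(corr(composite(worst), log_gdp), 2))
```

In this setup even the least-flattering weighting stays highly correlated with income, which is the shape of Pritchett's argument: if every plausible ingredient of a "basics" index tracks growth, no re-weighting can make the relationship disappear.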
Pritchett’s claim isn’t just that there exists some measure of wellbeing that tracks growth. He wants to make the much, much stronger claim that there is no plausible measure of basic wellbeing that growth doesn’t deliver.
The implications are uncomfortable for anyone who’s built a career around the idea that growth is necessary but not sufficient, that it needs to be “pro-poor,” that targeted interventions (the sort you can test with RCTs) are “equally important.” Pritchett shows that in poor countries like Ethiopia and Pakistan, virtually nobody has the basics in the way a middle-class person in a rich country does. Not the poor. Not even the middle class.
When nearly everyone lacks adequate sanitation and decent schooling, the distinction between “pro-poor” and “pro-rich” growth shades into meaninglessness. In poor countries, even pro-rich growth is effectively also pro-poor because, at low income levels, any growth has an outsized impact on the basics.
To be fair to J-PAL, its original sin was methodological, not political. Banerjee and Duflo aren’t progressive activists; they’re researchers. The question they asked was valid: which specific interventions work and which don’t?
The problem is that that question, asked a thousand times over, crowded out the rest of the field. If your entire discipline is organized around evaluating small-bore interventions, you will, without anyone deciding to, stop training people who think about national economic policy.
What filled the vacuum was something J-PAL’s founders probably didn’t intend and might not even recognize. As development economics lost interest in economic growth, the institutions that funded development work drifted toward a different operating logic altogether. Aid became less about helping countries build the conditions for broad-based prosperity and more about demonstrating measurable impact on identifiable beneficiaries. That opened the door to the wholesale capture of aid budgets by explicitly political projects dressed up in the language of equity, inclusion, and social justice.
J-PAL wasn’t responsible for any of this, of course—but the long retreat from growth did create the institutional space for drift. A field organized around the question “how do countries get rich?” has a natural center of gravity that resists ideological capture: growth is a bipartisan idea, and the policy levers that drive it (trade, regulation, macroeconomic stability) don’t code neatly as left or right.
But by the time USAID was funding DEI trainings in the developing world, the long march towards the micro that J-PAL initiated had spun entirely out of control. Careful empiricism gave way to modish activist-tinged development. It never seems to have occurred to the people backing the $1.5 million USAID grant for workplace training focused on the LGBTQI+ community in Serbia that they were giving Fox News a loaded pistol. The drag shows in Ecuador, the Peruvian LGBTQ+ comic book, and a dozen other head-turning State Department initiatives might merely have been a rounding error for the federal budget in terms of total cost—but they gave ideological opponents of development aid a generous attack surface.
If development economists have nothing useful to say about what helps economies grow, development bureaucracies aren’t long for this world. International development aid used to enjoy cross-party support—it was a foreign policy tool as much as a humanitarian one.
But as the field rebranded itself around targeting, redistribution, and the feel-good language of “reaching the most vulnerable,” it coded itself as a progressive cause. Aid became legible as charity at best, and woke nonsense at worst.
The backlash has been savage. In 2020, Prime Minister Boris Johnson folded Britain’s Department for International Development—once one of the world’s most respected bilateral aid agencies—into the Foreign Office, and slashed its budget in the process. Five years later, 60% of development adviser posts in Britain remain unfilled. The institutional expertise is simply gone.
Trump went further: in early 2025, he froze virtually all U.S. foreign aid and moved to dissolve USAID outright, canceling 83% of its programs. The largest development agency on Earth is being shuttered, and the political resistance has been thin.
An aid establishment that had spent the past few decades asking how to help countries grow—a question with obvious implications for trade, investment, security, and jobs—might have had allies on both sides of the aisle. The one we got didn’t.
You can read Pritchett’s essay as a dirge for a field that has committed seppuku. An entire academic discipline that had been about structural transformation turned into an internationalized branch of social work.
To be sure, Pritchett’s gloriously cathartic rant won’t change this. The academic prestige economy is too settled for that. But the data is the data, and what it says is simple: economic growth is not merely necessary for human wellbeing. In any plausible accounting, it is human wellbeing. People get the basics when countries get rich. No exceptions.
Everything else is a footnote—an interesting footnote, perhaps, but a footnote nonetheless.
Quico Toro is a contributing editor at Persuasion, the founder of Caracas Chronicles, Director of Climate Repair at the Anthropocene Institute, and writes the Substack One Percent Brighter. He lives in Tokyo.
This post is kind of ignoring the reason that there was a turn to RCTs in the first place. It is notoriously difficult to get clean identification from cross-country studies, because you really can't control for most unobservables. As a result, you just end up with a bunch of correlations, often taken from the same limited data pool, which produce spurious results. The paper that you cite as "methodologically sophisticated" is a case in point. It is not methodologically sophisticated; it is basically just correlations with some structural breaks mixed in (as an aside, I don't understand how a journalist with limited stats knowledge can even be making a claim like this). And that isn't because of some issue with the authors; it's because clean identification just isn't possible given the current tools that we have.
I'm all for answering big questions; I wish economics could do this. But can it? Macroeconomic models are pretty much devoid of predictive power and are so overfit that it's unclear to me what their utility is at all. You can get a DSGE model to say just about anything. Macro in general seems to go through a cycle of crises where the entire field is questioned. The Lucas critique led to the current reign of DSGE models, and we can all see how well they did during the financial crisis. The whole field of macro is only limping along because there isn't anything clear to replace it.
The methodological question is not some sort of sideshow. It is the most important thing in the whole debate. What exactly is the point of answering big questions if all you produce is gibberish?