
Show Notes
We’ve covered the US Agency for International Development, or USAID, pretty consistently on Statecraft, since our first interview on PEPFAR, the flagship anti-AIDS program, in 2023. When DOGE came to USAID, I was extremely critical of the cuts to lifesaving aid, and the abrupt, pointlessly harmful ways in which they were enacted. In March, I wrote, “The DOGE team has axed the most effective and efficient programs at USAID, and forced out the chief economist, who was brought in to oversee a more aggressive push toward efficiency.”
Today, we’re talking to that forced-out chief economist, Dean Karlan. Dean spent two and a half years at the helm of the first-ever Office of the Chief Economist at USAID. In that role, he tried to help USAID get better value from its foreign aid spending. His office shifted $1.7 billion of spending towards programs with stronger evidence of effectiveness. He explains how he achieved this, building a start-up within a massive bureaucracy. I should note that Dean is one of the titans of development economics, leading some of the most important initiatives in the field (I won’t list them, but see here for details), and I think there’s a plausible case he deserves a Nobel.
Throughout this conversation, Dean makes a point much better than I could: the status quo at USAID needed a lot of improvement. The same political mechanisms that get foreign aid funded by Congress also created major vulnerabilities for foreign aid, vulnerabilities that DOGE seized on. Dean believes foreign aid is hugely valuable, a good thing for us to spend our time, money, and resources on. But there's a lot USAID could do differently to make its marginal dollar spent more efficient.
DOGE could have made USAID much more accountable and efficient by listening to people like Dean. Reformers of foreign aid should think carefully about Dean’s criticisms of USAID, and about his ideas for how to make foreign aid not just resilient but politically popular in the long term.
We discuss
* What does the Chief Economist do?
* Why does 170% of USAID funding come already earmarked by Congress?
* Why is evaluating program effectiveness institutionally difficult?
* Why don’t we just do cash transfers for everything?
* Why institutions like USAID have trouble prioritizing
* Should USAID get rid of gender/environment/fairness in procurement rules?
* Did it rely too much on a small group of contractors?
* What’s changed in development economics over the last 20 years?
* Should USAID spend more on governance and less on other forms of aid?
* How DOGE killed USAID — and how to bring it back better
* Is depoliticizing foreign aid even possible?
* Did USAID build “soft power” for the United States?
This is a long conversation: you can jump to a specific section with the index above. If you just want to hear about Dean’s experience with DOGE, you can click here or go to the 45-minute mark in the audio. And if you want my abbreviated summary of the conversation, see these two Twitter threads. But I think the full conversation is enlightening, especially if you want to understand the American foreign aid system. Thanks to Harry Fletcher-Wood for his judicious edits.
Our past coverage of USAID
For a printable transcript of this interview, click here:
Dean, I'm curious about the limits of your authority. What can the Chief Economist of USAID do? What can they make people do?
There had never been an Office of the Chief Economist before. In a sense, I was running a startup, within a 13,000-employee agency that had fairly baked-in, decentralized processes for doing things.
Congress would say, "This is how much to spend on this sector and these countries." What actually got funded was decided by missions in the individual countries. It was exciting to have that purview across the world and across many areas — not just economic development, but also education, social protection, agriculture. But the reality is, we were running a consulting unit within USAID, trying to advise others on how to use evidence more effectively in order to maximize impact for every dollar spent.
We were able to make some institutional changes, focused on basically a two-pronged strategy. One, what are the institutional enablers — the rules and the processes for how things get done — that are changeable? And two, let's get our hands dirty working with the budget holders who say, "I would love to use the evidence that's out there, please help guide us to be more effective with what we're doing."
There were a lot of willing and eager people within USAID. We did not lack support to make that happen. We never would've achieved anything, had there not been an eager workforce who heard our mission and knocked on our door to say, "Please come help us do that."
What do you mean when you say USAID has decentralized processes for doing things?
Earmarks and directives come down from Congress. [Some are] about sector: $1 billion to spend on primary school education to improve children's learning outcomes, for instance. The President’s Emergency Plan for AIDS Relief (PEPFAR) [See our interview with former PEPFAR lead Mark Dybul] is one of the biggest earmarks, dedicated to specific diseases. Then there are directives that come down about how to allocate across countries.
Those are two conversations I had very little engagement in, because some of that comes from Congress. It’s a very complicated, intertwined set of constraints that are then adhered to and allocated to the different countries. Then what ends up happening is — this is the decentralized part — you might be a Foreign Service Officer (FSO) working in a country, your focus is education, and you’re given a budget for that year from the earmark for education and told, "Go spend $80 million on a new award in education." You’re working to figure out, “How should we spend that?” There might be some technical support from headquarters, but ultimately, you're responsible for making those decisions. Part of our role was to help guide those FSOs towards programs that had more evidence of effectiveness.
Could you talk more about these earmarks? There's a popular perception that USAID decides what it wants to fund. But these big categories of humanitarian aid, or health, or governance, are all decided in Congress. Often it's specific congressmen or congresswomen who really want particular pet projects to be funded.
That's right. And the number that I heard is that something in the ballpark of 150-170% of USAID funds were earmarked. That might sound horrible, but it's not.
How is that possible?
Congress double-dips, in a sense: it makes two different demands — you must spend money on both of these things. If the same dollar can satisfy both, that was completely legitimate. There was no hiding of that fact. It's all public record, and it all comes from congressional acts that create these earmarks. There's nothing hidden underneath the hood.
Will you give me examples of double earmarking in practice? What kinds of goals could you satisfy with the same dollar?
There’s an earmark for Development Innovation Ventures (DIV) to do research, and an earmark for education. If DIV is going to fund an evaluation of something in the education space, there's a possibility that that can satisfy a dual earmark requirement. That's the kind of thing that would happen. One is an earmark for a process: “Do really careful, rigorous evaluations of interventions, so that we learn more about what works and what doesn't." And another is, "Here's money that has to be spent on education." That would be an example of a double dip on an earmark.
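[To make the double-counting concrete, here is a minimal sketch — with hypothetical dollar figures, not actual USAID numbers — of how a single grant that satisfies two earmarks pushes the earmarked share of a budget past 100%:]

```python
# Hypothetical illustration of "double-dipped" earmarks: a single grant
# can count toward two earmarks at once, so the sum of earmark credits
# can exceed the total budget even though no dollar is spent twice.

budget = 100  # total appropriated dollars (hypothetical)

# Each grant lists every earmark it satisfies.
grants = [
    {"amount": 40, "earmarks": ["education"]},
    {"amount": 30, "earmarks": ["research", "education"]},  # satisfies both
    {"amount": 30, "earmarks": ["health"]},
]

# Credit each earmark with the full amount of every grant that satisfies it.
credits = {}
for g in grants:
    for e in g["earmarks"]:
        credits[e] = credits.get(e, 0) + g["amount"]

total_credited = sum(credits.values())      # 130: earmark credits
earmarked_share = total_credited / budget   # 1.3 — "130% earmarked"
actual_spent = sum(g["amount"] for g in grants)  # 100: still just the budget
```

Only 100 hypothetical dollars leave the door, but the $30 research-and-education grant is credited against both earmarks, so the earmark totals add to 130% of the budget — which is how figures like "150-170% earmarked" arise legitimately.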
And within those categories, the job of Chief Economist was to help USAID optimize the funding? If you're spending $2 billion on education, “Let's be as effective with that money as possible.”
That's exactly right. We had two teams, Evidence Use and Evidence Generation. It was exactly what it sounds like. If there was an earmark for $1 billion on education, the Evidence Use team worked to do systematic analysis: “What is the best evidence out there for what works for education for primary school learning outcomes?” Then, “How can we map that evidence to the kinds of things that USAID funds? What are the kinds of questions that need to be figured out?”
It’s not a cookie-cutter answer. A systematic review doesn’t say, "Here's the intervention. Now just roll it out everywhere." We had to work with the missions — with people who know the local area — to understand, “What is the local context? How do you appropriately adapt this program in a procurement and contextualize it to that country, so that you can hire people to use that evidence?”
Our Evidence Generation team was trying to identify knowledge gaps where the agency could lead in producing more knowledge about what works and what doesn't. If there was something innovative that USAID was funding, we were huge advocates of, "Great, let's contribute to the global public good of knowledge, so that we can learn more in the future about what to do, and so others can learn from us. So let's do good, careful evaluations."
Being able to demonstrate what good came of an intervention also serves the purpose of accountability. But I've never been a fan of doing really rigorous evaluations just for the sake of accountability. It could discourage innovation and risk-taking, because if you fail, you'd be seen as a failure, rather than as a win for learning that an idea people thought was reasonable didn't turn out to work. It also probably leads to overspending on research, rather than doing programs. If you're doing something just for accountability purposes, you're better off with audits. "Did you actually deliver the program that you said you would deliver, or not?"
Awards over $100 million did go through the front office of USAID for approval. We added a process — it was actually a revamped old process — where they stopped off in my office. We were able to provide guidance on the cost-effectiveness of proposals that would then be factored into the decision on whether to proceed. When I was first trying to understand Project 2025, because we saw that as a blueprint for what changes to expect, one of the changes they proposed was actually that process. I remember thinking to myself, "We just did that. Hopefully this change that they had in mind when they wrote that was what we actually put in place." But I thought of it as a healthy process that had an impact, not just on that one award, but also in helping set an example for smaller awards of, “This is how to be more evidence-based in what you're doing.”
[Further reading: Here’s a position paper Karlan’s office at USAID put out in 2024 on how USAID should evaluate cost-effectiveness.]
You’ve also argued that USAID should take into account more research that has already been done on global development and humanitarian aid. Your ideal wouldn't be for USAID to do really rigorous research on every single thing it does. You can get a lot better just by incorporating things that other people have learned.
That's absolutely right. I can say this as a researcher: to no one’s surprise, it's more bureaucratic to work with the government as a research funder than it is to work with foundations and nimble NGOs. If I want to evaluate a particular program, and you give me a choice of who the funder should be, the only reason I would choose government is if it had a faster on-ramp to policy by being inside.
The people who are setting policy should not be putting more weight on evidence that they paid for. In fact, one of the slogans that I often used at USAID is, "Evidence doesn't care who pays for it." We shouldn't be, as an agency, putting more weight on the things that we evaluated vs. things that others evaluated without us, and that we can learn from, mimic, replicate, and scale.
We — and the we here is everyone, researchers and policymakers — put too much weight on individual studies, in a horrible way. The first to publish on something gets more accolades than the second, third and fourth. That's not healthy when it comes to policy. If we put too much weight on our own evidence, we end up putting too much weight on individual studies we happen to do. That's not healthy either.
That was one of the big pieces of culture change that we tried to push internally at USAID. We had this one slide that we used repeatedly that showed the plethora of evidence out there in the world compared to 20 years ago. A lot more studies are now usable. You can aggregate that evidence and form much better policies.
You had political support to innovate that not everybody going into government has. On the other hand, USAID is a big, bureaucratic entity. There are all kinds of cross-pressures against being super-effective per dollar spent. In doing culture change, what kinds of roadblocks did you run into internally?
We had a lot of support and political cover, in the sense that the political appointees — I was not a political appointee — were huge fans. But political appointees under Republicans have also been huge fans of what we were doing. Disagreements are more about what to do and what causes to choose. But the basic idea of being effective with your dollars to push your policy agenda is something that cuts across both sides.
In the days leading up to the inauguration, we were expecting to continue the work we were doing. Being more cost-effective was something some of the people who were coming in were huge advocates for. They did make progress under Trump I in pushing USAID in that direction. We saw ourselves as able to help further that goal. Obviously, that's not the way it played out, but there isn't really anything political about being more cost-effective.
We’ll come back to that, but I do want to talk about the 2.5 years you spent in the Biden administration. USAID is full of people with all kinds of incentives, including some folks who were fully on board and supportive. What kinds of challenges did you have in trying to change the culture to be more focused on evidence and effectiveness?
There was a fairly large contingent of people who welcomed us, were eager, understood the space that we were coming from and the things that we wanted, and greeted us with open arms. There's no way we would've accomplished what we accomplished without that. We kept a running count within the Office of the Chief Economist: we moved about $1.7 billion towards programs that were more effective or had strong evaluations. That would've been $0 had there not been some individuals who were already eager and just didn't have the path for doing it.
People can see economists as people who are going to come in negative and a bit dismal — the dismal science, so to speak. I got into economics for a positive reason. We tried as often as possible to show that with an economic lens, we can help people achieve their goals better, period. We would say repeatedly to people, "We're not here to actually make the difficult choices: to say whether health, education, or food security is the better use of money. We're here to accept your goal and help you achieve more of it for your dollar spent.” We always sent a very disarming message: we were there simply to help people achieve their goals and to illuminate the trade-offs that naturally exist.
Within USAID, you have a consensus-type organization. When you have 10 people sitting around a room trying to decide how to spend money towards a common goal, if you don't crystallize the trade-offs between the various ideas being put forward, you end up seeing a consensus built: that everybody gets a piece of the pie. Our way of trying to shift the culture is to take those moments and say, "Wait a second. All 10 might be good ideas relative to doing nothing, but they can't all be good relative to each other. We all share a common goal, so let's be clear about the trade-offs between these different programs. Let's identify the ones that are actually getting you the most bang for your buck."
Can you give me an example of what those trade-offs might be in a given sector?
Sure. Let's take social protection, what we would call the Humanitarian Nexus development space. It might be working in a refugee area — not dealing with the immediate crisis, but one, two, five, or ten years later — trying to help bring the refugees into a more stable environment and into economic activities. Sometimes, you would see some cash or food provided to households. The programs would all have the common goal of helping to build a sustainable livelihood for households, so that they can be more integrated into the local economy. There might be programs providing water, financial instruments like savings vehicles, and supporting vocational education. It'd be a myriad of things, all on this focused goal of income-generating activity for the households to make them more stable in the long run.
Often, those kinds of programs doing 10 different things did not actually lead to an observable impact over five years. But a more focused approach has gone through evaluations: cash transfers. That's a good example where “reducing” doesn't have to mean cutting your programs down to just one thing; rather, there is a default option of starting with a base case: “What does a cash transfer generate?”
And to clarify for people who don't follow development economics, the cash transfer is just, “What if we gave people money?”
Sometimes it is just that. Sometimes it's thinking strategically, “Maybe we should do it as a lump sum so that it goes into investments. Maybe we should do it with a planning exercise to make those investments.” Let's just call it “cash-plus,” or “cash-with-a-little-plus,” then variations of that nature. There's a different model, maybe call it, “cash-plus-plus,” called the graduation model. That has gone through about 30 randomized trials, showing pretty striking impacts on long-run income-generating activity for households. At its core is a cash transfer, usually along with some training about income-generating activity — ideally one that is producing and exporting in some way, even a local export to the capital — and access to some form of savings. In some cases, that's an informal savings group, with a community that comes and saves together. In some cases, it's mobile money that's the core. It's a much simpler program, and it's easier to do it at scale. It has generated considerable, measured, repeatedly positive impacts, but not always. There's a lot more that needs to be learned about how to do it more effectively.
[Further reading: Here’s another position paper from Karlan’s team at USAID on benchmarking against cash transfers.]
One of your recurring refrains is, “If we're not sure that these other ideas have an impact, let's benchmark: would a cash-transfer model likely give us more bang for our buck than this panoply of other programs that we're trying to run?”
The idea of having a benchmark is a great approach in general. You should always be able to beat X. X might be different in different contexts. In a lot of cases, cash is the right benchmark.
Go back to education. What's your benchmark for improving learning outcomes for a primary school? Cash transfer is not the right benchmark. The evidence that cash transfers will single-handedly move the needle on learning outcomes is not that strong. On the other hand, a couple of different programs — one called Teaching at the Right Level, another called structured pedagogy — have proven repeatedly to generate very strong impacts at a fairly modest cost. In education, those should be the benchmark. If you want to innovate, great, innovate. But your goal is to beat those. If you can beat them consistently, you become the benchmark. That's a great process for the long run. It’s very much part of our thinking about what the future of foreign aid should look like: to be structured around that benchmark.
Let's go back to those roundtables you described, where you're trying to figure out what the intervention should be for a group of refugees in a foreign country. What were the responses when you’d say, “Look, if we're all pulling in the same direction, we have to toss out the three worst ideas”?
One of the challenges is the psychology of ethics. There’s probably a word for this, but one of the objections we would often get was about the scale of a program for an individual. Someone would argue, "But this won't work unless you do this one extra thing." That extra thing might be providing water to the household, along with a cash transfer for income-generating activity, financial support, and bank accounts. Another objection would be that, "You also have to provide consumption and food up to a certain level."
These are things that individually might be good, relative to nothing, or maybe even relative to other water approaches or cash transfers. But if you’re focused on whether to satisfy the household's food needs, or provide half of what's needed — if all you're thinking about is the trade-off between full and half — you immediately jump to this idea that, "No, we have to go full. That's what's needed to help this household." But if you go to half, you can help more people. There's an actual trade-off: 10,000 people will receive nothing because you're giving more to the people in your program.
The same is true for nutritional supplements. Should you provide 2,000 calories a day, or 1,000 calories a day to more people? It's a very difficult conversation on the psychology of ethics. There's this idea that people in a program are sacrosanct, and you must do everything you can for them. But that ignores all the people who are not being reached at all.
I would find myself in conversations where that's exactly the way I would try to put it. I would say, "Okay, wait, we have the 2,000,000 people that are eligible for this program in this context. Our program is only going to reach 250,000. That's the reality. Now, let's talk about how many people we’re willing to leave untouched and unhelped whatsoever." That was, at least to me, the right way to frame this question. Do you go very intense for fewer people or broader support for more people?
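[The intensity-versus-coverage trade-off above reduces to simple arithmetic. This sketch uses the eligibility and reach numbers from Karlan's example, with hypothetical per-person costs:]

```python
# Hypothetical arithmetic behind "full support for fewer people vs.
# half support for more people" under a fixed budget.

eligible = 2_000_000          # people eligible in this context (from the example)
cost_full = 100               # hypothetical cost per person, full package
cost_half = cost_full // 2    # hypothetical cost per person, half package

# Budget sized so the full package reaches 250,000 people, as in the example.
budget = 250_000 * cost_full

reached_full = budget // cost_full   # people served at full intensity
reached_half = budget // cost_half   # people served at half intensity

unreached_full = eligible - reached_full   # left with nothing under "full"
unreached_half = eligible - reached_half   # left with nothing under "half"
```

With these (made-up) costs, halving the package doubles reach from 250,000 to 500,000 people, shrinking the number left entirely unhelped from 1,750,000 to 1,500,000 — the trade-off the roundtable framing is meant to surface.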
Did that help these roundtables reach consensus, or at least have a better sense of what things are trading off against each other?
I definitely saw movement for some. I wouldn't say it was uniform, and these are difficult conversations. But there was a lot of appetite for this recognition that, as big as USAID was, it was still small relative to the problems being approached. There were a lot of people in any given crisis who were being left unhelped. The minute you’re able to help people focus on those big numbers, as daunting as they are, I would see more openness to looking at the evidence to figure out how to do the most good with the resources we have. We must recognize these inherent trade-offs, whether we like it or not.
Back in 2023, you talked to Dylan Matthews at Vox — it's a great interview — about how it’s hard to push people to measure cost-effectiveness, when it means adding another step to a big, complicated bureaucratic process of getting aid out the door. You said,
"There are also bandwidth issues. There's a lot of competing demands. Some of these demands relate to important issues on gender, environment, fairness in the procurement process. These add steps to the process that need to be adhered to. What you end up with is a lot of overworked people. And then you're saying, ‘Here's one more thing to do.’”
Looking back, what do you think of those demands on, say, fairness in the procurement process?
Given that we're going to be facing a new environment, there probably are some steps in the process that — hopefully, when things are put back in place in some form — someone can be thinking more carefully about. It's easier to put in a cleaner process that avoids some of these hiccups when you start with a blank slate.
Having said that, there are also going to be fewer people doling out less money. There's definitely a challenge that we're going to be facing as a country: to push out money in an effective way with many fewer people for oversight. I don't think it would be accurate to say we achieved this goal yet, but my goal was to make it so that adding cost-effectiveness was actually a negative-cost addition to the process. [We wanted] to do it in a way that successfully recognized that it wasn't a cookie-cutter solution from up top for every country. But [our goal was that] the work to contextualize in a country actually simplified the process for whoever's putting together the procurement docs and deciding what to put in them. I stand by that belief that if it's done well, we can make this a negative-cost process change.
I just want to push a little bit. Would you be supportive of a USAID procurement and contracting process that stripped out a bunch of these requirements about gender, environment, or fairness in contracting? Would that make USAID a more effective institution?
Some of those types of things did serve an important purpose for some areas and not others. The tricky thing is, how do you set up a process to decide when to do it, when not? There's definitely cases where you would see an environmental review of something that really had absolutely nothing to do with the environment. It was just a cog in the process, but you have to have a process for deciding the process. I don't know enough about the legislation that was put in place on each of these to say, “Was there a better way of deciding when to do them, when not to do them?” That is not something that I was involved in in a direct way. "Let's think about redoing how we introduce gender in our procurement process" was never put on the table.
On gender, there's a fair amount of evidence in different contexts that says the way of dealing with a gender inequity is not to just take the same old program and say, "We're now going to do this for women." You need to understand something more about the local context. If all you do is take programs and say, "Add a gender component," you end up with a lot of false attribution, and you don't end up being effective at the very thing that the person [leading the program] cares to do.
In that Vox interview, your host says, "USAID relies heavily on a small number of well-connected contractors to deliver most aid, while other groups are often deterred from even applying by the process’s complexity." He goes on to say that the use of rigorous evaluation methods like randomized controlled trials is the exception, not the norm.
On Statecraft, we talked to Kyle Newkirk, who ran USAID procurement in Afghanistan in the late 2000s, about the small set of well-connected contractors that took most of the contracts in Afghanistan. Often, there was very little oversight from USAID, either because it was hard to get out to those locations in a war-torn environment, or because the system of accountability wasn't built there.
Did you talk to people about lessons learned from USAID operating in Afghanistan?
No. I mean, only to the following extent: The lesson learned there, as I understand it, wasn't so much about the choice on what intervention to fund, it was procurement: the local politics and engagement with the governments or lack thereof. And dealing with the challenge of doing work in a context like that, where there's more risk of fraud and issues of that nature.
Our emphasis was about the design of programs to say, “What are you actually going to try to fund?” Dealing with whether there's fraud in the execution would fall more under the Inspector General and other units. That's not an area that we engaged in when we would do evaluation.
This actually gets to a key difference between impact evaluations and accountability. It's one of the areas where we see a lot of loosey-goosey language in the media reporting and Twitter. My office focused on impact evaluation. What changed in the world because of this intervention, that wouldn’t otherwise have changed? By “change in the world,” we are making a causal statement. That's setting up things like randomized controlled trials to find out, “What was the impact of this program?” It does provide some accountability, but it really should be done to look forward, in order to know, “Does this help achieve the goals we have in mind?” If so, let's learn that, and replicate it, scale it, do it again.
If you're going to deliver books to schools, medicine to health clinics, or cash to people, and you’re concerned about fraud, then you need to audit that process and see, “Did the books get to the schools, the medicine to the people, the cash to the people?” You don't need to ask, "Did the medicine solve the disease?" There's been studies already. There's a reason that medicine was being prescribed. Once it's proven to be an effective drug, you don't run randomized trials for decades to learn what you already know. If it's the prescribed drug, you just prescribe the drug, and do accountability exercises to make sure that the drugs are getting into the right hands and there isn't theft or corruption along the way.
I think it's a very intuitive thing. There's a confusion that often takes place in social science, in economic or education interventions. They somehow forget that once we know that a certain program generates a certain positive impact, we no longer need to track continuously to find out what happens. Instead, we just need to do accountability to make sure that the program is being delivered as it was designed, tested, and shown to work.
There are all these criticisms — from the waste, fraud, and corruption perspective — of USAID working with a couple of big contractors. USAID works largely through big development organizations like Chemonics. Would USAID dollars be more effective if the agency worked through a larger base of contractors?
I don't think we know. There's probably a few different operating models that can deliver the same basic intervention. We need to focus on, ”What actually are we doing on the ground? What is it that we want the recipients of the program to receive, hear, or do?” and then think backwards from there: "Who's the right implementer for this?" If there's an implementer who is much more expensive for delivering the same product, let's find someone who's more cost-effective.
It’s helpful to break cost-effective programming into two things: the intervention itself and what benefits it accrues, and the cost for delivering that. Sometimes the improvement is not about the intervention, it's about the delivery model. Maybe that’s what you're saying: “These players were too few, too large, and they had a grab on the market, so that they were able to charge too much money to deliver something that others were equally able to do at lower cost." If that's the case, that says, "We should reform our procurement process,” because the reason you would see that happen is they were really good at complying with requirements that came at USAID from Congress. You had an overworked workforce [within USAID] that had to comply with all these requirements. If you had a bid between two groups, one of which repeatedly delivered on the paperwork to get a good performance evaluation, and a new group that doesn't have that track record, who are you going to choose? That's how we ended up where we are.
My understanding of the history is that it comes from a push from Republicans in the ‘80s, from [Senator] Jesse Helms, to outsource USAID efforts to contractors. So this is not a left-leaning thing. I wouldn't say it is right-leaning either. It was just a decision made decades ago. You combine that with the bureaucratic requirements of working with USAID, and you end up with a few firms and nonprofits skilled at dealing with it.
It's my impression that at various points in American history, different partisans have called for insourcing or for outsourcing. But I think you're right that the NGO cluster around USAID springs up out of a Republican push in the eighties.
We talked to John Kamensky recently, who was on Al Gore's predecessor to DOGE in the ‘90s.
I listened to this, yeah.
I'm glad to hear it! I’m thinking of it because they also pushed to cut the workforce in the mid-90s and outsource federal functions.
Earlier, you mentioned a slide that showed what we've learned in the field of development economics over the past 20 years. Will you narrate that slide for me?
Let me do two slides for you. The slide that I was picturing was a count of randomized controlled trials in development that shows fairly exponential growth. The movement started in the mid-to-late 1990s, but really took off in the 2000s. Even just in the past 10 years, it's seen a considerable increase. There are about 4,000 to 5,000 randomized controlled trials evaluating various programs of the kind USAID funds.
That doesn't tell you the substance of what was learned. Here's an example of substance, which is cash transfers: probably the most studied intervention out there. We have a meta-analysis that counted 115 studies. That's where you start having a preponderance of evidence to be able to say something concrete. There's some variation: you get different results in different places; targeting and ways of doing it vary. A good systematic analysis can help tease out what we can say, not just about the effect of cash, but also how to do it and what to expect, depending on how it's done. Fifteen years ago, when we saw the first few come out, you just had, "Oh, that's interesting. But it's a couple of studies, how do you form policy around that?” With 115, we can say so much more.
What else have we learned about development that USAID operators in the year 2000 would not have been able to act upon?
Think about the development process in two steps. One is choosing good interventions; the other is implementing them well. The study of implementation is historically underdone. The challenge we face — this is an area I was hoping USAID could make inroads on — is that studying a new intervention might be high-reward from an academic perspective, but it’s a lot less interesting to an academic to do the much more granular work of saying, "That was an interesting program that created these groups [of aid recipients]; now let's do some further knock-on research to find out whether those groups should be made of four, six, or ten people." It's going to have a lower reward for the researcher, but it’s incredibly important.
It's equivalent to the color of the envelope in direct marketing. You might run tests — if this were old-style direct marketing — as to whether the envelope should be blue or red. You might find that blue works better. Great, but that's not interesting to an academic. But if you run 50 of these, on a myriad of topics about how to implement better, you end up with a collection of knowledge that is moving the needle on how to achieve more impact per dollar.
That collection is not just important for policy: it also helps us learn more about the development process and the bottlenecks for implementing good programs. As we’re seeing more digital platforms and data being used, [refining implementation] is more possible than it was 20 years ago, when most of the research was at the intervention level: does this intervention work? That's an exciting transition. It's also a path to seeing how foreign aid can help in individual contexts, [as we] work with local governments to integrate evidence into their operations and be more efficient with their own resources.
There's an argument I’ve seen a lot recently: we under-invest in governance relative to other foreign aid goals. If we care about economic growth and humanitarian outcomes, we should spend a lot more on supporting local governance. What do you make of that claim?
I agree with it actually, but there's a big difference between recognizing the problem and seeing what the tool is to address it. It's one thing to say, “Politics matters, institutions matter.” There's lots of evidence to support that, including the recent Nobel Prize. It’s another beast to say, “This particular intervention will improve institutions and governance.”
The challenge is, “What do we do about this? What is working to improve this? What is resilient to the political process?” The minute you get into those kinds of questions, it's the other end of the spectrum from a cash transfer. A cash transfer has a kind of universality: Not to say you're going to get the same impact everywhere, but it's a bit easier to think about the design of a program. You have fewer parameters to decide. When you think about efforts to improve governance, you need bespoke thinking in every single place.
As you point out, it's something of a meme to say “institutions matter” and to leave it at that, but the devil is in all of those details.
In my younger years — I feel old saying that — I used to do a lot of work on financial inclusion, and financial literacy was always my go-to example. On a household level, it's really easy to show a correlation: people who are more financially literate make better financial decisions and have more wealth, etc. It's much harder to say, “How do you move the needle on financial literacy in a way that actually helps people make better decisions, absorb shocks better, build investment better, save better?” It’s easy to show that the correlation is there. It's much harder to say this program, here, will actually move the needle. That same exact problem is much more complicated when thinking about governance and institutions.
Let's talk about USAID as it stands today. You left USAID when it became clear to you that a lot of the work you were doing was not of interest to the people now running it. How did the agency end up so disconnected from a political base of support? There are still plenty of people who support USAID and would like it to be reinstated, but it was at least vulnerable enough to be tipped over by DOGE in a matter of weeks.
How did that happen?
I don't know that I would agree with the premise. I'm not sure that public support for foreign aid actually changed; I'd be curious to see that. I think aid has always been misunderstood. There are public opinion polls that show people thought 25% of the US budget was spent on foreign aid. When asked, "What do you think it should be?" people said 10%. The right answer is about 0.6%. You could say, fine, people are bad at statistics, but those numbers are pretty dauntingly off. I don't know that that's changed. I heard numbers like that years ago.
I think there was a vulnerability to an effort that doesn't create a visible impact to people's lives in America, the way that Social Security, Medicare, and roads do. Foreign aid just doesn't have that luxury. I think it's always been vulnerable. It has always had some bipartisan support, because of the understanding of the bigger picture and the soft power that's gained from it. And the recognition that we are a nation built on the idea of generosity and being good to others. That was always there, but it required Congress to step in and say, "Let's go spend this money on foreign aid." I don't think that changed. What changed was that you ended up with an administration that just did not share those values.
There's this issue in foreign aid: Congress picks its priorities, but those priorities are not a ranked list of what Congress cares about. It's the combination of different interests and pressures in Congress that generates the list of things USAID is going to fund.
You could say doing it that way is necessary to build buy-in from a bunch of different political interests for the work of foreign aid. On the other hand, maybe the emergent list from that process is not the things that are most important to fund. And clearly, that congressional buy-in wasn't enough to protect USAID from DOGE or from other political pressures.
How should people who care about foreign aid reason about building a version of USAID that's more effective and less vulnerable at the same time?
Fair question. Look, I have thoughts, but by no means do I think of myself as the most knowledgeable person to say, "Here's the answer and the way forward." One reality is that even if Congress did object, it didn't have a mechanism in place to actually object. It can control the power of the purse the next round, but we're probably going to be facing a constitutional crisis over the Impoundment Control Act, to see if the executive branch can impound money that Congress appropriated. We'll see how this plays out. Aside from taking that to court, all Congress could do was complain.
I would like what comes back to have two things done that will help, but they don’t make foreign aid immune. One is to be more evidence-based, because then attacks on being ineffective are less strong. But the