On the abuse (and proper use) of climate models

Volts

January 27, 2023 · 1h 28m


Show Notes

British researcher Erica Thompson’s recently published book is a thorough critique of the world of mathematical modeling. In this episode, she discusses the limitations of models, the role of human judgment, and how climate modeling could be improved.


Text transcript:

David Roberts

Everyone who's followed climate change for any length of time is familiar with the central role that complex mathematical models play in climate science and politics. Models give us predictions about how much the Earth's atmosphere will warm and how much it will cost to prevent or adapt to that warming.

British researcher Erica Thompson has been thinking about the uses and misuse of mathematical modeling for years, and she has just come out with an absorbing and thought-provoking new book on the subject called Escape from Model Land: How Mathematical Models Can Lead Us Astray and What We Can Do About It.

More than anything, it is an extended plea for epistemological humility — a proper appreciation of the intrinsic limitations of modeling, the deep uncertainties that can never be eliminated, and the ineradicable role of human judgment in interpreting model results and applying them to the real world.

As Volts listeners know, my favorite kind of book takes a set of my vague intuitions and theories and lays them out in a cogent, well-researched argument. One does love having one's priors confirmed! I wrote critiques of climate modeling at Vox and even way back at Grist — it's been a persistent interest of mine — but Thompson's book lays out a full, rich account of what models can and can't help us do, and how we can put them to better use.

I was thrilled to talk with her about some of her critiques of models and how they apply to climate modeling, among many other things. This is a long one! But a good one, I think. Settle in.

Alright, then, with no further ado, Erica Thompson, welcome to Volts. Thank you so much for coming.

Erica Thompson

Hi. Great to be here.

David Roberts

I loved your book, and I'm so glad you wrote it. I just want to start there.

Erica Thompson

That's great. Thank you. Good to hear.

David Roberts

Way, way back in the Mesozoic era, when I was a young writer at a tiny little publication called Grist—this would have been like 2005, I think—one of the first things I wrote that really kind of blew up and became popular was, bizarrely, a long piece about discount rates and their role in climate models. And the whole point of that post was, this is clearly a dispute over values. This is an ethical dispute that is happening under cover of science. And if we're going to have these ethical judgments so influential in our world, we should drag them out into the light and have those disputes in public with some democratic input.

And for whatever reason, people love that post. I still hear about that post to this day. So, all of which is just to say, I have a long-standing interest in this and models and how we use them, and I think there's more public interest in this than you might think. So, that's all preface. I'm not here to do a soliloquy about how much I loved your book. Let's start with just briefly about your background. Were you in another field and kept running across models and then started thinking about how they work? Or were you always intending to study models directly? How did you end up here?

Erica Thompson

Yeah, okay. So, I mean, my background is maths and physics. And after studying that at university, I went to do a PhD, and that was in climate change physics. So climate science about North Atlantic storms. And the first thing I did—as you do—was a literature review about what would happen to North Atlantic storms given climate change, more CO2 in the atmosphere. And so you look at models for that. And so, I started looking at the models, and I looked at them, and this was sort of 10-15 years ago now—and certainly there's more consensus now—but at that time, it was really the case that you could find models doing almost anything with North Atlantic storms.

You could find one saying... the storm tracks would move north, they'd move south, they'd get stronger, they'd get weaker, they'd be more intense storms, less intense storms. And they didn't even agree within their own error bars. And that was what really stuck out to me, was that, actually, because these distributions weren't even overlapping, it wasn't telling me very much at all about North Atlantic storms, but it was telling me a great deal about models and the way that we use models. And so I got really interested in how we make inferences from models. How do we construct ranges and uncertainty ranges from model output? What should we do with it? What does it even mean? And then I've kind of gone from there into looking at models in a series of other contexts. And the book sort of brings together those thoughts into what I hope is a more cohesive argument about the use of models.

David Roberts

Yeah, it's a real rabbit hole. It goes deep. The book is focusing specifically on mathematical models, these sort of complex models that you see today in the financial system and the climate system. But the term "model" itself, let's just start with that because I'm not sure everybody's clear on just what that means. And you have a very sort of capacious definition.

Erica Thompson

I do, yeah.

David Roberts

...of what a model is. So just maybe let's start there.

Erica Thompson

Yeah. So, I mean, I suppose the models that I'm talking about mostly in the book are complex models where we're trying to predict something that's going to happen in the future. So whether that's climate models, weather models—the weather forecast is a good example—economic forecasts, business forecasting, pandemic and public health forecasting are ones that we've all been gruesomely familiar with over the last few years. So those are kind of one end of a spectrum of models, and they are the sort of big, complex, beast-end of the spectrum. But I would also include much simpler ones in my idea of models: kind of an Excel spreadsheet, or even just a few equations written down on a piece of paper where you say, "I'm trying to sort of describe the universe in some way by making this model and writing this down."

But also I would go further than that, and I would say that any representation is a model insofar as it goes. And so that could include a map or a photograph or a piece of fiction—even if we go a bit more speculative—fiction or descriptions. These are models as metaphors. We're making a metaphor in order to understand a situation. And so while the sort of mathematical end of my argument is directed more at the big, complex models, the conceptual side of the argument, I think, applies all the way along.

David Roberts

Right, and you could say—in regard to mathematical models—some of the points you make are you can't gather all the data. You have to make decisions about which data are important, which to prioritize. So the model is necessarily a simplified form of reality. I mean, you could say the same thing about sort of the human senses and human cognitive machinery, right? Like, we're surrounded by data. We're constantly filtering and doing that based on models. So you really could say it's models all the way down.

Erica Thompson

Yes.

David Roberts

Which I'm going to return to later. But I just wanted to lay that foundation.

So in terms of these big mathematical models, I think one good distinction to start with—because you come back to it over and over throughout the book—is this distinction between uncertainty within the model. So a model says this outcome is 60% likely, right? So there's like a certain degree of uncertainty about the claims in the model itself. And then there's uncertainty, sort of extrinsic to the model, about the model itself, whether the model itself is structured so as to do what you want it to do, right? Whether the model is getting at what you want to get at.

And those two kinds of uncertainty map somehow onto the terms "risk" and "uncertainty."

Erica Thompson

Somehow, yes.

David Roberts

I'm not totally sure I followed that. So maybe just talk about those two different kinds of risks and how they get talked about.

Erica Thompson

So I could start with "risk" and "uncertainty" because the easiest way to sort of dispatch that one is to say that people use these terms completely inconsistently. And you can find in economics and physics, "risk" and "uncertainty" are used effectively in completely the opposite meaning.

David Roberts

Oh, great.

Erica Thompson

But generally, one of these two terms refers to uncertainty which is, in principle, quantifiable, and the other to uncertainty which perhaps isn't quantifiable. And so in my terms, in terms of the book, I sort of conceptualize this idea of "model land" as being where we are when we are sort of inside the model, when all of the assumptions work, everything is kind of neat and tidy.

You've made your assumptions and that's where you are. And you just run your model and you get an answer. And so within "model land," there are some kind of uncertainties that we can quantify. We can take different initial conditions and we can run them, or we can sort of squash the model in different directions and run it multiple times and get different answers and different ranges and maybe draw probability distributions.

But actually, nobody makes a model for the sake of understanding "model land." What we want to do is to inform decision making in the real world. And so, what I'm really interested in is how you take your information from a model and use it to make a statement about the real world. And that turns out to be incredibly difficult and actually much more conceptually difficult than maybe you might first assume. So you could start with data and you could say, "Well, if I have lots of previous data, then I can build up a statistical picture of how good this model is," whether it's going to be any good.

And so you might think of the models and the equations that sent astronauts to the moon and back. Those were incredibly good and incredibly successful. And many models are incredibly successful. They underpin the modern world. But these are essentially what I call "interpolatory models." They're basically...they're trying to do something where we have got lots of data and we expect that the data that we have are directly relevant for understanding whether the predictions in the future are going to be any good.

David Roberts

Right.

Erica Thompson

Whereas when you come to something like climate change, for example, or you come to any kind of forecasting of a social system, you know that the underlying conditions are changing, the people are changing, the politics are changing, even with the physics of the climate, the underlying physical laws, we hope, are staying the same. But the relationships that existed and that were calibrated when the Arctic was full of sea ice, for example, what do we have to go on to decide that they're going to be relevant when the Arctic is not full of sea ice anymore? And so we rely much more on expert judgment. And at that point, then you get into a whole rabbit hole of, well, what do we mean by expert judgment?

And maybe we'll come on to some of these themes later in the discussion, but these ideas of trust. So how are we going to assess that uncertainty and make that leap from model land back into the real world? It becomes really interesting and really difficult and also really socially, sort of, dependent on the modeler and the society that the model is in.

David Roberts

Right, it's fraught at every level. And one of the things that I really got from your book is that it's really, really far from straightforward to judge a model's quality. Like, you talk about... what is the term, a horse model? Based on the guy who used to make hand gestures at the horse, and the horse looked like it was doing addition, looked like it was doing math, but it turns out the horse was doing something else entirely. And so it only worked in that particular situation. If you took the horse out of that situation, it would no longer be doing math.

Erica Thompson

And I think what's interesting is that the handler wouldn't even have realized that. That it wasn't a deliberate attempt to deceive, it was the horse sort of picking up subconsciously or subliminally on the movement and the body language of the handler to get the right answer.

David Roberts

Right. Well, this is for listeners, this is kind of a show that this guy used to do. He would give his horse arithmetic problems and the horse would tap its foot and get the arithmetic right and everybody was amazed. And so your point is just you can have a model that looks like it's doing what you want it to do, looks like it's predictive, in the face of a particular data set, but you don't know a priori whether it will perform equally well if you bring in other data sets or emphasize other data sets or find new data. So even past performance is not any kind of guarantee, right?

Erica Thompson

Yeah. And so it's this idea of whether we're getting the right answer for the right reasons or the right answer for the wrong reasons. And then that intersects with all sorts of debates in AI and machine learning about explainability and whether we need to know what it's doing in order to be sure that it's getting the right answer for the right reasons or whether it doesn't actually matter. And performance is the only thing that matters.

David Roberts

So let's talk then about judging what's a good and bad model, because another good point you make, or I think you borrow, is that the only way to judge a model, basically, is relative to a purpose. Whether it is adequate to the purpose we're putting it to, there's no amount of sort of cleanliness of data or like cleverness of rules. Like nothing in the model itself is going to tell you whether the model is good. It's only judging a model relative to what you want to do with it. So say a little bit about the notion of adequacy to purpose.

Erica Thompson

Yeah. So this idea of adequacy for purpose is one that's really stressed by a philosopher called Wendy Parker, who's been working a great deal with climate models. And so, I guess, the thing is that what metric are you going to use to decide whether your model is any good? There is no one metric that will tell you whether this is a good model or a bad model. Because as soon as you introduce a metric, you're saying what it has to be good at.

I can take a photograph of somebody. Is it a good model of them? Well, it's great if you want to know what they look like, but it's not very good if you want to know what their political opinions are or what they had for dinner. And other models are exactly the same way. They are designed to do certain things. And they will represent some elements of a system or a situation well, and they might represent other elements of that situation badly or not at all. And "not at all" doesn't really matter, because it's something you can't even imagine within the model. But if it represents it badly, then it may just be that it's been calibrated to do something else. So the purpose matters.

And when you have a gigantic model, which might be put to all manner of different purposes. So a climate model, for example, could be used by any number of different kinds of decision makers. So the question, "Is it a good model?" Well, it depends whether you are an international negotiator deciding what carbon emissions should be or whether you're a subsistence farmer in Sub-Saharan Africa or whether you're a city mayor who wants to decide whether to invest in a certain sort of infrastructure development or something or whether you're a multinational insurance company with a portfolio of risks. You will use it in completely different ways.

And the question of whether it is any good doesn't really make sense. The question is whether it is adequate for these different purposes of informing completely different kinds of decisions.

David Roberts

Right, or even if you're just thinking about mitigation versus adaptation, it occurs to me, different models might work better for those things. I guess the naive thing to think is, if you find one that's working well for your purpose, that means it is more closely corresponding to reality than another model that doesn't work as well for your purpose. But, really, we don't know that. There's just no way to step outside and get a view of it relative to reality and ever really know that.

Erica Thompson

Yeah, and reality kind of has infinitely many dimensions, so it doesn't really make sense to say that it's closer. I mean, it can absolutely be closer on the dimensions that you decide and you specify. But to say that it is absolutely closer, I think, doesn't actually make sense.

David Roberts

Right, yeah. The theme that's running through the book over and over again is real epistemic humility.

Erica Thompson

Yes, very much so.

David Roberts

Which I think...you could even say the book is epistemically humbling. That's sort of the way I felt about it.

Erica Thompson

Great. That's really nice. I'm glad to hear that.

David Roberts

Yeah, at the end, I was like "I thought I didn't know much and now I'm quite certain I know nothing at all."

Erica Thompson

But not nothing at all. I mean, hopefully, the way it ends is to say that we don't know nothing at all, we shouldn't be throwing away the models. They do contain useful information. We've just got to be really, really careful about how we use it.

David Roberts

Yes, there's a really great quote, actually, that I've almost memorized: "We know nothing for certain, but we don't know nothing," I think is the way you put it in the book, which I really like. We're going to get back to that at the end, too. So another fascinating case study that you mention, an anecdote that I thought was really, really revealing about the necessity of human expert judgment in getting from the model to the real world, is this story about the Challenger shuttle and the O-rings. The shuttle had flown test flights, several test flights beforehand using the same O-rings.

Erica Thompson

Yes.

David Roberts

...and had done fine. So there's sort of two ways you can look at that situation. What one group argued was: "A shuttle with these kind of O-rings will typically fail. And these successful flights we've had are basically just luck." Like, we've had several flights cluster on one side of the distribution, on the tail of the distribution and we can't rely on that luck to continue. And the other side said, "No, the fact that we've run all these successful flights with these O-rings is evidence that the structural integrity is resilient to these failed O-rings to the sort of flaws in the O-rings."

And the point of the story was: both those judgments are using the exact same data and the exact same models. And both judgments are consonant with all the data and all the models. So, the point being, no matter how much data you have—and even if people are looking at the same data and looking at the same models—in the end, there's that step of judgment: what does it mean, and how does it translate to the real world? That step you just can't eliminate. You need, in the end, good judgment.

Erica Thompson

Yeah, exactly. You can always interpret data in different ways depending on how you feel about the model. And so another example I give that is along very similar lines is thinking, sort of, if you were an insurance broker and you'd had somebody come along and sell you a model about flood insurance or about the likelihood of flooding. And they said a particular event would be pretty unlikely. And you use that and you write insurance. And then the following year, some catastrophic event happens and you get wiped out. What do you do next? Do you say, "Oh dear. It was a one-in-a-thousand-year event, what a shame. I'll go straight back into the same business because now the one-in-a-thousand-year event has happened."

David Roberts

Right. It's perfectly commensurate with the model.

Erica Thompson

It's perfectly commensurate with the model, exactly. So do I believe the model and do I continue to act as if the model was correct or do I take this as evidence that the model was not correct and throw it out and not go back to their provider and maybe not write flood insurance anymore?

David Roberts

Right.

Erica Thompson

And those are perfectly...either of those would be reasonable. If you have a strong confidence in the model, then you would take option A and if you have low confidence in the model, you take option B. But those are judgments which are outside of "model land."

David Roberts

Right, right. Judgments about the model itself. And it just may be worth adding that there is no quantity of data, or detail in a model's rules, that can ever eliminate that judgment at the end of the line, basically.

Erica Thompson

Yeah, because you have to get out of "model land." I mean, now some parts of "model land" are closer to reality than others. So if we have a model of rolling a dice, right, you expect that to give you a reliable, quantitative answer. If you have a model of ballistic motion, of taking astronauts to the moon and back, you expect that to be pretty good, because you know that it's good because it's been good in the past. And there is an element of expert judgment, because you're saying that my expert judgment is that the past performance is a good warrant of future success here. But that's a relatively small one, and one that people would generally agree on. And then when you go to these more complex models and you're looking out into extrapolatory situations, predicting the future and predicting things where the underlying conditions are changing, then the expert judgment becomes a bigger and bigger part of that.

David Roberts

Yes. And that gets into the distinction between sort of modelers and experts, which I want to talk about a little bit later, too. But one more sort of basic concept I wanted to get at is this notion of performativity, which is to say that models are not just representing things, they're doing things and they're affecting how we do things and they're not just sort of giving us information there, they're giving us what you call a "conviction narrative." So maybe just talk about performativity and what that means.

Erica Thompson

Yeah, so the idea of performativity is about the way that the models are part of the system themselves. So if you think about a central bank, if they were to create a model which made a forecast of a deep recession, it would probably immediately happen because it would destroy the market confidence. So that's a very strong form of performativity. Thinking about climate models, of course, we make climate models in order to influence and to inform climate policy. And climate policy changes the pathway of future emissions and changes the outcomes that we are going to get. So, again, the climate model is feeding back on the climate itself.

And the same, of course, with pandemic models which were widely criticized for offering worst-case scenarios. But obviously the whole point of predicting a worst-case scenario isn't to just sit around twiddling your thumbs and wait for it to come true, but to do something about it so that it doesn't happen. I suppose, technically, that would be called "counterperformativity" in the sense that you're making the prediction, and by making the prediction, you stop it from coming true.

David Roberts

Exactly. We get back, again, to, like, models can't really model themselves. It's like trying to look at the back of your head in a mirror; ultimately there's an incompleteness to it. But I found this notion of a conviction narrative... I found the point really interesting that in some sense, in a lot of cases, it's probably better to have a model than to not have one, even if your model turns out to be incorrect. Talk about that a little bit: the uses of models outside of their strictly representational, informational role.

Erica Thompson

Yeah, okay. So I guess thinking about this kind of performativity, and maybe counterperformativity, of models helps us to see that they are not just prediction engines. We are not just modeling for the sake of getting an answer and getting the right answer. We are doing something, which is much more social and it's much more to do with understanding and communication and generating possibilities and understanding scenarios and talking to other people about them and creating a story around it. And so that's this idea of a conviction narrative.

And what I've sort of developed in the book is the idea that the model is helping us to flesh out that conviction narrative. So, "conviction" because it helps us to gain confidence in a course of action, a decision in the real world, not in "model land." And then "narrative" because it helps us to tell a story. So we're, sort of, telling a story about a decision and a situation and a set of consequences that flow from that. And in the process of telling that story, thinking about all the different things (whatever you happen to have put into your model and are able to represent and consider within it), you're developing that story of what it looks like, and developing a conviction that some particular course of action is the right one to take, or that you'll be able to live with it, or that it is something that you can communicate politically and generate a consensus about.

David Roberts

Right. And very frequently those things are good in and of themselves, even if they're inaccurate. You talk about some business research which found that businesses with a plan do better than businesses without a plan, even sometimes when it's not a particularly good plan, just because having a plan gives you a structured way of approaching and thinking about something.

Erica Thompson

Yeah. And so maybe this is one of the more controversial bits of the book, but I talk about, for example, astrology and systems where if you're a scientist like me, you will say, "Probably there is no predictive power at all in an astrological forecast of the future." Okay. Opinions may differ. I personally think that, essentially, they are random.

David Roberts

I think you're on safe ground here.

Erica Thompson

I think so. Probably with your audience, I am. But the point is that doesn't make them totally useless. So they can have genuinely zero value as prediction engines, but still be useful in terms of helping people to think systematically about possible outcomes, think about different kinds of futures, think about negative possibilities as well as positive ones, and put all that together just into a more systematic framework for considering options and coming to a course of action.

David Roberts

Right, or think about themselves.

Erica Thompson

And think about themselves and their own weaknesses and vulnerabilities as well as strengths. Yeah, absolutely. It gives you a structure to do that. And I think that is absolutely not to be underestimated. Because there's sort of those two axes. There's the utility of prediction, the accuracy of prediction: "How good is this model as a predictor of the future?" And then, completely orthogonally to that, there is: "How good is this model in terms of the way that it is able to integrate with decision-making procedures? Does it actually help to support good decision making?" And you can imagine all four quadrants of that.

Obviously, we sort of hope that models that are really good at predicting the future will be really good at helping to support decision-making. But, ultimately, if it could perfectly predict the future and it was completely deterministic and it just told you what was going to happen, that wouldn't be much use either. You're back into sort of Greek myths and Greek tragedies, actually being told your future is not that useful. You need to have some degree of uncertainty in order to be able to have agency and take action and have the motivation to do anything at all.

David Roberts

Yeah, so I guess I would say that astrology wouldn't have hung around for centuries, despite having zero predictive power...

Erica Thompson

If somebody didn't find it useful.

David Roberts

Right, if it did not have these other uses. I just thought that was a little bit sort of tacking the other way from a lot of the points you're making in the book about the sort of weaknesses or limitations of models, et cetera, et cetera. But this was a point, I thought, where you sort of make the counterpoint that it's almost always better to have a model than no model, it's better to have some...

Erica Thompson

Well, maybe. It depends what it is, and it depends whose model it is, and it depends what the agenda is of the person who's providing the model. And you can maybe take sort of both lessons from the astrology example, because I think you can find good examples in the past of sort of vexatious astrologers, or astrologers with their own hidden agendas, giving advice which was not at all useful, or which was useful to themselves but not to the person who commissioned the forecast.

David Roberts

Yes. Or like the king deciding whether to invade a neighboring country or something.

Erica Thompson

Right, yeah.

David Roberts

Not great for that. So given all these—and we've just really skated over them, there's a lot more to all of these—but given these sort of limitations of mathematical models, this sort of inevitable uncertainty about whether you're including the right kinds of information, whether you're weighting different kinds of information well, whether past performance is an indicator of future performance: all these limitations and the need for expert judgment lead, to my mind, to what I think is one of your key points and one of the most important takeaways, which is the need for diversity. Diversity, I think, these days has kind of...the word conjures a sort of representational, feel-good thing.

We need to have a lot of different kinds of people in the room so we can feel good about ourselves and everybody can see themselves on the TV or whatever. But you're making a much more practical, epistemic point about the need for diversity of both models and modelers. So start with models. What would it mean to...like, if I'm trying to forecast the future of severe climate events, a naive sort of Western way of thinking about this would be: you need to converge on the right model, the one that is correct, right, the one that represents reality. And your point is: you never reach that. And so in lieu of being able to reach that, what works better is diversity. So say a little bit about that.

Erica Thompson

Yeah, that's exactly it. So, I suppose the paradigm for model development is that you expect to converge on the right answer, exactly. But I suppose what I'm saying is that there can't, for various mathematical reasons, be a systematic way of converging on the right answer, essentially because model space has infinitely many dimensions (I go into that in a bit more detail for the more mathematically inclined). Because we don't have a systematic way of doing that, the statistics don't really work. So if you have a set of models, you can't just assume that they are independent and identically distributed, like throws at a dartboard, and we can't just average them to get a better answer.

So the idea of making more models and waiting for them to converge on this correct answer just doesn't actually make much sense. We don't want to know that by making more similar models, we will get the same answer and the same answer again and the same answer again. Actually, what we want to know is that no plausible model could give a different answer. So you're reframing the same question in the opposite direction: what would it mean to convince ourselves that no plausible model could give a different answer to that question? Well, instead of trying to push everything together into the center (and, by the way, that's what the models that are submitted to the IPCC report, for example, do: they tend to cluster, to try to find consensus, to push themselves sort of towards each other), I'm saying we need to be pushing them away.
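
A minimal numerical sketch of that statistical point (mine, not from the book): if every model in an ensemble shares a common bias, from shared assumptions or shared structure, then the ensemble mean converges on truth-plus-bias rather than on truth, no matter how many similar models you add. The specific numbers (`shared_bias`, `noise_sd`) are illustrative assumptions.

```python
# Sketch: averaging similar models does not converge on the truth.
# Each model = truth + shared bias (common assumptions) + its own noise.
import numpy as np

rng = np.random.default_rng(0)

truth = 3.0          # the real-world quantity we want to estimate
shared_bias = 0.5    # error common to all models in the ensemble
noise_sd = 1.0       # model-specific, independent error

def ensemble_mean_error(n_models: int, n_trials: int = 2000) -> float:
    """Average absolute error of the ensemble mean, over many trials."""
    errors = []
    for _ in range(n_trials):
        models = truth + shared_bias + rng.normal(0.0, noise_sd, n_models)
        errors.append(abs(models.mean() - truth))
    return float(np.mean(errors))

for n in (1, 10, 100, 1000):
    print(n, round(ensemble_mean_error(n), 3))
# The error shrinks at first (the independent noise averages out) but
# flattens near the shared bias: more similar models stop helping.
```

The point is only that the error curve floors at the shared bias instead of going to zero, which is why "independent and identically distributed" matters so much for averaging.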

David Roberts

You talk about this drive for an über model, a sort of CERN of climate modeling, this push among a lot of climate modelers to find the ultimate model, and you are pushing very much in the other direction.

Erica Thompson

Yeah, I mean, that has a lot to commend it as a way to sort of systematize the differences between models rather than the ad hoc situation that we have at the moment. So I don't completely disagree with Tim Palmer and his friends who say that sort of thing. It's not a silly idea, it's a good idea, but I think it doesn't go far enough because it would help us to quantify the uncertainty within "model land," but it doesn't help us to get a handle on the uncertainty outside "model land," the gap between the models and the real world. And so what I'm saying is that if we want to convince ourselves that no other plausible model could give a different answer, then we need to be investigating other plausible models.

Now, the word "plausible" is doing a huge amount of work there, and actually that is the crux of it: saying, well, how can we, as a community, define what we mean by a plausible model? Do we just define it sort of historically? To stick with climate for a minute: we've started with these models of atmospheric fluid dynamics, and then we've included the ocean, and then maybe we've included a carbon cycle and some vegetation and improved the resolution and all that sort of thing. But couldn't we imagine models which start in completely different places and model the same sorts of things?

And if you had got a more diverse set of models that you considered to be plausible and you found that they all said the same thing, then that would be really, very informative. And if you had a set of plausible models and they all said different things, then that would show you perhaps that the models that you had, in some sense, had a bit of groupthink going on, that they were too conservative and they were too clustered. And I do have a feeling that that is what we would find if we genuinely tried to push the bounds of the plausible model structures.

Now, actually, then you run into the question of plausible, and that's a difficult one, because now we're into sort of scientific expertise. Who is qualified to make a model? What do we mean by "plausible"? Which aspects are we prioritizing? And then we introduce value judgments. We say you have to be trained in physics or you have to have gone to an elite institution, you have to have x many years of experience in running climate models. You have to have a supercomputer. And all of these are, sort of, barriers to entry to have a model which can then be considered within the same framework as everybody else's. So this is another...then the social questions about diversity start coming up, but I start with the maths and I work towards the social questions. I think that we can motivate the social concerns about diversity directly in the mathematics.

David Roberts

Right, so you want a range of plausible models that's giving you...so you can get a better sense of the full range of plausible outcomes. But then you get into plausibility, you get into all kinds of judgments and then you're back to the modelers.

Erica Thompson

Exactly.

David Roberts

And you make the point repeatedly that the vast bulk of models used in these situations, in climate and finance, et cetera, are made by WEIRD people. I'm trying to think of the Western... you tell me.

Erica Thompson

Yeah, I'm never quite sure exactly what it stands for. I think it's Western, Educated, Industrialized, Rich, and Democratic, something like that. I suppose it's used to refer to the nation rather than the individual person. But it's the same idea.

David Roberts

Right. The modelers historically have been drawn from a relatively small...

Erica Thompson

From a very small demographic of elite people. Yeah, exactly.

David Roberts

And I feel like if there's anything we've learned in the past few years, it's that it is 100% possible for a large group of people drawn from the same demographic to have all the same blind spots and to have all the same biases and to miss all the same things. So, tell us a little bit about the social piece, then, because it's not like the notion that you should have a degree or some experience with mathematical models to make one and weigh in on them. It's not...

Erica Thompson

It's not unreasonable.

David Roberts

Crazy, right. How would we diversify the pool of modelers?

Erica Thompson

So that's what I mean, it's a really difficult question, because it's what statisticians would call a bias-variance trade-off. You want people with a lot of relevant expertise, but you don't want to end up with only one person or one group of people being given all of the decision-making power. So how far away from what you consider to be perfect expertise do you go? And I suppose maybe the first port of call is to say, well, what are the relevant dimensions of expertise? And you can start with perhaps formal education in whatever the relevant domain is, whether it's public health or whether it's climate science.

But I think, then, you have to include other forms of lived experience, you know, and I don't know what the answer looks like. I say in the book as well: what would it look like if we were to get some completely different group of people to make a climate model or a pandemic model or whatever? It would look completely different. Maybe it wouldn't even be particularly mathematical, or maybe it would be, but it would use some completely different kind of maths. I just don't know, because actually I'm one of these WEIRD, in inverted commas, people myself. I happen to be female, but in pretty much every other respect, I'm as standard a modeler-type as they come. So I just don't know what it would look like. But I think we ought to be exploring it.

David Roberts

As I think through the sort of practicalities of trying to do that, I don't know, I guess I'm a little skeptical since it seems to me that a lot of what decision makers want, particularly in politics, is that sense of certainty. And I'm not sure they care that much if it's faux certainty or false certainty or unjustifiable certainty. It is the sort of optics and image of certainty that they're after. So if you took that out of modeling, if the modelers themselves said, "Here's a suite of possible outcomes, how you interpret this is going to depend on your values and what you care about," that would be, I feel like, sort of, epistemologically more honest, but I'm not sure anyone would want that. The consumers of models, I'm not sure they would really want that.

Erica Thompson

But it's interesting. You say that that's a reason not to do it, I mean, surely that's a reason to do it. If the decision makers are, sort of, somewhat dishonestly saying, "Well actually I just want a number so that I can cover my back and make a decision and not have to be accountable to anyone else. I'm just going to say, 'Oh, I was following the science of course.'"

David Roberts

Right.

Erica Thompson

Well, that sounds like a bad thing. That sounds like a good reason to be diversifying, and that sounds like a good reason not to just give these decision-makers what they say they want.

There are maybe better arguments against it in terms of: is it even possible to integrate that kind of range of possible outputs into a decision-making process? Would we be completely paralyzed by indecision if we had all of these different forms of information coming at us? But I don't think that, in principle, it's impossible. For example, I would say that near-future climate fiction is just as good a model of the future as the climate models and integrated assessment models that we have. I would put it, kind of, not quite on the same level, but pretty close.

David Roberts

Have you read "The Deluge," or have you heard of "The Deluge"?

Erica Thompson

I've not read that one, no. I was thinking of maybe Kim Stanley Robinson's "The Ministry for the Future." But other explorations of the near future are available.

David Roberts

Right. I've read both. I just really have to recommend "The Deluge" to you. I just did a podcast with the author last week, and it's a really detailed year-by-year walk-through of 2020 to 2040. And, obviously, fiction is specific, right? So there are specific predictions, which, scientifically... you'd never let a scientist do that.

Erica Thompson

But you can explore the social consequences and you can think about what it means and how it actually works, how it plays out in a way that you can't in a sort of relatively low-dimensional climate model. You can draw the pictures, you can draw the sort of red and blue diagrams of where is going to be hot and where is going to be a bit cooler. But actually thinking about what that would look like and what the social consequences would be and what the political consequences would be and how it would feel to be a part of that future. That's something that models, the mathematical kind of models can't do at all. That's one of their...that's one of the axes of uncertainty that they just can't represent at all. But climate fiction can do extremely well.

David Roberts

Yeah, I was going to say that book got me thinking about these things in new ways, in a way that no white paper or new model or new IPCC report ever has.

Erica Thompson

Exactly. But if you're thinking of the models as, sort of, helping to form conviction narratives, as ways of thinking about the future, and of thinking collectively about the future, as well as kind of exploring logical consequences, then in that paradigm, the climate fiction is really, genuinely, just as useful as the mathematical model.

David Roberts

Well, we've been talking about models in general and their sort of limitations. So let's talk about climate specifically, because it sort of occurred to me, maybe this isn't entirely true, but in the epidemiological thing and the finance thing, models play a big role, but there's also a lot of direct experiential stuff going on. Whereas climate has come to us, the thinking public, almost entirely on the back of models, right? I mean, that's almost what it is. You know what I mean? You can see a severe weather event, but that doesn't say "climate" to you unless you already have the model of climate in your head.

So it's the most thoroughly modelized field of human concern that there is. And so all the kinds of dysfunctions that you talk about are very much on display in the climate world. Let's just start by pointing out, as you do, the famous models that have been used to represent climate. William Nordhaus's DICE model is one of the earliest and most famous. One of the things it's famous for is his conclusion that four degrees is the perfect balance of mitigation costs and climate costs. That's the economic sweet spot.

And of course, any physical scientist involved in climate who hears that is just going to fall out of their chair. Kevin Anderson, who you cite in your book, I remember almost word for word this quote of his in a paper where he basically says, "Four degrees is incommensurate with organized human civilization." Like, flat out. So that delta: tell us how that happened and what we should learn from it about what's happening in those DICE-style models.

Erica Thompson

Well, I think we should learn not to trust economists with Nobel Prizes. That's one starting point.

David Roberts

I'm cheering.

Erica Thompson

Good.

David Roberts

I'm over here cheering.

Erica Thompson

So, yeah, what can we learn from that? I mean, I think we can learn, maybe, for a starting point, the idea of an optimal outcome is an interesting one. Who says that there is an optimal? How can we even conceptualize trading off a whole load of one set of bad things that might happen with another set of bad things that might happen?

David Roberts

Imagine all the value judgments involved in that!

Erica Thompson

Exactly, exactly, exactly. You're turning everything into a scalar and then optimizing it. I mean, isn't that weird, if anything?
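
A toy example of the scalarization point (mine, not from the episode): once you collapse incommensurable harms into one number, the "optimal" pathway is entirely determined by the weights you chose, which are value judgments. The pathway names and numbers below are invented for illustration.

```python
# Sketch: the same two pathways swap ranking when the value weights change.
outcomes = {
    "rapid mitigation": {"economic_cost": 8.0, "ecological_loss": 1.0},
    "four degrees":     {"economic_cost": 3.0, "ecological_loss": 9.0},
}

def scalar_score(outcome: dict, w_econ: float, w_eco: float) -> float:
    """Collapse two incommensurable harms into a single number."""
    return w_econ * outcome["economic_cost"] + w_eco * outcome["ecological_loss"]

def optimal(w_econ: float, w_eco: float) -> str:
    """Return the pathway with the lowest scalarized cost."""
    return min(outcomes, key=lambda k: scalar_score(outcomes[k], w_econ, w_eco))

print(optimal(w_econ=1.0, w_eco=0.1))  # weights favouring dollars -> "four degrees"
print(optimal(w_econ=0.1, w_eco=1.0))  # weights favouring ecosystems -> "rapid mitigation"
```

The model's arithmetic is fine either way; the conclusion lives entirely in the weights, which is exactly where the hidden value judgment sits.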

David Roberts

Yes. And you would think, like, how should we figure out how we value all the things in our world? Well, let's let William Nordhaus do it.

Erica Thompson

Yes.

David Roberts

It's very odd when you think about it.

Erica Thompson

You can read many other, even better critiques of Nordhaus's work, thinking about these different aspects of how the values of outcomes are determined and how things are costed. And of course, as he's an economist, everything is in dollars, so the least-cost pathway is the optimal one. It may indeed be that the lowest financial cost to global society is to end up at four degrees, but that will end up with something that looks very strange. Maybe there will be a lot more zeros in bank accounts. Great, fine. But is that really what we care about?

David Roberts

Right. How many zeros compensate for the loss of New Orleans or whatever?

Erica Thompson

Exactly. The loss of species across the planet and coral reefs and all the rest of it? I think even the concept that you can put these things on a linear scale and subtract one from the other just doesn't make sense.

David Roberts

And also, one of the amusing featu