CBO IS ALL-KNOWING

Jim Manzi, on the persistent overestimation of the costs of regulation:

Presumably the same awareness of the track record of asserted prior over-estimation of environmental costs was available to both the EPA and CBO as they prepared their cost estimates. Unless we wish to assert that they are biased or simply irrational, why would we assume they failed to incorporate this information into their (very similar) forecasts of costs by 2020?

One thing that recurs in Manzi’s writing on climate change issues is an extreme devotion to the infallibility of models. There is a belief, seemingly, that the moment something becomes known, it is seamlessly and perfectly modeled. In practice, this almost never happens. You could have twenty examples of regulatory changes in which multiple parties overestimated costs, but the variety in the types of rules at issue and the variability of the cost errors would leave you no rigorous way to incorporate this into a model. Just because Doug Elmendorf can probably say that he’s going to overestimate the cost of Waxman-Markey doesn’t mean that he can say where and by how much, with the level of methodological surety necessary to allow him to include an adjustment of some sort. Manzi seems to convey the idea in his work that such a state of affairs ought to render a piece of information unusable, or irrelevant. But that’s a strange way to approach a problem — any problem.
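To put some rough numbers on that intuition — these are entirely invented, a toy sketch rather than anyone’s actual data — suppose you had those twenty past rules, each scored by the ratio of predicted cost to actual cost. The direction of the bias would be obvious, but the spread across very different kinds of rules would leave any specific correction factor hard to defend:

    # Toy illustration with invented numbers: twenty hypothetical past rules,
    # each scored as (predicted cost) / (actual cost). A ratio above 1 means
    # the cost was overestimated.
    import math
    import statistics

    predicted_over_actual = [
        1.9, 1.2, 3.4, 0.8, 2.6, 1.1, 4.0, 1.5, 0.9, 2.2,
        1.3, 5.1, 1.0, 2.8, 1.7, 0.7, 3.0, 1.4, 2.1, 1.6,
    ]

    mean_ratio = statistics.mean(predicted_over_actual)
    se = statistics.stdev(predicted_over_actual) / math.sqrt(len(predicted_over_actual))

    # Rough 95% interval on the average overestimation factor.
    low, high = mean_ratio - 1.96 * se, mean_ratio + 1.96 * se
    print(f"average overestimation factor: {mean_ratio:.2f}")
    print(f"rough 95% interval: {low:.2f} to {high:.2f}")
    # The direction of the bias is clear, but the interval is far too wide to
    # justify any particular adjustment to a forecast for a new, unlike rule.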

You see this with his discussion of cost-benefit analysis in general. People say to Manzi, well, what if the predictions are off? Manzi replies, but of course, the modelers have thought of this and have built probability distributions to include all these different possibilities, so when you ask “what if they’re off” you’re really asking “what if something happens that’s outside the distribution,” which means you’re just invoking the precautionary principle, which is daft, etc. But while Manzi seems to ascribe infallibility to the modelers, I’m sure the modelers are under no such illusions. They build the best distribution they can given various statistical constraints, but that doesn’t mean that the distribution in question necessarily takes into account all usable information. It doesn’t. It can’t.

And so you have lots of datapoints that got left out for one reason or another — they didn’t have the necessary counterparts to make it into the panel dataset, or something else made them statistically unusable — and you have new datapoints coming in all the time that haven’t yet been incorporated into this model or that model because of data limitations and computing limitations and human limitations. And Manzi says that as far as he is concerned, there is no knowledge to be had from those datapoints.
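A crude sketch of what that means in practice — again with made-up numbers, not any real panel — is that the distribution the modeler fits is built only from the observations that survive the dataset’s requirements, and it simply cannot reflect whatever the excluded or not-yet-incorporated points have to say:

    # Crude, invented illustration: the modeler fits a distribution to the
    # observations that survive the panel's data requirements; the dropped
    # observations carry information the fitted distribution never sees.
    import statistics

    in_panel = [2.1, 2.4, 1.9, 2.2, 2.0, 2.3, 2.1, 1.8]  # usable in the model
    dropped = [3.5, 3.2, 0.6, 3.8]                        # excluded for data reasons

    print(f"fitted: mean={statistics.mean(in_panel):.2f}, "
          f"sd={statistics.stdev(in_panel):.2f}")

    everything = in_panel + dropped
    print(f"with the dropped points: mean={statistics.mean(everything):.2f}, "
          f"sd={statistics.stdev(everything):.2f}")
    # The fitted distribution is the best one available under the dataset's
    # constraints, but it is not a summary of everything that is known.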

It’s an intriguing approach to decision making, but I don’t think it’s really the best one. It’s one thing to say that such-and-such pieces of information must be excluded from the model in order to make the best possible model. It’s another to think that this implies the model’s output alone provides a better understanding of an issue than the model’s output plus consideration of other, unmodelable information.

I mean, I’m all about the IPCC findings, but I sure as hell want my policymakers looking at everything we’ve learned since 2007, you know what I mean?