Sunday, February 6, 2011

The modelers' oath: 'First, Do No Harm' should mean 'Beware Type I Error'

I want to come back to the "model makers' Hippocratic Oath," mainly to talk about what I think it's missing. Take a look at this item: "I understand that my work may have enormous effects on society and the economy, many of them beyond my comprehension." What bothers me about this item is that it exposes the whole oath as self-contradictory: a modeler who actually understood that her work may have enormous effects on society and the economy would write a completely different oath. Let me explain.

The oath I quote was written by people who develop models intended to inform actual real-world decisions, but it reads like a code of ethical conduct for academics. And there's a difference. In the world of academia, where there's no central authority deciding which models are to be used and which are to be discarded, and where the criteria for judging models are (at least in theory) purely epistemic, a model can (in expectation) do no harm. Not so in the applied world.

The most important thing an applied modeler needs to worry about is the ratio of the expected costs of a Type I versus a Type II error. (With respect to models, a Type I error would be using a model when it's wrong, and a Type II error would be not using a model when it's right.) This matters because whenever real-life decisions are made based on models, a Type I error is much more likely than a Type II one. It's probably tied to the "do something" cognitive bias we all have: whenever a crisis arises, we feel that changing the status quo is always better than doing nothing, no matter what the change may be. "I know this performance measure may be imperfect, but do you have a better one?" But a Type I error can have disastrous consequences.

Up until the 20th century, someone who became seriously ill was more likely to die if he followed a doctor's advice than if he did not; yet doctors did not change their practices. Why? "I know it may sound crazy to drain your blood with leeches when you have pneumonia, but do you have a better cure?" It would of course be optimal if humankind understood that sometimes the better cure is no cure at all, but since that's not happening anytime soon, it's better if cure-makers adjust their behavior to the constraints as they currently stand.

To sum up, then, the single most important rule for an applied model maker should be this:
I will only reveal my model to those with power to apply it if I think the costs of a Type II error are much, much greater than the costs of a Type I error.
If you do not think that's the case, keep your model off the streets. (This doesn't mean you have to burn it. You can just keep it in academic journals, where it will do no harm.)
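To make the rule concrete, here's a minimal sketch of the expected-cost comparison it implies. Everything in it is hypothetical, invented just to illustrate the arithmetic: p_wrong stands for your (subjective) probability that the model is wrong, cost_type1 for the harm done if a wrong model gets used, cost_type2 for the forgone benefit of shelving a right one, and safety_margin is my own stand-in for the oath's "much, much greater."

```python
# Hypothetical illustration of the decision rule above: release a model
# to decision-makers only when the expected cost of withholding a right
# model (Type II) clearly dominates the expected cost of deploying a
# wrong one (Type I). All names and numbers are made up for illustration.

def expected_cost_of_release(p_wrong: float, cost_type1: float) -> float:
    """Expected harm if the model is released: it gets used even if wrong."""
    return p_wrong * cost_type1

def expected_cost_of_withholding(p_wrong: float, cost_type2: float) -> float:
    """Expected harm if the model is withheld: a right model goes unused."""
    return (1 - p_wrong) * cost_type2

def should_release(p_wrong: float, cost_type1: float, cost_type2: float,
                   safety_margin: float = 10.0) -> bool:
    """Release only if withholding is *much* costlier than releasing,
    per the 'much, much greater' wording of the rule."""
    return (expected_cost_of_withholding(p_wrong, cost_type2)
            > safety_margin * expected_cost_of_release(p_wrong, cost_type1))

# Example: a model with a 30% chance of being wrong, where using a wrong
# model does 100 units of damage, but shelving a right model only
# forgoes 20 units of benefit.
print(should_release(p_wrong=0.3, cost_type1=100.0, cost_type2=20.0))  # False
```

The point of the sketch is only that the comparison weighs each error's cost by how likely it is, and that the "do something" bias is countered by demanding a wide margin before releasing, rather than a mere tie-break.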
