Monday, October 26, 2009

Peak ignorance

Via the Economic Way of Thinking blog I came across this graph:

[Graph not reproduced: a Peak Oil projection of world oil supply and demand, with demand overtaking supply around 2010.]
Let's think about this for a moment. As of now, oil is bought and sold on the world market, and demand for it is at least somewhat elastic (it's not the only possible source of energy). So how can demand possibly exceed supply, as the graph shows will inevitably happen? As far as I know, this can only be the case if someone can manage to keep oil prices below the market-clearing rate. Is that the prediction here, that by 2010 there will emerge an entity powerful enough to enact price controls in the world oil market? Or is it just that the Peak Oil advocates don't really know what they're talking about?
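To see the point in miniature, here's a toy sketch (all numbers invented) of the only way to get "demand exceeds supply" as a lasting condition: hold the price below the market-clearing level.

    # Toy oil market (all numbers made up for illustration).
    supply = 90.0                              # fixed supply, million barrels/day
    demand = lambda price: 150 - 0.6 * price   # quantity demanded at a given price

    # The market-clearing price solves demand(p) = supply.
    clearing_price = (150 - supply) / 0.6
    print(f"market-clearing price: ${clearing_price:.2f}")   # $100.00

    # Only with the price held below that level does demand exceed supply.
    capped_price = 60.0                        # hypothetical price ceiling
    shortage = demand(capped_price) - supply
    print(f"shortage at a ${capped_price:.0f} cap: {shortage:.1f} mb/d")  # 24.0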

Added: Michael Munger is less kind, calling it Peak Idiocy.

Thursday, October 22, 2009

Look how well I can signal

Jeff Ely writes:
Suppose that what pundits want is to convince the world that they are smart (...) The thing about being really smart is that it means you are talking to people who aren’t as smart as you. So they can’t verify whether what you are saying is really true (...) But one thing the audience knows is that smart pundits can figure out things that lesser pundits cannot. That means that the only way a smart pundit can demonstrate to his not-so-smart audience that he is smart is by saying things different than what his lesser colleagues are saying, i.e. to be a contrarian.
The same is true of academia. Like most human activities, academia isn't about what it says it's about (in this case: seeking truth), but about signaling. Again, as in most human activities, academics are trying to signal high social status which, in their environment, comes with intelligence. Like pundits, academics are trying to convince their audience that they're smart, and they're doing it in the same way that pundits do it: by being contrarian (for which the academic term is "counterintuitive"). Their job is harder, though, because their audience (and their competition) is smarter than the pundits', but the general idea is the same. Ever noticed how most academic papers in the social sciences follow the same rule: state some conventional wisdom with which almost the entire audience agrees, and then try to knock it down with an intricately clever argument? The sounder the conventional wisdom, the better, because it means that your argument has to be that much more counterintuitive.

For example: you develop a game-theoretic model showing that alcohol addiction is not an issue of self-control, but rather a rational choice made by a logically omniscient, forward-looking agent who computes the long-term costs and benefits of all his possible consumption paths, and picks the best one (which may or may not involve drinking till his liver's done). Formal social science is full of this type of modeling, and if you're a skilled modeler, it can get you pretty far. Even Nobel prize far, as was the case with Gary Becker, the author of the "rational addiction" theory. (A profound critique of this type of "modeling purely to show off how clever you are" can be found here.)
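To give a flavor of the genre, here's a deliberately crude toy version of such a model (my own invention for illustration, not Becker's actual rational-addiction model): a forward-looking agent who picks a lifetime drinking path by backward induction.

    BETA = 0.95          # discount factor (assumed)
    T = 40               # planning horizon in periods (assumed)
    STOCKS = range(11)   # addiction stock, 0..10
    CHOICES = (0, 1)     # abstain or drink this period

    def utility(drink, stock):
        # Immediate payoff: drinking is more enjoyable the higher the
        # habit stock, but the stock itself carries a health cost.
        return drink * (1 + 0.3 * stock) - 0.25 * stock

    def next_stock(drink, stock):
        # The habit stock rises with drinking and decays with abstinence.
        return min(10, stock + 2) if drink else max(0, stock - 1)

    def solve():
        value = {s: 0.0 for s in STOCKS}   # value function at the horizon
        policy = {}
        for t in reversed(range(T)):       # backward induction over time
            new_value = {}
            for s in STOCKS:
                best = max(CHOICES,
                           key=lambda d: utility(d, s) + BETA * value[next_stock(d, s)])
                policy[t, s] = best
                new_value[s] = utility(best, s) + BETA * value[next_stock(best, s)]
            value = new_value
        return policy

    policy = solve()
    print("drink at t=0 with no habit?", bool(policy[0, 0]))

The agent is "rational" in exactly the sense such a paper would advertise: the entire consumption path is optimal given the (made-up) payoffs.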

Interestingly, Ely's post contains an idea that could be effective in neutralizing this:
when I was a first-year PhD student at Berkeley, Matthew Rabin taught us game theory. As if to remove all illusion that what we were studying was connected to reality, every game we analyzed in class was given a name according to his system of “stochastic lexicography.” Stochastic lexicography means randomly picking two words out of the dictionary and using them as the name of the game under study. So, for example, instead of studying “job market signaling” we studied something like “rusty succotash.”
By removing the illusion of the model having anything to do with reality, you're removing the possibility of it being counterintuitive, thus lowering its power as a signal of how smart you are in the eyes of those not as smart as you.
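The naming scheme itself, incidentally, is a few lines of code (the word-list path below is an assumption; it's present on most Unix systems):

    import random

    # "Stochastic lexicography": name the game under study after two
    # words drawn at random from a dictionary.
    with open("/usr/share/dict/words") as f:
        words = [w.strip() for w in f if w.strip().isalpha()]

    print(" ".join(random.sample(words, 2)))   # e.g. "rusty succotash"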

Why don't more social scientists do what Rabin did? The reason is simple (if somewhat ugly). By admitting that your models have nothing to do with reality, you're admitting that you're not doing social science, but applied mathematics. The problem with which is obvious: mathematicians are, on average, a lot smarter than social scientists. So if you admitted that what you were doing was in fact math, you'd have a harder time signaling how smart you were--because your new competitors would be that much smarter.

P.S. For the record, I do think that Gary Becker is smart enough to have been a mathematician, had he chosen to be one. But Becker is an outlier, and I'm writing about what's true on average.

P.P.S. Of course, you have to wonder about how honest Rabin was about what he was doing. He might have been countersignaling. He might have been saying: Look, game-theoretic models in social science have nothing to do with reality, and anyone who says they do is just trying to signal how counterintuitive and clever they can be. I, on the other hand, can afford to admit that those models are just mathematical games with no meaning, because I'm actually smart enough to hang with mathematicians.

Not-crowded-enough pricing

Congestion pricing is a great idea. But it's bound to be hard to enact politically: how do you explain to your constituents that a price hike is actually good for them? Well, you could try to re-frame the idea as a price cut for off-peak riding.
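The arithmetic of the reframing is trivial, which is rather the point. A toy sketch with made-up fares:

    # Same relative prices, two framings (all fares invented).
    flat_fare = 2.00
    peak_fare, offpeak_fare = 2.50, 1.50

    print(f"framing A: a peak surcharge of ${peak_fare - flat_fare:.2f}")
    print(f"framing B: an off-peak discount of ${flat_fare - offpeak_fare:.2f}")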

Saturday, October 17, 2009

Department of self-reference department

Previously I blogged about Robin Hanson's idea of not trusting results of direct studies. More precisely, the idea is that whenever you want to learn what the correlation is between some variables Y and X, you shouldn't look at studies in which X is the main variable of interest (let's call these direct studies) as they're likely to be biased, but instead at studies in which X is a control variable (call these indirect studies). I'll call this the Hanson Hypothesis:
Results of direct studies are biased, whereas results of indirect studies are not.
Now think about this: what if we wanted to do an empirical study of the Hanson Hypothesis? That is, we want to find out the effect of variable X (whether a given study is direct or indirect) on Y (the quality of the study's results). Can we do that? We can't, because we'd be treating X as a variable of interest and therefore conducting a direct study, and results of direct studies are biased (by the Hanson Hypothesis).

So far so good. Now let's make it a bit more complicated by stating what I'll call the Precise Hanson Hypothesis:
Most results of direct studies are biased in the direction of the researcher's prior belief about those results.
That is, if a researcher believes that, say, obesity is bad for health and then conducts a study on the effects of obesity on health, his results will show that obesity is bad for health even if in fact it's not. Now think about testing the Precise Hanson Hypothesis. Say you commission some researcher to test it and he gives you his results. Can his results be informative? That depends on his prior beliefs. If before conducting his study he believed that the Precise Hypothesis was false, his results will be of no use to you: if the hypothesis is false, he'll come back with results that say it's false; but if the hypothesis is true, he'll come back with the same results (because the results will be biased in the direction of his prior belief, which is that the hypothesis is false). However, if you ask someone whose prior belief is that the Precise Hanson Hypothesis is true (e.g. Robin Hanson), his results will be informative. If he comes back with a negative result, you'll know the hypothesis is actually false (for, if it were true, someone with a prior belief that it's true could only evaluate it in the positive). If he comes back with a positive result, you'll know the hypothesis is actually true (for, if it were false, anyone would come back with a negative result, regardless of their prior belief).
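If the case analysis is hard to keep straight, here's a toy formalization (my own encoding of the argument, not anything from Hanson):

    # If the Precise Hypothesis is true, a direct study of it just echoes
    # the researcher's prior; if it's false, the study reports the truth
    # (namely, that the hypothesis is false).
    def report(hypothesis_true, prior):
        return prior if hypothesis_true else False

    for prior in (True, False):
        reports = {h: report(h, prior) for h in (True, False)}
        informative = reports[True] != reports[False]
        print(f"prior={prior}: reports={reports}, informative={informative}")

Only the believer's report varies with the actual state of the world, which is what makes it informative.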

Let's make this weirder still. Now think about the Strong Hanson Hypothesis:
All results of direct studies are biased in the direction of the researcher's prior belief about those results.
Can you test it empirically?

It turns out that you don't have to; the Strong Hypothesis is false, and this can be shown without any empirics. For suppose it's true, and suppose also that you ask someone who believes it's true to do an empirical study of it. He does the study, and his results are that the Strong Hypothesis is true. The question is: are the results biased or not? (By biased I mean that the researcher would return results confirming his prior belief regardless of whether that belief is true.) Well, it's a direct study, so by the Strong Hypothesis it must be biased. This means that if the Strong Hypothesis were actually false, the researcher would still come back with results saying it's true. But if the hypothesis is false, it's impossible for anyone to have results saying it's true, so the study we commissioned can't be biased. In other words, it's an unbiased direct study, exactly what the Strong Hypothesis says can't exist. We assumed the hypothesis was true and derived a contradiction, which means it actually is false. That's good news, I think.
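For the skeptical, the reductio can be rendered as a brute-force search over possible worlds (again my own encoding of the post's premises):

    # A "study" is a rule mapping the actual truth of the Strong
    # Hypothesis (SH) to the result it reports: a biased believer's study
    # reports True no matter what; an unbiased one reports the truth.
    def reports(study, world):
        return True if study == "biased" else world

    consistent = []
    for sh in (True, False):                # is SH actually true?
        for study in ("biased", "unbiased"):
            if sh and study == "unbiased":  # SH forbids unbiased direct studies
                continue
            if reports(study, False):       # in a world where SH is false,
                continue                    # nobody can report that it's true
            consistent.append((sh, study))

    print(consistent)   # only (False, 'unbiased') survives, as argued above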

Monday, October 12, 2009

Don't tell them what they're really working on

In a Bloggingheads conversation between Eliezer Yudkowsky and Andrew Gelman, there's a mention of an idea due to Robin Hanson. As far as I understand it, the idea is as follows: if you want to learn what the effect of variable X on variable Y is, do not look at studies that estimate the effect of X on Y. Instead, look at all the studies that estimate the effects of many different variables A, B, C, D etc. on Y that use X as a control variable. The reasoning behind this is that whenever a researcher sets off to estimate the effect of X on Y, he or she may already have a preconceived notion of what that effect is, and that notion is likely to bias the results; whereas, since no one is invested in what the effect estimates of their control variables might show, it is those estimates that are more trustworthy.

This is a great idea--and if correct, it has tremendous practical implications. For example, whenever governments want to learn what the effect of X on Y is, they tend to commission studies that estimate the effect of X on Y. Exactly the wrong thing to do. What they should do instead is to commission a bunch of studies to estimate the effects of a whole lot of other (meaning not X) variables on Y, all of them such that X would be an obvious control variable; then pool all those studies and see if they agree about X.

In other words, researchers shouldn't be told what the real purpose of the study is.
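Mechanically, the pooling step could be as simple as the sketch below (the figures are invented; inverse-variance weighting is one standard way to combine estimates):

    # Each indirect study reports a coefficient on X (included only as a
    # control) and its standard error. All numbers are made up.
    indirect_estimates = [   # (coefficient on X, standard error)
        (0.21, 0.05),
        (0.17, 0.08),
        (0.25, 0.04),
    ]

    weights = [1 / se ** 2 for _, se in indirect_estimates]
    pooled = sum(w * b for (b, _), w in zip(indirect_estimates, weights)) / sum(weights)
    pooled_se = (1 / sum(weights)) ** 0.5
    print(f"pooled estimate of X's effect: {pooled:.3f} (se {pooled_se:.3f})")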

An immediate objection to this is that lots of times governments commission those studies not to learn the truth, but instead to reinforce their own preconceived notions. This might be true of hot, publicly debatable points; I can't imagine though that there are absolutely no situations whatsoever in which a government agency actually wants to know what the truth is. I bet such situations are especially common in agencies that do work that has very high stakes but the details of which are removed from media scrutiny and ideological debate (e.g. intelligence). But then again, maybe the method I described above is in some form already employed by such agencies; I wouldn't know.

Hanson's simple idea has many more interesting implications. Of which I'll write shortly.

First economics Nobel prize

Awarded to a political scientist, Elinor Ostrom. As far as non-economists getting this prize go, political scientists are pretty late to the party: the award has already been won by a mathematician, a law professor, a psychologist, and somewhat of a philosopher. But it's not really the first time that political science gets awarded the economics Nobel prize. The awarded work of economists Kenneth Arrow, Amartya Sen, and Thomas C. Schelling is essentially political science going by a different name.

Added: Freakonomics features a post about non-economist econ Nobel winners. I forgot about Hurwicz (background in law) and Aumann (another mathematician). And then there's von Hayek.

Thursday, October 8, 2009

Who's trading with us?

Below is a graph I made using WTO data:

[Graph not reproduced: world imports and exports volumes over time, from WTO data, showing a sharp slump during the financial crisis and a persistent gap between the two series.]
First thing to note is that the financial crisis has caused an unprecedented slump in world trade. Second thing is that global imports do not equal global exports. How could this possibly be? Doesn't world trade = world imports = world exports, by definition? I see three possibilities:

1) Measurement error. (But then the weird thing is how closely those two volumes track each other. You'd expect measurement error to be at least a little bit noisy. An error this systematic is easy to correct for; see the sketch after this list.)

2) I'm missing something very obvious to someone who actually knows something about this stuff.

3) Unbeknownst to most of us, the Earth is trading with other planet(s) and running a slight deficit.
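As for possibility 1, here's the sort of correction I have in mind (the figures are invented placeholders, not WTO data): estimate a constant exports-to-imports ratio and rescale one series.

    imports = [7.9, 8.6, 9.2, 10.1, 7.4]   # world imports by year, $tn (made up)
    exports = [7.7, 8.4, 9.0, 9.8, 7.2]    # world exports by year, $tn (made up)

    # If the error really is systematic, the ratio is roughly constant
    # and can be used to put the two series on the same footing.
    ratio = sum(e / i for e, i in zip(exports, imports)) / len(imports)
    adjusted_imports = [i * ratio for i in imports]
    print(f"average exports/imports ratio: {ratio:.4f}")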

Why are there no posthumous Nobel prizes?

Nobel prizes can't be awarded posthumously. Suppose they could; how would things be different?

The way things are now, the worst case scenario is that a researcher deserving of the prize dies suddenly before having been recognized. (Of this the starkest example is probably John Stewart Bell, the author of one of the greatest discoveries of theoretical physics, who died of a stroke at 62 before being able to claim the prize that was rightfully his; but plenty of slightly less outrageous omissions can be listed.) In order to minimize the probability of things like that happening, the Committee presumably disproportionately favors very old researchers (the reasoning being: X deserves the prize slightly more than Y does, but X is 43 and Y is 97, so we'd better give the award to Y while we still can). With the possibility of posthumous recognition, that would change. Cases similar to Professor Y would become less "urgent," so more of those scientists would get pushed down the line until they did actually die. And that's a cost, since being recognized while you're alive is certainly better than being recognized when you're dead (though the latter is not worthless; the Nobel prize comes with a considerable amount of money which you can leave to your loved ones). Another consequence would be that relatively younger and more deserving researchers would probably face shorter waiting times between finishing their Nobel-quality work and actually getting the prize.

Overall, I think having posthumous Nobels would be better, but the ultimate cost-benefit analysis depends on lots of details which I don't claim to know. And of course, the possibility of receiving the Nobel prize posthumously should be restricted to those researchers who were alive when the new rule came into effect. It wouldn't really be fair to contemporary scientists to have to all of a sudden face competition from Isaac Newton or David Ricardo.

Monday, October 5, 2009

A truly profound quote

From a somewhat unlikely source, Mike Tyson:
Everyone has a plan until they get punched in the face.
Eric Falkenstein calls it "inadvertently deep," which captures its essence perfectly. It seems to me that most of the famous quotes floating around are the exact opposite of that: trying very hard to appear clever, and failing miserably.