Suppose that what pundits want is to convince the world that they are smart (...) The thing about being really smart is that it means you are talking to people who aren’t as smart as you. So they can’t verify whether what you are saying is really true (...) But one thing the audience knows is that smart pundits can figure out things that lesser pundits cannot. That means that the only way a smart pundit can demonstrate to his not-so-smart audience that he is smart is by saying things different from what his lesser colleagues are saying, i.e. to be a contrarian.

The same is true of academia. Like most human activities, academia isn't about what it says it's about (in this case: seeking truth), but about signaling. Again, as in most human activities, academics are trying to signal high social status, which, in their environment, comes with intelligence. Like pundits, academics are trying to convince their audience that they're smart, and they're doing it the same way pundits do: by being contrarian (for which the academic term is "counterintuitive"). Their job is harder, though, because their audience (and their competition) is smarter than the pundits', but the general idea is the same. Ever notice how most academic papers in the social sciences follow the same rule: state some conventional wisdom with which almost the entire audience agrees, and then try to knock it down with an intricately clever argument? The sounder the conventional wisdom, the better, because it means your argument has to be that much more counterintuitive.
For example: you develop a game-theoretic model showing that alcohol addiction is not an issue of self-control, but rather a rational choice made by a logically omniscient, forward-looking agent who computes the long-term costs and benefits of all his possible consumption paths, and picks the best one (which may or may not involve drinking till his liver's done). Formal social science is full of this type of modeling, and if you're a skilled modeler, it can get you pretty far. Even Nobel prize far, as was the case with Gary Becker, the author of the "rational addiction" theory. (A profound critique of this type of "modeling purely to show off how clever you are" can be found here.)
Interestingly, Ely's post contains an idea that could be effective in neutralizing this:
when I was a first-year PhD student at Berkeley, Matthew Rabin taught us game theory. As if to remove all illusion that what we were studying was connected to reality, every game we analyzed in class was given a name according to his system of “stochastic lexicography.” Stochastic lexicography means randomly picking two words out of the dictionary and using them as the name of the game under study. So, for example, instead of studying “job market signaling” we studied something like “rusty succotash.”

By removing the illusion of the model having anything to do with reality, you're removing the possibility of it being counterintuitive, thus lowering its power as a signal of how smart you are in the eyes of those not as smart as you.
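The naming scheme Rabin describes is a trivial procedure; here is a minimal sketch in Python (the word list is a small made-up stand-in for illustration, not Rabin's actual dictionary):

```python
import random

# Stand-in word list; in practice you'd draw from a real dictionary file.
WORDS = ["rusty", "succotash", "velvet", "gazebo", "marbled", "turnip"]

def stochastic_lexicography(rng=random):
    """Name a game by picking two distinct dictionary words at random."""
    return " ".join(rng.sample(WORDS, 2))

print(stochastic_lexicography())
```

The point, of course, is not the code but what the arbitrary name strips away: with no real-world label attached, the model can no longer pose as a counterintuitive claim about reality.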
Why don't more social scientists do what Rabin did? The reason is simple (if somewhat ugly). By admitting that your models have nothing to do with reality, you're admitting that you're not doing social science but applied mathematics. The problem with that is obvious: mathematicians are, on average, a lot smarter than social scientists. So if you admitted that what you were doing was in fact math, you'd have a harder time signaling how smart you were, because your new competitors would be that much smarter.
P.S. For the record, I do think Gary Becker is smart enough to have been a mathematician, had he chosen to be one. But Becker is an outlier, and I'm writing about what's true on average.
P.P.S. Of course, you have to wonder about how honest Rabin was about what he was doing. He might have been countersignaling. He might have been saying: Look, game-theoretic models in social science have nothing to do with reality, and anyone who says they do is just trying to signal how counterintuitive and clever they can be. I, on the other hand, can afford to admit that those models are just mathematical games with no meaning, because I'm actually smart enough to hang with mathematicians.
The question is why someone much smarter would signal in the first place. Would it be because they need to affirm that their abilities are higher than the abilities of others? If so, isn't that a little weird: why would smarter people care whether less smart people affirmed what they already knew to be true? No, I think signaling has a lot to do with simple social acceptance and group status--that we, by signaling, at times in an encrypted but patterned way, are better than others, period, with less regard to the topic at hand than to the hierarchy we no doubt believe in, and are, obviously, at the top of. That may be what unifies academics as a whole, and they work feverishly to make sure they keep their place. Either way, it is weird that those in the know have to know that they are in the know by letting those not in the know know.
Sure, but peacocks lack consciousness, and humans--let's say the econ prof--come up with something they devise to match their perception of their appropriately higher intelligence, which is necessarily communicative, and which necessarily involves thinking about a possible interchange with someone, i.e. some sort of audience, which involves thinking that you know the other person's intelligence level as well as assuming that there can be no fluctuation in intelligence that is close in scale when it is manifested in outward behavior. Then, and only then, can you begin to create something you think will be decidedly smarter. In many cases, though not all, you might just have more time and resources to come up with something, or what you've created is in favor with those other beings/people that supersede both you/econ prof and the person you are trying to impress, in which case it is kind of a guessing game, if and only if you could never have access to the possible manifestation of the smartest being when you are only smarter than average.
That's fair. All I'm pointing to is that there is an inherent reflexivity in signals. As recipients and producers of them, we seek out signals as well as signal outwardly, which implies that the signals we receive in some capacity impact how we send out signals. To the extent that we have access to signals smarter than we can indigenously produce, our nature is to copy and utilize them in a potentially underexposed area where they can be attributed to us, not the person or people we got them from.