Tuesday, December 29, 2009

Non sequitur of the month: Very amusing, completely unimportant

This time it's a line from a TV commercial. It's so funny (in a non sequitur kind of way) that I just couldn't help but quote it. Some company is advertising its product by giving away free samples. One of the customers participating in the commercial says
They're giving it away for free? It must be good!
As Paul Krugman is fond of saying, not only is it not true, it's the exact opposite of the truth.

An idea that changed the world. Can you name one?

At some point in a very interesting PBS NOVA show, a scholar says something we hear quite often: that monotheism was a revolutionary idea that completely changed the world.

Did it really change the world, though? If so, how much?

Technically, we have no way of knowing. I may be wrong, of course, but it seems to me that we often misjudge the impact of ideas and historical events because we have an intuitive notion of causality that's just wrong. If some event A happened at time T, we think of its causal impact on the history of the world as the difference between what the world looked like before and after time T. But that's incorrect. The world after any event is different from what it was before, so by this metric everything would be "revolutionary" and "world-changing."

The true causal impact of A is the difference between the state of our world after time T and the state of a hypothetical world in which A did not happen at T (also after time T). In other words, we tend to forget that causal inference is necessarily about counterfactuals.
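To put the same point in symbols (the notation is mine, not the show's): write $W_t(A)$ for the state of the world at a time $t > T$ given that $A$ happened at $T$, and $W_t(\neg A)$ for its state at $t$ had $A$ not happened.

```latex
\[
\underbrace{W_t(A) - W_t(\neg A)}_{\text{true causal impact of } A}
\quad\text{versus}\quad
\underbrace{W_t(A) - W_T}_{\text{intuitive before/after comparison}}
\]
```

The before/after difference on the right is nonzero for virtually any event, which is why it makes everything look world-changing.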

So the impact of monotheism is the difference between our world after monotheism appeared, and a counterfactual world in which it did not appear. I have no idea how to even begin to estimate this difference, but in my completely subjective opinion it is quite small.

Saturday, December 26, 2009

We like ladders better than trees

Sports fans are usually obsessed with rankings (be they player or team rankings); soccer fans are no different. FIFA (international soccer's governing body) has its own rankings of national teams, but given how flawed those are, many soccer statisticians are trying to come up with their own. The best ones seem to be Voros McCracken's (his soccer blog is also very interesting) and Nate Silver's (the statistician who runs the famous FiveThirtyEight.com website).

The problem with rankings is that they imply a linear order that sometimes just doesn't exist in reality. The presumed usefulness of ranking soccer teams is that if we see that, say, Spain is ranked no. 2 in the world whereas Denmark is ranked no. 12, then Spain is the more likely winner of a Spain vs. Denmark game. The problem is that linear rankings require transitivity (i.e., transitivity is a necessary condition for a relation to be a linear order).

The transitive property is easy to explain. Suppose Spain is a better soccer team than Denmark, and Denmark is better than Venezuela. Then it seems common sense that Spain is also better than Venezuela. If this is indeed the case and if, in fact, this property holds for any three teams we choose, then the relation "better soccer team than" is transitive. If there's no transitivity, we cannot rank soccer teams from best to worst.

And in reality, transitivity isn't there. For example, all major soccer rankings put Mexico ahead of the U.S. But, at least over the past eight years, the U.S. has the better head-to-head record against Mexico. It seems as though Mexico's record is superior to that of the U.S.--except when those two teams play each other. But this means there's no transitivity: the rankings put Denmark ahead of the U.S. and Mexico ahead of Denmark, yet the U.S. is ahead of Mexico. The linearity of the order breaks down.
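To make the failure concrete, here's a minimal sketch in Python; the three pairwise judgments are stipulated for illustration, not computed from real match data.

```python
# Stipulated pairwise judgments, mirroring the example above.
better_than = {
    ("Mexico", "Denmark"),  # rankings put Mexico ahead of Denmark
    ("Denmark", "USA"),     # rankings put Denmark ahead of the USA
    ("USA", "Mexico"),      # head-to-head record puts the USA ahead of Mexico
}

teams = {team for pair in better_than for team in pair}

def is_transitive(relation, elements):
    """True iff (a, b) and (b, c) in the relation always imply (a, c)."""
    return all(
        (a, c) in relation
        for a in elements for b in elements for c in elements
        if (a, b) in relation and (b, c) in relation
    )

print(is_transitive(better_than, teams))  # False
```

The check fails because the three judgments form a cycle, and a cycle is exactly what a linear ranking cannot represent.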

This is not to say that statistical methods of measuring team strength are useless. Far from it. Both Silver's and McCracken's systems, for example, are capable of producing the odds for essentially any given game. That's extremely useful--but very different from providing a consistent linear ordering of all teams in the world. The latter just does not exist in reality.

Incidentally, there are many more examples of us trying to impose a linear order where there is none. It's often the case that we see a total order in situations where the true order is partial--i.e., we're trying to put things on a ladder when the true underlying structure is that of a branching tree. (Note: unlike in the soccer rankings situation, a partial order is still transitive; it's just not linear.) For example, you sometimes hear the question: "If humans evolved from chimps, why are there still chimps?" The confusion behind this question comes from treating the relation "evolved from" as a linear order, whereas in reality it's a partial one (evolution is not a ladder but a tree; chimps are not our "parents" but our "cousins": our "parents" and chimps' "parents" were "siblings"). For another example, the way we teach grammar in middle school rests on an implicit assumption that grammatically correct sentences can be derived from rules that treat sentences as strings of words. This assumption is incorrect: as Noam Chomsky showed, in order to provide algorithmic rules for generating the syntactically correct sentences of any human language, those sentences have to be treated as trees, not strings, of words and phrases.

Friday, December 18, 2009

Most important concepts we don't teach in the courtroom: signaling

Imagine you're a college math teacher. This coming semester, you're supposed to teach Calculus 1. As the only prerequisite for your course, you list a grade of at least A- in a pre-calc course. However, the pre-calc course offered students a choice between taking it for a grade or as "pass-fail." A student named Danny signed up for your class. He passed the pre-calc course, but opted out of taking it for a grade. You don't want to let him into your class, reasoning that no A-student would choose to hide his grade. This, however, makes Danny's dad very angry. Danny's dad wants to speak with you. Even worse, the very President of the college you teach at insists on being present during this conversation.
Danny's dad: Do you hold the fact that Danny opted out of taking pre-calc for a grade against him?

You: This fact tells me something.

Danny's dad: It shouldn't tell you anything.

You: But his opting out of a grade signals something to me.

College President: You have to ignore the fact that Danny didn't want a grade. It doesn't signal anything.

You: But it does.

President: If you're selected as Danny's teacher, can you ignore the fact that he chose not to take a grade?

You: I could try, but subconsciously I know why he did it.

President: Will you try your best not to be prejudiced?

You: I'll try my best, but I can't control the subconscious thought that he chose not to take a grade for a reason.

Danny's dad: You're excused from teaching Calculus 1.
Sounds absurd, right? I mean, you're clearly right--the very fact that someone chose a "pass-fail" option over a grade option says that they're most likely not an A-student. This fact signals something, whether anyone likes it or not. And yet, the above conversation (as well as the flawed decision-making resulting from it) is commonplace in American courtrooms. Don't believe me? See this.

Thursday, December 17, 2009

Two thoughts

Substantively, they are completely unrelated. It's just that I think both are great observations, and wish they were mine.

First, Steven Landsburg writes
The Intelligent Design folk tell you that complexity requires a designer. The Richard Dawkins crowd tell you that complexity must evolve from simplicity. I claim they're both wrong, because the natural numbers, together with the operations of arithmetic, are fantastically complex, but were neither created nor evolved.
(By "fantastically complex" he means the unsolvability of the halting problem: it's logically impossible to write an algorithm that could tell you whether or not any given arithmetic problem is solvable in finite time.)

The second thought is Eric Falkenstein's:
... 150 years ago it would be proper to be a racist and think that the 'upper class' was a thoroughly different beast altogether, whereas today smarmy college freshmen at top universities think every human grouping possible has equal genetic ability and interest in every meaningful human dimension. People never get rid of their prejudices, they just change them.

Wednesday, December 16, 2009

It's always a good idea to look at the data

Before making a sweeping statement, no matter how commonsensical the statement may seem to you.

"Preventive medicine saves money" seems to be something virtually everyone believes. All mainstream politicians and political commentators certainly do. In fact, they all treat this statement as if it were simply self-evident, not as something that would need empirical verification. How can it be possible for preventive medicine to increase costs of healthcare?

Easy. Imagine you run a health insurance company and you cover a large population of people, all of whom are susceptible to a certain genetic disease. In a single patient, the disease is extremely expensive to treat if detected in its late stages, but fairly cheap to deal with when detected early. You can either pay to screen your customers and pay for their early treatment, or not screen them and just pay for the late-stage treatment of whoever ends up getting sick. Holding treatment costs fixed, it is clearly possible that the latter choice costs less money, provided the rate at which people contract the disease is low enough.
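A back-of-the-envelope sketch, with numbers made up purely for illustration:

```python
population = 1_000_000
contraction_rate = 0.0001      # 1 in 10,000 people contracts the disease
screening_cost = 50            # per person screened
early_treatment_cost = 5_000   # per case caught early
late_treatment_cost = 200_000  # per case treated late

# Option 1: screen everyone, treat the (few) cases early.
prevention = (population * screening_cost
              + population * contraction_rate * early_treatment_cost)

# Option 2: screen no one, pay for late-stage treatment of whoever gets sick.
no_prevention = population * contraction_rate * late_treatment_cost

print(f"screen and treat early: ${prevention:,.0f}")     # $50,500,000
print(f"late treatment only:    ${no_prevention:,.0f}")  # $20,000,000
```

With a disease this rare, the screening bill dwarfs the savings from cheaper treatment; raise the contraction rate enough and the comparison flips.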

A 2008 metastudy from The New England Journal of Medicine claims that, on aggregate, preventive medicine does in fact increase the costs of healthcare.

Two things need to be remembered. First, the finding is that preventive care increases aggregate costs; that is a statement about the total, not about every measure, and some preventive measures do actually save money. More importantly, I was only considering the question of whether or not prevention saves money. Saying that something increases costs is different from saying it's a bad deal; even though prevention adds to the healthcare bill, it may still be cost-effective (if it improves the overall quality of life by an amount that offsets the additional costs).

(HT: Healthcare Economist).

Tuesday, December 15, 2009

Most important concepts we don't teach in school: opportunity cost

Thomas Friedman's recent NYT column is another good example of a fallacy I've written about previously: focusing solely on probabilities and ignoring utilities when doing cost-benefit analysis. Essentially, Friedman thinks that the reasoning behind Cheney's "One Percent Doctrine" is sound, and applies it to the climate change debate. He writes
When I see a problem that has even a 1 percent chance of occurring and is "irreversible" and potentially "catastrophic," I buy insurance.
Really? You just "buy insurance?" Without even asking how much said insurance costs?

The root cause of Friedman's remarkably thorough confusion is the following: he thinks that insurance against the effects of global warming costs less than it actually does, because he is unfamiliar with the concept of opportunity cost. Friedman writes
If we prepare for climate change by building a clean-power economy, but climate change turns out to be a hoax, what would be the result? Well, during a transition period, we would have higher energy prices. But gradually we would be driving battery-powered electric cars and powering more and more of our homes and factories with wind, solar, nuclear and second-generation biofuels. ... In short, as a country, we would be stronger, more innovative and more energy independent.
Every single action we take has a cost in forgone opportunities--the value of the best thing we could have done instead. Producing electric cars, wind, solar and nuclear plants, and second-generation biofuels is no exception. So in the case of climate change, is this opportunity cost worth bearing? That depends on two things: the probability that global warming is real (which is extremely high) and the costs of doing nothing about it (which are extremely uncertain). The problem is that Friedman's reasoning simply ignores some of that crucial information.

Monday, December 14, 2009

Most important concepts we don't teach in school: expected utility

Imagine you're talking to an insurance salesperson. She tells you that DNA testing reveals you have a 1% chance of developing a certain form of cancer which, when untreated, is always fatal. She says there is a cure, but it's so expensive no one can afford it without insurance. She hands you an insurance contract to sign; if you do, the costs of treatment, should you need it, will be fully covered, so even if you do get sick you will live. You stare at the dotted line, pen in hand, then ask what your premium would be. "Oh, we don't know that yet," she says. "But we'll get back to you on that real soon." Would you still sign the contract?

Of course not; only a complete idiot would. Whether or not it's worth it to insure yourself against a 1% chance of dying depends on the premium you'd have to pay. And yet we are offered exactly the same absurd sales pitch as described above when we're being sold public policy. I'm referring to what is now known (perhaps misleadingly) as Dick Cheney's "One Percent Doctrine." The name comes from the following line from the former Vice-President:
If there's a 1% chance that Pakistani scientists are helping al-Qaeda build or develop a nuclear weapon, we have to treat it as a certainty in terms of our response.
This quote contains the very same error in reasoning as the insurance deal above; and both could be remedied if more attention were paid to the concept of expected utility.

Suppose you're trying to decide whether or not to carry out some action A. If you don't do A, there's a probability X that you will lose an amount Y. If you do A, you won't lose Y; however, doing A is costly too (say it costs Z). Expected utility says you should do A if and only if Z < X*Y. In other words, you need to compare the cost of action to the cost of inaction times the probability of bad things happening due to inaction. Cheney's doctrine focuses exclusively on the likelihood of a bad outcome of inaction, without trying to balance it against the costs of action.
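A minimal sketch of the rule, with made-up numbers:

```python
def should_act(cost_of_action, prob_bad, loss_if_bad):
    """The rule above: do A if and only if Z < X * Y."""
    return cost_of_action < prob_bad * loss_if_bad

# A 1% chance of losing $1,000,000 is worth insuring against
# at a $5,000 premium, but not at a $50,000 one.
print(should_act(5_000, 0.01, 1_000_000))   # True:  5,000 < 10,000
print(should_act(50_000, 0.01, 1_000_000))  # False: 50,000 > 10,000
```

In these terms, the One Percent Doctrine fixes X at 0.01 and then acts as if Z didn't exist.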

Saturday, December 12, 2009

The piano smells like a bomb

In my previous post I wrote about a problem with many public policies: the fact that we often focus only on their benefits while ignoring their costs. Many times we see government agencies implementing such policies being rewarded for their benefits while not absorbing any of their costs. TSA is a perfect example of such an agency. TSA gets rewarded for coming up with screening procedures that deter terrorists--but is not punished when those procedures are too costly for non-terrorist passengers. TSA doesn't care whether the lowered risk of terrorist acts due to its procedures justifies the huge amount of time wasted at the security gates, or the inconvenience caused to passengers diverted from flights or put on no-fly lists because of TSA's oversensitivity.

Or about the inconvenience caused to a world-famous classical pianist by seizing and destroying his piano at the airport because it smelled funny. This happened to the Polish pianist Krystian Zimerman shortly after September 11 at New York's JFK airport. Apparently Zimerman alters his instruments by hand, and always travels to concerts with his own customized Steinway. On one such trip, TSA confiscated his piano and subsequently destroyed it because, as they said, the glue in it smelled like explosives. This prompted Zimerman to 1) start traveling with his piano dismantled into pieces that he reassembles himself before concerts; 2) embark on a weird rant during one of his shows about how the U.S. military wants to control the entire world; and 3) announce that he would cease to play in the U.S. altogether.

Shame on you, TSA, for the ridiculous "take off your shoes and belts" routine at the security gates, and for destroying Zimerman's piano. And here is how this piano sounds: Zimerman playing Chopin's Ballade No. 4 in F Minor (the most beautiful piece of music I have heard so far).




Wednesday, December 9, 2009

What we should be ashamed of

We like to feel morally superior to our ancestors. When we look back at our history, we feel outraged about the grave sins committed by our great-grandparents (such as Nazism, racism, etc.), and think that we, being more civilized than they were, are no longer capable of similar depravity or confusion. This is dead wrong, because we are doing things today that future generations will look at with horror. We are doing them primarily because most people either don't see anything wrong with them, or else don't even notice that they're being done.

What are those things then, that we should be ashamed of? I don't claim to know I'm right, but here are my candidates.

1) The fact that we are not equally compassionate to all groups of people who are suffering. Our compassion seems to depend on politics, ideology and religion. We are outraged that our great-grandparents did almost nothing to stop the Holocaust; but we are doing almost nothing to help the Palestinians suffering at the hands of the Israeli government, the Chechens suffering at the hands of the Russian government, or the people of North Korea, who are being terribly oppressed by the communist government. (Before you click, be warned: reading the texts linked above is likely to make you sick to your stomach.)

2) Our inability to recognize that every policy has costs as well as benefits, and that they have to be weighed against each other. In many cases (especially those that involve "moral panic"), we ignore the costs altogether, which leads us to implement cures that are much worse than the disease (such as the drug war).

3) Tribalism. The tribal instinct is very powerful, and it can lead to morally unacceptable attitudes. We recognize the immorality of some of those attitudes (e.g. racism) and try to curb them; but there are many outlets in which we let tribal prejudice run rampant. For example, saying things like "Atheists cannot be moral people" is ethically equivalent to racism--but no one who says such things (including politicians) faces any type of social punishment. Similarly, public discourse is full of arguments for tightening immigration laws or implementing trade protection measures on the grounds that doing so would prevent American jobs from being taken by foreigners. All such arguments rest on an implicit moral assumption that foreigners are less human than Americans--and yet no one who makes them seems to be ashamed of this fact. In fact, the postulate that immigrants should be treated as people is completely absent from mainstream American politics; as far as I know, only the libertarian right and the anarchist left support it.

4) The fact that we rely on moral intuitions to settle ethical dilemmas. We seem entirely unaware of the fact that our moral intuitions are often wrong, because they evolved to facilitate efficiency in a small band of hunter-gatherers, not to minimize suffering in a society as complex as ours. For example, according to our moral intuition, we tend to judge actions by the intent of those who act, instead of by the consequences of those actions, ignoring the fact that acting on selfish motives can have good consequences, or that acting on benevolent motives can have terrible consequences indeed.

Sunday, December 6, 2009

Battle of the sexes

If you are not convinced that nature is a cruel joke, take a look at the image above. What you see there is the penis of a beetle.

Any time you think life is rough for you, thank your lucky stars you're not a female beetle.

(HT: Pharyngula).

Friday, December 4, 2009

Does soccer reward cheating?

French striker Thierry Henry's recent dribbling of the ball with his hand instead of his foot has lots of soccer fans asking this question. British economist Tim Harford thinks it does not; here's his reasoning:
Henry has been selfless. The rewards of his cheating go largely to his team-mates, who get to go to the World Cup with their names unblemished, and to fans of French football, once they get over the embarrassment – which they will. Henry himself faced all the risks. He might have been cautioned or sent off, but surely the far greater risk was what happened: only the TV cameras noticed the handball and a great striker’s reputation was tarnished. His subsequent pronouncements of guilt, shame and remorse have hardly put matters right. So, what would an economist have done? The answer is absolutely clear: economists would never cheat in front of the camera.
In other words, Harford thinks that when soccer players cheat it's because of an uncontrolled impulse rather than a deliberate response to incentives. From a single player's perspective, it doesn't pay to cheat, but sometimes the hand is just quicker than the head (or foot).

When it comes to Thierry Henry, Harford is probably right. There's no way the French striker could have thought his blatant handball wouldn't be exposed on camera, so it was probably just a reflex (one that went unnoticed by the referee). But in general I think the claim is wrong; you don't always get caught when you cheat. If your cheating is subtle enough, you'll have some room for plausible deniability even though the cameras register what you did. Some blatantly dishonest plays don't look all that bad in replays.

Ignorance or confusion?

Dan Drezner discusses a recent Pew Research poll on the general theme of foreign policy. According to the poll, 44% of the population views China as the "top global economic power" (only 27% think so about the U.S.). Drezner wonders (and I with him) how anyone could think that; there is absolutely no reasonable measure of economic power by which China comes out on top. China's GDP is about half of U.S. GDP; its output per capita is roughly one-eighth of the U.S. level.

It could be ignorance. Some respondents who see China as the world's biggest economic power may simply have no idea what the world's economies look like right now. Or it could be confusion: perhaps other respondents aren't careful enough to distinguish between a level and a rate of growth. After all, for about a decade now, we have kept hearing about China's amazingly fast GDP growth. That growth rate is in fact much higher than in the U.S. (for example, over the last three years, the average annual output growth rate in the U.S. was under three percent, whereas in China it was over eleven). But a growth rate and a level are two completely different things.

Thursday, December 3, 2009

No numbers, no formulas

A few months back, Mark Chu-Carroll complained (and rightfully so) about some idiot on NPR who said that, since sudoku puzzles don't have to use numbers but can use any symbols, they're not mathematical puzzles at all.

It's a common misconception, I think: if something doesn't have lots of numbers or formulas in it, it's not math. In reality, math isn't about either of those things. In fact, we could in principle rewrite the entire body of mathematical knowledge so that it wouldn't contain a single formula. It would be extremely difficult and utterly pointless, of course, but nonetheless possible; we use formulas only because they're convenient, not because they're necessary. Scroll through Gottlob Frege's Foundations of Arithmetic; it's mostly plain text, without a whole lot of formulas. And yet it's definitely mathematics, and important mathematics at that; it contains the first logically correct definition of the concept of number.

Here's another example of this misconception at work. Back when I was in college, a friend of mine who was a psychology undergrad asked me to participate in an experiment he was conducting for his thesis (why he called something that had no control group an "experiment" I'll never know, but let's leave that for now). He gave me a sheet of paper with the integers 1 through 10 randomly scattered about the page, interspersed with (also randomly scattered) letters A through J. He asked me to connect the integers in ascending order as quickly as I could, and timed me. Then he gave me another sheet just like the first one and asked me to connect the letters in alphabetical order; this whole thing was then repeated a few times. When it was finished and I asked him what he was trying to get at, he said he was interested in the relative speeds with which the brain hemispheres process information, and that this task would let him measure them, because connecting numbers was managed by the right hemisphere whereas connecting letters was managed by the left one. I asked him how he knew that. He said: "Because the right hemisphere is responsible for mathematical reasoning, so it must control connecting numbers, and the left hemisphere is responsible for verbal skills, so it must control connecting letters." In other words, to him, the first task was mathematical (because it was about numbers) and the second one was verbal (because it was about letters), even though, quite clearly, they were conceptually the same damn task!

So now that we know that math is not about numbers or formulas, can we say what it is about? It's about a certain type of reasoning, I guess. What type of reasoning? Who knows, really. In this respect math is like porn: very hard to define, but very easy to recognize when you see an example.

Tuesday, December 1, 2009

A great employee with a bad hobby

If you are, like me, a fan of Arsenal F.C., you probably hate national team soccer. Things of the sort that just happened to Arsenal's brilliant Dutch striker Robin van Persie happen to soccer clubs all over the world, and they make managers furious. In mid-November, van Persie was called up for national duty to play for the Netherlands in a friendly (i.e., sparring) game against Italy. In that game, he suffered an injury that rendered him unable to play for the rest of the 2009/2010 season. This in effect means that Arsenal can no longer mount a serious campaign to win the English Premier League.

Think about this from a club manager's perspective. You buy players for your team, pay their wages, and invest money in the coaching necessary to make sure their skills improve. In exchange, those players do their best to help your team win trophies. Of course, every once in a while during your trophy-winning campaign, some of your players will get injured. Soccer is very physical, sometimes downright brutal, so this is unavoidable. If it happens to a player important enough that his absence significantly diminishes team value, you'll probably whine a lot and curse your bad luck. You will not think it's unfair, however; after all, every player on every team faces a positive risk of injury in practically every game he plays.

But then there are national teams, whose managers can call up your players and use their services in their own competitions, thus exposing them to an additional risk of injury. What's worse, when they call them up you cannot refuse: clubs that won't release their players for national duty face severe sanctions from soccer governing authorities such as FIFA or UEFA, ranging from fines to suspending a player or even revoking his license altogether.

To me, this situation is blatantly unfair to the clubs. What's also interesting is that most soccer fans I talk to do not see this fundamental unfairness, and tend to just dismiss club managers' complaints without even trying to provide an argument. In fact, there is only one rational argument in favor of the status quo that I've ever heard; it comes from the sports statistician Voros McCracken, and it goes something like this. The very best soccer players are also the most likely to want to play for their national teams (for reasons of prestige or whatever); therefore if you, as a club manager, wanted to be able to forbid them from doing so, you'd find that those very best players wouldn't want to play for your club, and you'd have to settle for choosing your employees from a weaker pool. You can think of the best soccer players as great potential employees who have a dangerous hobby (namely, playing in national team competitions); since players without that hobby tend to be weaker, you can't really complain about the unfairness of one of your best employees getting hurt while exercising his dangerous hobby.

Rational as it may be, this argument is still wrong; here's why. The relevant question is not whether better soccer players are more likely to want to play for their national teams than weaker players are. The relevant question is: how important is the best players' desire to play for their national team compared to their commitment to the club that currently employs them, in a situation where those two goals conflict? The problem is that, under the current setup, we can't know the answer, because whenever a player gets called up for national team duty, it's not just his club that can't refuse; he himself can't, either. The price of such a refusal is a long suspension or perhaps even the end of a career, and that's just prohibitive. In a perfect world, clubs would be free to draw up contracts that either let players partake in their national teams or forbid them from doing so, and players would be free to choose which type of contract they wanted to sign. If Voros were right, we'd see the forbidding contracts paying higher wages than the non-forbidding ones (holding player quality constant); if he were wrong, there'd be no such difference; and if he were right to an extreme, the monetary price clubs would have to pay to make a player give up his hobby would just be too high, so there'd be no forbidding contracts at all. The thing is, as of now, neither the clubs nor the players have such freedom, so Voros' claim is perfectly unfalsifiable.

Interestingly, Voros' "dangerous hobby" analogy helps defeat his own argument. Great soccer players are also, on average, more likely to have personalities that make them seek thrills in dangerous activities such as motorcycle racing, mountain climbing, sky diving, bar fighting, etc. Some clubs do draw up contracts that contain clauses explicitly forbidding players to, say, ride motorcycles or climb mountains, on pain of being fined or even fired, and some players do choose to sign such contracts. Until players are allowed to exercise the same amount of choice over national team duty versus club duty, we can't claim to know which is more important to them.

Thursday, November 12, 2009

Non sequitur of the month: "either-or" means "fifty-fifty"

Very frequently I experience an urge to write a post with no other purpose than to mock some piece of extraordinary stupidity that I've encountered in the media. I try to fight that urge, as writing posts that do nothing but attack an easy target is kind of cheap. However, sometimes the temptation is just too strong. For one thing, shooting fish in a barrel can occasionally be fun. For another, psychologists tell me that it's unhealthy to suppress anger, and non sequiturs make me extremely angry. I take them very personally.

Therefore, in order to channel that anger, I decided to start a periodical feature called "Non Sequitur of the Month." In it, I'll try to present truly spectacular errors in reasoning. Fireworks of stupidity, if you will. And since pretty much any fallacy can in principle be stated as a non sequitur, I don't anticipate having trouble keeping things going.

First up is one Anna Cieślak, a journalist working for the second-largest Polish daily newspaper, Rzeczpospolita (the title means "The Republic"). She has a blog, and on that blog there's a post on unemployment. The first two sentences of that post, in translation, read
From the micro point of view, i.e. from the point of view of an individual, it makes no practical difference whether the unemployment rate will climb to 12.5 or to 13.5 percent. You'll either get fired or you won't, and therefore your risk of becoming unemployed is 50 percent anyway.
Yes, you read that right. Apparently, the author believes that whenever the outcome variable is binary, each alternative must occur with probability one-half. I should immediately start playing the lottery; since I can either win the jackpot or not, I must have a fifty percent chance of winning.

This is truly a remarkable, monumental piece of stupidity. I believe the first entry in the Non Sequitur of the Month category may well turn into a non sequitur of the year.

Wednesday, November 4, 2009

You can be a Marxist and not even know it

Some (perhaps most) U.S. farmers hate welfare. But almost all of them receive it (theirs just goes by a different name: "farm subsidies"). So, are farmers who hate welfare being hypocritical?

The answer is no, they're not, and that's because they're Marxists. Of course, none of them would call themselves that, and I'm sure most of them hate Marxism at least as much as they hate welfare. But they are Marxists nonetheless, in that they subscribe to the Marxist labor theory of value. Most farm subsidies require extremely hard work to qualify for, and this is probably why farmers don't perceive subsidies as welfare.

Of course, this isn't really about farmers. The old Marxist error of confusing input with output is very widespread, and probably derives from deep-seated intuitions: how can it be possible for hard work to not have value? I've met tons of people, many of them raging conservatives, who were Marxists in this particular respect.

Monday, November 2, 2009

Don't play the lottery. And don't go to Harvard.

Every once in a while, when I'm trying to persuade someone that from a monetary perspective it's irrational to buy lottery tickets, I hear the following response: but someone wins it. In this particular context, that response is a trivial mistake of not differentiating between conditional and unconditional probabilities: sure, the probability of someone winning is close to one (not quite one, but almost there), but what you should be concerned with is the conditional probability of that someone being you, which is, well, pretty close to zero.
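Here's a minimal sketch of the distinction; the lottery format and the number of tickets sold are assumptions made purely for illustration.

```python
from math import comb

p_you = 1 / comb(49, 6)   # your single ticket in a pick-6-of-49 lottery
n_tickets = 50_000_000    # independent tickets sold (assumed)

p_someone = 1 - (1 - p_you) ** n_tickets  # P(at least one ticket wins)
p_you_given_someone = p_you / p_someone   # P(you win | someone wins),
                                          # since "you win" implies "someone wins"

print(f"P(someone wins):           {p_someone:.3f}")            # about 0.97
print(f"P(you win | someone wins): {p_you_given_someone:.1e}")  # about 7e-08
```

"Someone wins it" is true almost every time; "that someone is you" remains a seven-in-a-hundred-million proposition.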

Is the same mistake responsible for the popularity of the "American dream" (as in: surely, a story of "from rags to riches" will happen every once in a while, but what makes you think it'll happen to you)? And if so--is this mistake evolutionarily deliberate, in that if we knew the true odds, we'd all just stop trying?

I think the answers are yes and no, respectively. Yes, it is the same mistake; but the reasons we're making it are not uniform. We are able to recognize some conditional probability situations (e.g. poker hands or SAT problems) as such, while others (like the lottery or the American dream) we are not. The different context fools us into thinking those things are different in essence. Whereas if the mistake were evolutionarily selected for, we just wouldn't be able to deal with conditional probabilities at all, ever.

Sunday, November 1, 2009

Tips and dirty looks

This happened to me many times: I buy a $1.50 cup of coffee in a coffee shop somewhere, put a quarter in a tip jar, and promptly get a dirty look that says, "Anything less than a buck, don't even bother."

Of course, tipping a dollar on a $1.50 purchase is ridiculous; a quarter is a fair tip, if you decide to leave one. I basically see two choices: either don't tip at all, receiving the same number of dirty looks but saving some money; or tip a dollar once every four purchases and nothing the other three times, thereby spending the same amount as if you tipped fairly every time while cutting the number of dirty looks by one-fourth.

It's more or less about more or less health insurance

It takes an incredible amount of work to develop an intelligent opinion on a really complicated subject, which is why I don't have much of one on healthcare reform. But, I have links.

It's fairly clear what the problems are: 1) there are a lot of people who receive too little healthcare, mostly because they are un- or underinsured, and 2) aggregate healthcare costs are out of control; as of now, annual costs consume almost 18% of GDP, far more than in any other country in the world, and they are rising faster than inflation, so things are getting worse.

Naturally, if some people underconsume but costs are rising, it must mean that some other people overconsume. This much almost everyone agrees with. The problem is determining exactly who it is that overconsumes the most. There are two main candidates: sick people and rich people.

The first possibility isn't really that sick people overconsume healthcare but rather that there are too many sick people relative to healthy people in the insurance pool. Every insurance system is a scheme in which people who don't need it subsidize those who do, but in the case of health insurance, too many of those who don't need it opt out altogether, so a shortage of subsidies drives up the costs (this is called "adverse selection"). The second possibility is that people who can afford health insurance buy it and then, since they don't have to worry about the cost of most services, use those services without asking how much of them they really need, again driving up the costs (economists call this "moral hazard").
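A toy version of the adverse-selection story (made-up numbers): each round, everyone whose expected cost is below the premium drops out, and the premium resets to the average cost of whoever remains.

```python
# Expected annual medical cost per person in a five-person pool.
pool = [100, 300, 500, 2_000, 10_000]

while pool:
    premium = sum(pool) / len(pool)  # break-even premium for the insurer
    print(f"premium = {premium:,.0f}, pool size = {len(pool)}")
    stayers = [cost for cost in pool if cost >= premium]
    if stayers == pool:  # no one else wants to leave
        break
    pool = stayers
```

The pool unravels from a $2,580 premium for five people to a $10,000 premium covering only the sickest one, which is why believers in explanation #1 want everyone kept in the pool by law.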

Which of these alternatives is true is an empirical question, and not an easy one to answer. But note that a lot depends on the answer in terms of what's to be done about the situation. If you believe that what's going on is adverse selection, you'll want more health insurance. Specifically, you'll want health insurance to be compulsory, so that the risk would be pooled more efficiently. But if you believe in the second alternative, you'll want less health insurance, and will want to institute more health savings accounts instead.

Here's an article making the case for possibility #1, and here's one for #2.

Monday, October 26, 2009

Peak ignorance

Via the Economic Way of Thinking blog I came across this graph:


Let's think about this for a moment. As of now, oil is bought and sold on a world market, and demand for it is at least somewhat elastic (it's not the only possible source of energy). So how can demand possibly exceed supply, as the graph says will inevitably happen? As far as I know, this can happen only if someone manages to keep oil prices below the market-clearing rate. Is that the prediction here: that by 2010 there will emerge an entity powerful enough to enact price controls in the world oil market? Or is it just that the Peak Oil advocates don't really know what they're talking about?
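For what it's worth, here's the textbook mechanics in toy form (linear curves, made-up numbers): a persistent gap between quantity demanded and quantity supplied requires a price held below the market-clearing level.

```python
def demand(price):  # barrels demanded per day (made-up linear curve)
    return 100 - 2 * price

def supply(price):  # barrels supplied per day (made-up linear curve)
    return 10 + price

p_market = 30  # solves 100 - 2p = 10 + p
print(demand(p_market), supply(p_market))  # 40 40: no gap at the clearing price

p_capped = 20  # a price ceiling below the clearing level
print(demand(p_capped), supply(p_capped))  # 60 30: now "demand exceeds supply"
```

Without someone enforcing that cap, the price simply rises until the gap closes.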

Added: Michael Munger is less kind, calling it Peak Idiocy.

Thursday, October 22, 2009

Look how well I can signal

Jeff Ely writes:
Suppose that what pundits want is to convince the world that they are smart (...) The thing about being really smart is that it means you are talking to people who aren’t as smart as you. So they can’t verify whether what you are saying is really true (...) But one thing the audience knows is that smart pundits can figure out things that lesser pundits cannot. That means that the only way a smart pundit can demonstrate to his not-so-smart audience that he is smart is by saying things different than what his lesser colleagues are saying, i.e. to be a contrarian.
The same is true of academia. Like most human activities, academia isn't about what it says it's about (in this case: seeking truth), but about signaling. Again, as in most human activities, academics are trying to signal high social status which, in their environment, comes with intelligence. Like pundits, academics are trying to convince their audience that they're smart, and they're doing it in the same way that pundits do it: by being contrarian (for which the academic term is "counterintuitive"). Their job is harder though, because their audience (and their competition) is smarter than the pundits', but the general idea is the same. Ever noticed how most academic papers in social sciences follow the same rule: state some conventional wisdom with which almost the entire audience agrees, and then try to knock it down by an intricately clever argument? The sounder the conventional wisdom, the better, because it means that your argument has to be that much more counterintuitive.

For example: you develop a game-theoretic model showing that alcohol addiction is not an issue of self-control, but rather a rational choice made by a logically omniscient, forward-looking agent who computes the long-term costs and benefits of all his possible consumption paths, and picks the best one (which may or may not involve drinking till his liver's done). Formal social science is full of this type of modeling, and if you're a skilled modeler, it can get you pretty far. Even Nobel prize far, as was the case with Gary Becker, the author of the "rational addiction" theory. (A profound critique of this type of "modeling purely to show off how clever you are" can be found here.)

Interestingly, Ely's post contains an idea that could be effective in neutralizing this:
when I was a first-year PhD student at Berkeley, Matthew Rabin taught us game theory. As if to remove all illusion that what we were studying was connected to reality, every game we analyzed in class was given a name according to his system of “stochastic lexicography.” Stochastic lexicography means randomly picking two words out of the dictionary and using them as the name of the game under study. So, for example, instead of studying “job market signaling” we studied something like “rusty succotash.”
By removing the illusion of the model having anything to do with reality, you're removing the possibility of it being counterintuitive, thus lowering its power as a signal of how smart you are in the eyes of those not as smart as you.

Why don't more social scientists do what Rabin did? The reason is simple (if somewhat ugly). By admitting that your models have nothing to do with reality, you're admitting that you're not doing social science, but applied mathematics. The problem with which is obvious: mathematicians are, on average, a lot smarter than social scientists. So if you admitted that what you were doing was in fact math, you'd have a harder time signaling how smart you were--because your new competitors would be that much smarter.

P.S. For the record, I do think that Gary Becker is indeed smart enough to be a mathematician, had he chosen to be one. But Becker is an outlier, and I'm writing about what's true on average.

P.P.S. Of course, you have to wonder about how honest Rabin was about what he was doing. He might have been countersignaling. He might have been saying: Look, game-theoretic models in social science have nothing to do with reality, and anyone who says they do is just trying to signal how counterintuitive and clever they can be. I, on the other hand, can afford to admit that those models are just mathematical games with no meaning, because I'm actually smart enough to hang with mathematicians.

Not-crowded-enough pricing

Congestion pricing is a great idea. But it's bound to be hard to enact politically: how do you explain to your constituents that a price hike is actually good for them? Well, you could try to re-frame the idea as a price cut for off-peak riding.

Saturday, October 17, 2009

Department of self-reference department

Previously I blogged about Robin Hanson's idea of not trusting the results of direct studies. More precisely, the idea is that whenever you want to learn the correlation between some variables Y and X, you shouldn't look at studies in which X is the main variable of interest (let's call these direct studies), as their results are likely to be biased, but instead at studies in which X is a control variable (call these indirect studies). I'll call this the Hanson Hypothesis:
Results of direct studies are biased, whereas results of indirect studies are not.
Now think about this: what if we wanted to do an empirical study of the Hanson Hypothesis? That is, we want to find out the effect of variable X (whether a given study is direct or indirect) on Y (the quality of the study's results). Can we do that? We can't, because we'd be treating X as the variable of interest and therefore conducting a direct study, and results of direct studies are biased (by the Hanson Hypothesis).

So far so good. Now let's make it a bit more complicated by stating what I'll call the Precise Hanson Hypothesis:
Most results of direct studies are biased in the direction of researcher's prior belief about those results.
That is, if a researcher believes that, say, obesity is bad for health and then conducts a study on the effects of obesity on health, his results will show that obesity is bad for health even if in fact it's not. Now think about testing the Precise Hanson Hypothesis. Let's say you commission testing it to some researcher and he gives you his results. Can his results be informative? Well, that depends on his prior beliefs. If before conducting his study he believed that the Precise Hypothesis was false, his results will be of no use to you. If the hypothesis is false, he'll come back with results that say it's false. But, if the hypothesis is true, he'll come back with the same results (because the results will be biased in the direction of his prior belief, which is that the hypothesis is false). However, if you ask someone whose prior belief is that the Precise Hanson Hypothesis is true (e.g. Robin Hanson), his results will be informative. If he comes back with a negative result, you'll know the hypothesis is actually false (for, if it were true, someone with a prior belief that it's true could only evaluate it in the positive). If he comes back with a positive result, you'll know the hypothesis is actually true (for, if it were false, anyone would come back with a negative result, regardless of their prior belief).

Let's make this weirder still. Now think about the Strong Hanson Hypothesis:
All results of direct studies are biased in the direction of researcher's prior belief about those results.
Can you test it empirically?

It turns out that you don't have to; the Strong Hypothesis is false, and that can be shown without any empirics. For suppose it's true, and suppose also that you ask someone who believes it's true to do an empirical study of it. He does the study, and his results say that the Strong Hypothesis is true. The question is: are the results biased or not? (By biased I mean that the researcher would return results confirming his prior belief regardless of whether that belief were true.) Well, it's a direct study, so by the Strong Hypothesis it must be biased. This means that if the Strong Hypothesis were actually false, the researcher would still come back with results saying it's true. But if the hypothesis is false, it's impossible for anyone to have results saying it's true, so the study we commissioned can't be biased. In other words, it's an unbiased direct study, exactly what the Strong Hypothesis says can't exist. We assumed the Strong Hypothesis was true and derived a contradiction, so it is in fact false. That's good news, I think.

Monday, October 12, 2009

Don't tell them what they're really working on

In a Bloggingheads conversation between Eliezer Yudkowsky and Andrew Gelman, there's a mention of an idea due to Robin Hanson. As far as I understand it, the idea is as follows: if you want to learn what the effect of variable X on variable Y is, do not look at studies that estimate the effect of X on Y. Instead, look at all the studies that estimate the effects of many different variables A, B, C, D, etc. on Y while using X as a control variable. The reasoning is that whenever researchers set off to estimate the effect of X on Y, they may already have a preconceived notion of what that effect is, and that notion is likely to bias the results; whereas, since no one is invested in what the estimates for their control variables might show, it is those estimates that are more trustworthy.

This is a great idea--and if correct, it has tremendous practical implications. For example, whenever governments want to learn what the effect of X on Y is, they tend to commission studies that estimate the effect of X on Y. Exactly the wrong thing to do. What they should do instead is commission a bunch of studies estimating the effects of a whole lot of other (meaning non-X) variables on Y, all of them such that X would be an obvious control variable; then pool all those studies and see whether they agree about X.
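If the pooled studies report standard errors, "see if they agree about X" could be as simple as an inverse-variance-weighted average of the control-variable coefficients, as in a basic fixed-effect meta-analysis. A minimal sketch, with fabricated inputs:

```python
# (estimate, standard error) for the coefficient on X, reported as a control
# variable in four hypothetical indirect studies of A, B, C, and D.
studies = [(0.21, 0.05), (0.18, 0.08), (0.25, 0.06), (0.19, 0.04)]

weights = [1 / se**2 for _, se in studies]  # precision weights
pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5

print(f"pooled effect of X on Y: {pooled:.3f} +/- {pooled_se:.3f}")
```

If the individual estimates scatter tightly around the pooled value, the indirect studies "agree about X"; wild disagreement would itself be informative.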

In other words, researchers shouldn't be told what the real purpose of the study is.

An immediate objection to this is that lots of times governments commission those studies not to learn the truth, but to reinforce their own preconceived notions. This might be true of hot, publicly debated issues; I can't imagine, though, that there are no situations whatsoever in which a government agency actually wants to know the truth. I bet such situations are especially common in agencies whose work has very high stakes but whose details are removed from media scrutiny and ideological debate (e.g. intelligence). But then again, maybe the method I described above is already employed in some form by such agencies; I wouldn't know.

Hanson's simple idea has many more interesting implications. Of which I'll write shortly.

First economics Nobel prize

Awarded to a political scientist, Elinor Ostrom. As far as non-economists winning this prize go, political scientists are pretty late to the party: the award has already gone to a mathematician, a law professor, a psychologist, and somewhat of a philosopher. But this isn't really the first time political science has won the economics Nobel: the awarded work of economists Kenneth Arrow, Amartya Sen, and Thomas C. Schelling is essentially political science going by a different name.

Added: Freakonomics features a post about non-economist econ Nobel winners. I forgot about Hurwicz (background in law) and Aumann (another mathematician). And then there's von Hayek.

Thursday, October 8, 2009

Who's trading with us?

Below is a graph I made using WTO data:


The first thing to note is that the financial crisis has caused an unprecedented slump in world trade. The second is that global imports do not equal global exports. How could this possibly be? Doesn't world trade = world imports = world exports, by definition? I see three possibilities:

1) Measurement error. (But then the weird thing is how closely the two volumes track each other. You'd expect measurement error to be at least a little bit noisy; an error this systematic would be easy to correct for.)

2) I'm missing something very obvious to someone who actually knows something about this stuff.

3) Unbeknownst to most of us, the Earth is trading with other planet(s) and running a slight deficit.

Why are there no posthumous Nobel prizes?

Nobel prizes can't be awarded posthumously. Suppose they could; how would things be different?

The way things are now, the worst-case scenario is that a researcher deserving of the prize dies suddenly before having been recognized. (The starkest example of this is probably John Stewart Bell, the author of one of the greatest discoveries of theoretical physics, who died of a stroke at 62 before being able to claim the prize that was rightfully his; plenty of slightly less outrageous omissions could be produced.) In order to minimize the probability of things like that happening, the Committee presumably favors very old researchers disproportionately (the reasoning being: X deserves the prize slightly more than Y does, but X is 43 and Y is 97, so we'd better give the award to Y while we still can). With the possibility of posthumous recognition, that would change. Cases similar to Professor Y's would become less "urgent," so more of those scientists would get pushed down the line until they did actually die. And that's a cost, since being recognized while you're alive is certainly better than being recognized when you're dead (though the latter is not worthless; the Nobel prize comes with a considerable amount of money, which you can leave to your loved ones). Another consequence would be that relatively younger and more deserving researchers would probably face shorter waits between finishing their Nobel-quality work and actually getting the prize.

Overall, I think having posthumous Nobels would be better, but the ultimate cost-benefit analysis depends on lots of details which I don't claim to know. And of course, the possibility of receiving the Nobel prize posthumously should be restricted to those researchers who were alive when the new rule came into effect. It wouldn't really be fair to contemporary scientists to have to all of a sudden face competition from Isaac Newton or David Ricardo.

Monday, October 5, 2009

A truly profound quote

From a somewhat unlikely source, Mike Tyson:
Everyone has a plan until they get punched in the face.
Eric Falkenstein calls it "inadvertently deep," which captures its essence perfectly. It seems to me that most of the famous quotes floating around are the exact opposite of that: trying very hard to appear clever, and failing miserably.

Monday, September 28, 2009

You're not transferring wealth if you're buying something with it

There's a weird meme going around (similar to the one I've written about before, that renting is throwing money away). This meme is: the U.S. is transferring wealth to oil-producing countries. That is, we're giving those countries lots of money. (One example can be found here; to be sure, though, I was not able to find sources proving that President Obama actually said this. I do remember Senator McCain saying it during the 2008 campaign. But, again, I don't have sources.)

Indeed, we are giving those countries lots of money. But we're quite obviously not transferring that money, because we're getting something in return. Oil, that is. Since we seem to value oil, this isn't really a transfer of wealth, but more of an exchange thereof.

I guess the lesson here is that those who rent an apartment or a house and pay to heat it are real chumps. Not only are they throwing away money on rent, they're also transferring the money they pay on their utility bills.

Monday, September 21, 2009

Preferences dressed up as beliefs

This discussion at OB got me thinking about preferences masquerading as beliefs. Let me explain what I mean. I am sure most of us have heard the following argument in favor of employer health insurance mandates: mandates are good not just for the employees but for employers as well, because workers with coverage are more productive and therefore businesses providing coverage do better in the marketplace.

There are two curious things about this argument. First, if used in order to convince people who don't already agree (for whatever reason) that employer mandates are desirable, it is a terrible rhetorical strategy (that's because the statement is 1) false and 2) patronizing towards the very audience you're claiming you want to persuade). Second, it seems to be used mostly by people who think employer mandates should be instituted for normative reasons; even if they were shown that mandates are bad for business owners, they'd still want them as policy.

Political debate is rife with arguments like that, and they're used equally often by the left and the right. (The Laffer curve comes to mind here.)

Here's my main point: arguments like the one above can't really be used as a means of persuasion; rather, they are a very elaborate way of preaching to the choir. But then there's another puzzle: as preaching to the choir goes, this seems an inefficiently elaborate way of doing it. If all you're doing is signaling to other people who prefer employer mandates that you prefer them too, why not just say you prefer employer mandates and leave it at that? The effect of saying either thing is the same: you're showing people with different preferences that you're not one of them. Why then the need to dress up your normative preference as a positive belief? I see two possible explanations, the applicability of each depending on how much people who use such arguments know about their own preferences. Both explanations involve two-dimensional signaling: you're signaling not just your preference, but some other quality as well.

First scenario: Suppose you're aware that your policy preference is a normative one. Now every once in a while in your social interactions you will have to signal that preference in order to locate like-minded people. But doing it each and every time by just saying "I think x is great policy" would soon get pretty boring. Instead, you might try and invent new and clever arguments for why everyone should believe that x is great policy. By doing that, not only are you showing that you support x but also that you're a very clever person. And since, by construction, you're not using those arguments to persuade non-believers, they don't have to actually work; all they have to do is appear clever.

Second scenario: Suppose you're not aware that your policy preference is normative. Instead, you honestly believe that employer mandates are good for employers, and you try to convince them of that fact, even though your argument can't possibly work on them. Why would you do that? Answer: to signal not only the direction of your preference but also its strength. That is, you're signaling which policy you prefer, and the fact that you're deceiving yourself about the true reasons you prefer it. Since self-deception carries real costs in forgone benefits, this signal is costly and therefore more credible than mere talk. Note that, for this to work, the listeners can't be aware that this is what's going on. Instead, they'd have to be programmed to perceive delusional arguments as inherently more trustworthy.

Even though it sounds more convoluted, I prefer the second explanation; here's why. If you know what your preferences are but express them as cleverly stated beliefs, then, on top of your preferences and your intelligence, you also inadvertently signal a certain degree of cynicism. Groups don't like cynics (even, or perhaps especially, the highly intelligent ones) because on average they're less trustworthy than non-cynics. So the inadvertent signaling of cynicism would probably defeat the message of group loyalty.

Addendum: One example of the "businesses that insure their employees do better in the marketplace" argument in use comes from San Francisco Supervisor Tom Ammiano.

Friday, September 18, 2009

All books are judged by the cover

This essay by Paul Graham contains two insights which are both surprising (at least to me) and undeniably true. 1) The publishing business is, and always has been, selling style, not content. Books are (and always have been) priced based on what the cover looks like, not on the quality of the information inside. 2) The iTunes store is making money not by selling songs to people, but by taxing them.

Graham's essay is a fantastic read. Based on content.

Saturday, September 12, 2009

Never say "never in a million years"

P=NP is probably one of the most famous outstanding problems in mathematics. Most researchers working on the problem believe that P does not equal NP, but no one has a proof. This post isn't about the problem itself, though, but about how wrong people can be about the strength of their own beliefs. Here's an anecdote from Richard Lipton:
I once had a long discussion with Ken Steiglitz about P=NP, while I was still at Princeton. Ken was and still is sure that P must not be equal to NP. Okay, I said to Ken, what are the odds that they are equal? Ken said that he thought that the odds were a million to one. I immediately suggested a bet. I did not ask him to "bet his life," but I did ask for a million to one bet. I would put up one dollar. If in say ten years P=NP had not been proved, then he would win my dollar. If P=NP was proved in that time frame, then I would win a million dollars from Ken. Ken said no way. After more discussion the best I could get out of Ken was 2 to 1. Two to one. That was the best he could do. Somehow that does not sound like a sure thing to me. Even a hundred to one was out of the question. Yet Ken was sure that they could not be equal.
Even factoring in things like risk aversion or utility not being linear in money, there's no way Steiglitz actually believes the odds of P=NP are a million to one. Apparently, people can deceive not only others but also themselves about the strength of their convictions. This is evident in politics, for example, where rhetorical force and the apparent sincerity of professed beliefs are often taken at face value. If politicians were required to bet on their beliefs, it would probably turn out that they aren't as sure about the validity of their favorite policies as they claim to be.
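
Incidentally, it's easy to compute what belief a given bet actually reveals. Here's a quick sketch in Python (the stakes are the ones from the anecdote; the break-even formula is just expected value set to zero):

# Ken wins lipton_stake dollars if P=NP stays unproved, and loses
# ken_stake dollars if it gets proved within the time frame. Accepting is
# rational only if his probability p of losing satisfies
#   (1 - p) * lipton_stake - p * ken_stake >= 0,
# i.e., p <= lipton_stake / (lipton_stake + ken_stake).
def break_even_p(lipton_stake, ken_stake):
    return lipton_stake / (lipton_stake + ken_stake)

print(break_even_p(1, 1_000_000))  # ~1e-6: the bet Ken refused
print(break_even_p(1, 100))        # ~0.01: also out of the question
print(break_even_p(1, 2))          # ~0.33: the best Ken would do

Refusing a hundred to one while accepting two to one puts Ken's revealed probability that P=NP gets proved somewhere between one in a hundred and one in three, which is a very long way from his stated one in a million.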

Friday, September 11, 2009

Collective action problems in team sports

Below the fold is a video that could be used in a classroom, as it contains an extremely vivid example of a collective action problem. It's a clip from an international soccer game, a qualifier for World Cup 2010 between Slovenia (team in white) and Poland (team in red). Slovenia won the game 3:0, and the clip shows the second goal.



It's a counter-attacking goal, but a very unusual one in that the defending team actually has the advantage in numbers (usually it's the other way around). As counter-attacks go, then, this one is fairly easy to defend against. But the defenders somehow never get around to it. Even someone who doesn't watch much soccer will probably get the impression that the defenders didn't do all they could to prevent the goal. One of the announcers says they're just too slow, not athletic enough. It's true that the Polish defenders aren't exactly demons of speed, but that explanation is way off: no one actually runs that slowly. What happened is that they got stuck in a four-player Prisoner's Dilemma. Each of them was thinking: "I'm not going to run faster, because if I did, I might actually catch up to the attackers, and then I'd be forced to make a defensive play, and if I screw that up, everyone will think the goal was my fault. There are three of my teammates close by; why doesn't one of them run faster and try to do something?" As a result, no one did anything.
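
For the game-theoretically inclined, here's a toy version of that four-defender dilemma in Python. All the numbers are invented for illustration: stopping the goal is worth 1 to each player, while chasing costs the chaser 2 (effort plus the risk of being blamed for a botched tackle):

# Payoff to one defender, given his own choice and how many teammates chase.
BENEFIT_IF_STOPPED = 1   # each player's share of preventing the goal
COST_OF_CHASING = 2      # effort + blame risk, borne by the chaser alone

def payoff(i_chase, n_others_chasing):
    goal_stopped = i_chase or n_others_chasing > 0  # one chaser suffices here
    benefit = BENEFIT_IF_STOPPED if goal_stopped else 0
    cost = COST_OF_CHASING if i_chase else 0
    return benefit - cost

for others in range(4):
    print(f"teammates chasing: {others}  "
          f"chase: {payoff(True, others):+d}  slack: {payoff(False, others):+d}")
# Slacking pays more no matter what the others do, so "nobody chases"
# is the equilibrium -- and the goal goes in.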

Team sports seem like a setting in which collective action problems just run rampant. I wonder what techniques coaches use to deal with them.

Sunday, September 6, 2009

You're not throwing money away if you're buying something with it

Whenever the topic of owning versus renting a house comes up, one oftentimes hears the following argument: owning is always better, because with a house you acquire an asset, whereas renting is basically just throwing money away. Of course, this is absurd: if you're a renter, your rent money is buying you something valuable, namely a roof over your head. You can only seriously claim that renting equals throwing money away if you place no value whatsoever on housing, in which case you should stop worrying about whether to rent or own and go live in a tent or something.

Curiously, this argument never seems to get applied to other rent-or-own decisions. Someone considering buying a car, for example, tends to say something like "I really need a car, and in the long run owning one will be cheaper than renting one whenever I need it," rather than "I need to buy a car because then I'll at least have something of my own instead of just throwing my money away on rentals."

Housing is a good just like any other; to most people, it is actually one of the most valuable ones. Why is it, then, that renting housing is perceived as a waste of money while renting other, less valuable things is not?

Friday, August 28, 2009

There are only 10 types of people in the world

...those who understand the binary numeral system, and those who don't. (Sorry about the old joke; I couldn't help myself.)

Matthias Wandel is most definitely one of those who do. He designed and built a calculator capable of adding 6-bit nonnegative binary integers (that is, integers between 0 and 63). There wouldn't be anything extraordinary about that were it not for the fact that the machine is made of wood and marbles. Below the fold is a video with some demonstrations; it's strangely mesmerizing.



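If you'd like to see the same logic in software, here is a minimal Python sketch of a 6-bit ripple-carry adder; as far as I can tell, this is essentially what the machine does mechanically, with each rocker storing one bit and an overflowing column passing a carry marble to the next:

def add_6bit(a, b):
    # Add two 6-bit integers bit by bit, propagating the carry.
    assert 0 <= a <= 63 and 0 <= b <= 63
    result, carry = 0, 0
    for bit in range(6):
        total = ((a >> bit) & 1) + ((b >> bit) & 1) + carry
        result |= (total & 1) << bit
        carry = total >> 1
    return result, carry  # carry == 1 means the sum overflowed past 63

print(add_6bit(42, 21))  # (63, 0)
print(add_6bit(63, 1))   # (0, 1): overflow, like running out of rockers
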
Thursday, August 27, 2009

Elaborate waste of human intelligence

Raymond Chandler once wrote of a particular chess problem that it was "as elaborate a waste of human intelligence as you could find anywhere outside of an advertising agency." Chandler may have been right in his day, but his observation no longer holds. Today, the most elaborate waste of human ingenuity is not designing chess problems that go nowhere or managing political campaigns. It's not even inventing new financial securities based on bundles of mortgages. It's pathological programming.

What's pathological programming? Normally, programming languages are designed to make it as easy as possible to get the machine to do what you want (given the complexity and range of tasks the language is meant to handle). Pathological languages are designed with the sole purpose of being as pointlessly bizarre and/or complicated as possible. That's not an easy task: it takes great intelligence to design a computer language to be as stupid as possible.

Perhaps the simplest program you can imagine is a "hello world" program: a bit of code that just prints the words "Hello, world!" on the computer screen. So in BASIC, a "hello world" program is simply

PRINT "Hello, world!"

whereas in R it would be

cat("Hello, world!\n")

And so on. How do pathological languages deal with this task? Well, one of them (called "Chef") has a syntax that requires programs written in it to look like cooking recipes; a "hello world" program written in Chef looks like this:

Ingredients.
72 g haricot beans
101 eggs
108 g lard
111 cups oil
32 zucchinis
119 ml water
114 g red salmon
100 g dijon mustard
33 potatoes


Method.
Put potatoes into the mixing bowl.
Put dijon mustard into the mixing bowl.
Put red salmon into the mixing bowl.
Put oil into the mixing bowl.
Put water into the mixing bowl.
Put zucchinis into the mixing bowl.
Put oil into the mixing bowl.
Put lard into the mixing bowl.
Put lard into the mixing bowl.
Put eggs into the mixing bowl.
Put haricot beans into the mixing bowl.
Liquify contents of the mixing bowl.
Pour contents of the mixing bowl into the baking dish.


Nice, huh? (In case you're wondering how it works: each ingredient's quantity is an ASCII code, with 72 standing for "H", 101 for "e", and so on; the Method pushes them onto a stack, so they pour out in reverse order, spelling "Hello world!".) Then try it in Homespring, a language with a syntax that forces programs to look like absurd poetry:

Universe of bear hatchery says Hello. World!.
It powers the marshy things
the power of the snowmelt overrides.


And here we come to the one pathological language that, to me, takes the cake, the true winner of the "most elaborate waste of human intelligence" prize: Malbolge. This language has a structure that is (purposefully) so unintuitive, so devoid of transparency and logic, that its own designer was unable to write a working "hello world" program in it. In fact, as noted by Mark Chu-Carroll (author of the blog I'm linking to in this post), the only way anyone managed to produce a "hello world" program in Malbolge was by designing a genetic algorithm that searched for one.

Before seeing Malbolge, I wasn't aware that such elaborate pointlessness was even possible.
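
I obviously can't walk through Malbolge itself here, but the search trick is easy to illustrate. Below is a toy Python sketch of the same mutate-score-select loop, evolving a plain string toward "Hello, world!" instead of evolving Malbolge code (in the real feat, as I understand it, the candidates were programs and the score was how close their output came to the target):

import random
import string

TARGET = "Hello, world!"
ALPHABET = string.printable[:95]  # letters, digits, punctuation, space

def fitness(candidate):
    # Number of characters already matching the target.
    return sum(c == t for c, t in zip(candidate, TARGET))

def mutate(candidate, rate=0.05):
    # Randomly resample each character with a small probability.
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in candidate)

best = "".join(random.choice(ALPHABET) for _ in TARGET)
generations = 0
while best != TARGET:
    generations += 1
    # Keep the fittest of the parent and 100 mutated offspring.
    best = max([best] + [mutate(best) for _ in range(100)], key=fitness)
print(f"hit the target in {generations} generations")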

Wednesday, August 26, 2009

Time for blood

Healthcare Economist notes that the recession has decreased the number of blood donations, and wonders why. A recession means more unemployment; when you're unemployed, the opportunity cost of your time is lower, so if you have any inclination to donate blood at all, you should be more willing to do it now.

I think it makes perfect sense that blood donations decreased, because donating blood is something you're much more likely to do when other people are watching you (and remembering whether or not you participated in the recent blood drive). Donating blood is less an individual imperative and more a social norm, and it's much harder to break a social norm when society is actually watching. In that respect, it's a bit like voting. I'm too lazy to look for actual statistics right now, but I bet that the employed turn out to vote at higher rates than the unemployed.

Unfortunately, the current drop in blood donations can't be taken as evidence for my "social pressure" argument, because I don't think the opportunity cost of donating is actually lower for the unemployed. Lots of employers (especially government agencies) provide ample comp time for donating blood, and, at least to me, a three-hour break is much more valuable as a break in a work day than as a break in a game of FIFA 2009 or something.

Someone should do an experiment.

Wednesday, August 19, 2009

Pet peeve: making up numbers

A few days ago, while channel-surfing, I accidentally landed on C-SPAN and watched one of the famous "Town Hall Meetings" about healthcare reform; that particular one was in Towson, Maryland. (I don't remember the name of the participating Senator.) At any rate, here's what stuck in my memory: a member of the audience asked a question about the costs of public insurance fraud. He said that the current costs of Medicare/Medicaid fraud are estimated at $20 billion, so what are we going to do when, after enacting the public option, those costs go up to $1 trillion?

Seriously: why is it that some people think that if their statement contains a number, it somehow becomes more credible, even if that number is quite obviously made up? Who in their right mind would believe that enacting the public option would multiply the costs of insurance fraud fifty-fold? (That's the jump from $20 billion to $1 trillion: a factor of 50, getting on for two orders of magnitude.) And since we're making this sh*t up as we go along anyway, why stop at a trillion? Why not a quintillion, or a googolplex, or $24.56?

The answer to the last question is probably that, if you're engaging in pretentious number-dropping, you'll want your number to sound "scary but not too scary" to someone stupid enough to think it has any justification to begin with.

Basketball possessions are like cars

Here's a fascinating paper making an analogy between basketball possessions and a well-known networking problem. The problem is this: suppose there are two cities, A and B, connected by two roads, one a highway and the other an alley shortcut. On the highway, a trip from A to B always takes ten minutes, no matter how many cars are on it at any given moment. The length of the same trip through the alley depends on how many cars are traveling with you: if you're by yourself, it takes one minute; if there are two cars, each travels for two minutes; if there are three, each travels for three minutes, and so on. Suppose also that there are ten cars, and drivers sequentially decide which road to take without knowing how many drivers are ahead of them in line. Then the pure-strategy Nash equilibrium is for all drivers to pick the alley, with all of them traveling for ten minutes. However, if a "central authority" could force five drivers onto the highway and five into the alley, total driving time would be reduced.
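
Checking the arithmetic is straightforward. A small Python sketch, using exactly the numbers from the example:

# Total driving time when k of the 10 cars take the alley (k minutes each)
# and the other 10 - k take the highway (10 minutes each).
def total_time(k, n_cars=10, highway_minutes=10):
    return k * k + (n_cars - k) * highway_minutes

for k in range(11):
    print(f"{k:2d} cars on the alley -> {total_time(k):3d} total minutes")
# The all-alley equilibrium costs 10*10 = 100 total minutes; the planner's
# 5/5 split costs 5*5 + 5*10 = 75, which is the minimum.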

Now the analogy in the paper is this: basketball possessions are like cars (the goal of each is to get from some starting point A to point B, B being the basket), and different possible plays are like different roads. Some roads have higher initial efficiency; for example, Kobe Bryant shoots better than Derek Fisher. However, as with the alley shortcut, that efficiency decreases with use: the more possessions end with Kobe shooting, the more heavily Kobe is defended against. So it is sometimes optimal for a team to have its best shooters shoot less than they actually do.

So far so good. However, like every game-theoretic argument about sports I've ever seen, this one turns on a crucial assumption: that maximizing the probability of winning is all sports teams care about. I don't think that assumption is true. Sure, Lakers fans want the Lakers to win; but they also want to see Kobe shoot a lot, and if Kobe shooting a lot somewhat decreases the probability of a Lakers win, well, that's just the price fans are willing to pay for a good show. So Kobe shooting a lot is not necessarily an inefficiency. The coaches probably know what they're doing: they're giving the audience what it wants.