Behind the screen

Blink is a wonderful book. So wonderful that I tend to think that every good story I ever read about bias, decision making, and a number of other topics must have been in there. I just spent several hours trying to find one of those stories. Unfortunately, I couldn't – but that is a story for another day. Fortunately, I found another one that is just about as good.

People are biased. They are biased toward certain genders, or ethnic groups, or nationalities, or heights, or hair colors. Orchestra conductors appear to be especially biased, maybe because of the power they hold. In particular, many conductors appear to have very precise ideas about which people can play which instruments and which ethnic groups can play which compositions. As Gladwell tells it (pages 245–250), the use of screens to hide the players during auditions has resulted in a dramatic increase in the number of female musicians in top orchestras.

Recently, I read a wonderfully witty commentary in The Guardian written by Phil Ball. The commentary is “about the h-index, a number that supposedly measures the quality of a researcher’s output.” (You should not miss all the double entendres). As Phil writes, the h-index “imposes a one-size-fits-all view of scientific impact. There are many other potential faults: young scientists with few publications score lower, however brilliant they are; the value of h can be artificially boosted – slightly but significantly – by scientists repeatedly citing their own papers; it fails to distinguish the relative contributions to the work in many-author papers; and the numbers can’t be compared across disciplines, because citation habits differ.”
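The "young scientists with few publications score lower" fault follows directly from the index's definition: a researcher has index h if h of their papers each have at least h citations, so h can never exceed the number of papers published. A minimal sketch in Python (the function name and the sample citation counts are illustrative, not from any real researcher):

```python
def h_index(citations):
    """Return the h-index: the largest h such that h papers
    each have at least h citations."""
    ranked = sorted(citations, reverse=True)  # most-cited first
    h = 0
    for rank, count in enumerate(ranked, start=1):
        if count >= rank:  # the rank-th paper has at least rank citations
            h = rank
        else:
            break
    return h

# A brilliant early-career researcher with two heavily cited papers
# is still capped at h = 2, while a steady publisher scores higher.
print(h_index([100, 90]))        # two papers, hundreds of citations
print(h_index([10, 8, 5, 4, 3])) # five modestly cited papers
```

Running this prints 2 for the first profile and 4 for the second, which is the distortion Phil is pointing at: the metric rewards volume as much as it rewards impact.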

Phil also gives voice to the worry expressed by many that the “h-index is part of a wider trend in science to rely on metrics – numbers rather than opinions – for assessment. For some, that’s like assuming that book sales measure literary merit. It can distort priorities, encouraging researchers to publish all they can and follow fads (it would have served Darwin poorly). But numbers aren’t hostage to fickle whim, discrimination or favouritism. So there’s a place for the h-index, as long as we can keep it there.”

My personal opinion is that much is said about people not wanting to be measured by a single number, or about how nothing compares to review by one’s peers. From where I sit, those worries are slight compared to my worries about the biases of the peer reviewers and the rarely acknowledged error bars on such evaluations. Yes, the h-index has limitations; yes, its properties are still not well understood; yes, it can be gamed somewhat. But at least I can use it to check my own biases.

— Luis Amaral