NRC Rankings have arrived

(Revised Oct. 3, 2010)

The long-awaited NRC rankings have arrived and, as expected, much controversy has ensued. To avoid suspense, I will state up front that I am convinced that someone has finally gotten the rankings right. The notion espoused by the 1995 NRC rankings or the annual USN&WR rankings that one can extract infinitely precise rankings from noisy, unreliable data is preposterous. Fortunately, the current NRC ranking exercise “embraces” the uncertainty and performs a well-grounded statistical analysis.

The NRC ranking exercise considers two main overall ranking schemes. The Survey-based ranking asks a set of faculty in each discipline to select and weight the criteria most important for ranking a department in that discipline. Those weights are then used to score all the departments in the system.

The Regression-based ranking asks a set of faculty in a discipline to score departments in that discipline and then uses those scores to estimate the weights of the variables the raters are implicitly using. The two methods produce remarkably similar rankings, at least in chemical engineering (1).
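The contrast between the two schemes can be sketched in a few lines of code. Everything here is hypothetical for illustration: three made-up criteria, random department data, and invented weights; the actual NRC methodology uses many more variables and a more elaborate model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 10 departments measured on 3 criteria
# (e.g., publications per faculty, citations, awards) -- illustrative only.
X = rng.random((10, 3))

# Survey-based: faculty-chosen weights are applied directly to the criteria.
survey_weights = np.array([0.5, 0.3, 0.2])
survey_scores = X @ survey_weights

# Regression-based: raters score departments directly; least squares then
# recovers the weights implicit in those scores.
rater_scores = X @ np.array([0.45, 0.35, 0.2]) + rng.normal(0, 0.01, 10)
implied_weights, *_ = np.linalg.lstsq(X, rater_scores, rcond=None)

# Rank departments under each scheme (rank 1 = highest score).
survey_rank = survey_scores.argsort()[::-1].argsort() + 1
regression_rank = (X @ implied_weights).argsort()[::-1].argsort() + 1
```

When the raters' implicit weights resemble the survey weights, as in this toy setup, the two rankings come out nearly identical, which mirrors the agreement reported for chemical engineering.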

So what do the new rankings tell us? First and foremost that size is not as important as one would surmise, even using the Regression-based ranking. Indeed, two of the top five departments (Caltech and UCSB) are on the small end of the spectrum.

Second, there are five departments at the top whose standing is essentially indistinguishable: Caltech, MIT, UC Santa Barbara, UT-Austin and UC Berkeley.

Third, if one wishes to coarse-grain the distinctions further and attempts to identify distinct groups of schools, then one could hypothetically identify three tiers covering the top 15 departments. However, since anything other than “top tier” is read as “not so good,” I will instead speak of Outstanding and Excellent programs. All Outstanding programs have 95% of their estimated ranks within the top 7. The Excellent programs have more than 5% of their estimated ranks within the top 10 and 95% of their estimated ranks within the top 20.

Interestingly, while programs in the Outstanding group seem quite similar in their distributions of ranks, the Excellent group is less homogeneous and one might even be able to “see” two subgroups. However, the difference between the Outstanding programs and the top of the Excellent group is, in my view, larger than between the top and bottom of the Excellent group.

Indeed, note where the modes of the distributions lie for the four institutions in the figure. For UT-Austin the mode is very close to 1, while for the remaining three institutions (Minnesota, CMU, and Northwestern, from left to right) the mode lies in the range 6 to 7.

So where does Northwestern rank? In my view, we are in that Excellent tier of schools, meaning that we are definitely within the top 12 programs in chemical engineering and, coarsely speaking, tied for #6. More importantly, however, and contrary to what the outdated reputational rankings of USN&WR would have us believe, there isn’t much difference between the quality of our program/faculty and that of the programs at Stanford, Princeton, Minnesota, or Wisconsin-Madison.

— Luis Amaral

(1) Note that the data are resampled 500 times and the ranks of all the programs are recomputed for each resampling, thus producing a distribution of ranks for each program. The figure shows the 5th to 95th percentiles of this distribution for each program, yielding a 90% confidence interval.
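The resampling procedure described above can be sketched as follows. The data are entirely made up (8 hypothetical departments, 20 hypothetical raters); only the mechanics match the description: resample 500 times, recompute the ranks each time, then take the 5th and 95th percentiles of each program's rank distribution.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical: 20 raters each score 8 departments (illustrative data only).
n_raters, n_depts = 20, 8
true_quality = np.linspace(10, 3, n_depts)
ratings = true_quality + rng.normal(0, 2.0, (n_raters, n_depts))

# Resample the raters with replacement 500 times, recomputing the ranks
# each time; this yields a distribution of ranks for every department.
n_boot = 500
ranks = np.empty((n_boot, n_depts), dtype=int)
for b in range(n_boot):
    sample = ratings[rng.integers(0, n_raters, n_raters)]
    mean_scores = sample.mean(axis=0)
    ranks[b] = mean_scores.argsort()[::-1].argsort() + 1  # rank 1 = best

# 5th-95th percentiles of each rank distribution: a 90% confidence interval.
lo = np.percentile(ranks, 5, axis=0)
hi = np.percentile(ranks, 95, axis=0)
```

Departments with overlapping intervals, like the top five in the NRC results, cannot be meaningfully distinguished, which is precisely the point of reporting ranges rather than single ranks.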