There’s been a lot of recent debate about ranking law schools based on their faculties’ citations. U.S. News and World Report has announced plans to incorporate citations into its overall ranking, and Paul Heald and Ted Sichelman have just released a new paper providing exactly that kind of ranking.
Both of these rankings rely on citation counts from HeinOnline. (Note: Heald and Sichelman also use SSRN downloads in their rankings.) As many have pointed out, relying on HeinOnline does not capture all of a law professor’s citations. Instead, it measures citations to articles published in HeinOnline by other articles published in HeinOnline. If an article published in the Fancy Law Review is cited 100 times by articles published in the Prestigious Law Journal, this isn’t a problem: HeinOnline would pick up all 100 citations. And because most law professors publish most of their scholarship in law reviews carried by HeinOnline, this isn’t a problem most of the time.
But it is a problem some of the time. For instance, if a law professor publishes a book that receives 100 citations, HeinOnline would not pick up any of them. So law schools with relatively more professors writing books are going to be ranked lower than they should be, simply because of how citations are measured for the new rankings. In other words, the proposed rankings have measurement error.
Of course, measurement error is a reality for anyone working with data, and researchers typically don’t get bent out of shape about it. Measurement error that is random might add noise to the results, but it won’t systematically favor anyone; in expectation, it washes out. And when the measurement error is non-random, researchers can simply explain to readers the ways the error is going to bias their results.
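To make the distinction concrete, here’s a minimal sketch in Python. Everything in it is invented for illustration (the faculty counts, the 30% undercount for book-heavy faculties, and the noise level are assumptions, not figures from either paper): it just shows that random error averages out across groups, while a systematic undercount does not.

```python
import random

random.seed(0)

# Hypothetical setup: 50 faculties with identical "true" scholarly impact
# of 100 citations each, so any gap between groups is pure measurement error.
true_impact = [100.0] * 50

# Random (mean-zero) error: each faculty's count is jostled up or down by
# luck. No faculty is systematically advantaged.
random_error = [t + random.gauss(0, 10) for t in true_impact]

# Non-random error: suppose (hypothetically) faculties 0-24 are book-heavy
# and the index misses 30% of their citations. The undercount cuts one way.
systematic = [t * (0.7 if i < 25 else 1.0) for i, t in enumerate(true_impact)]

def mean(xs):
    return sum(xs) / len(xs)

# Gap between the first 25 faculties and the last 25 under each error type:
print(mean(random_error[:25]) - mean(random_error[25:]))  # near 0: noise washes out
print(mean(systematic[:25]) - mean(systematic[25:]))      # -30.0: a persistent bias
```

The second gap doesn’t shrink no matter how many faculties you average over, which is exactly why non-random error is the dangerous kind.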
But a lot of researchers are getting bent out of shape about the measurement error in the potential U.S. News and World Report rankings. And I’m one of them. This is because non-random measurement error in rankings creates the potential for gamesmanship. If rankings systematically undercount the work of people who publish in books or in journals not indexed by HeinOnline, there will be less of a market to hire these scholars.
This problem is exacerbated by the fact that so many aspects of the U.S. News and World Report rankings are extremely sticky. Law school deans can’t snap their fingers and change the median LSAT scores and GPAs of the students that attend their schools; these things move very slowly over time. But deans can try, at the margins, to hire scholars with more HeinOnline citations. The result is that non-random measurement error in rankings will translate into distortions of the academic labor market. This will in turn distort our core mission: the production and dissemination of knowledge.
If you care about the ranking debates, Jonathan Masur and I recently posted a short paper on SSRN where we explain this concern and lay out a few others. You should also check out Paul and Ted’s own paper, where they explain the numerous steps they’ve already taken to reduce measurement error and lay out their plans to reduce it even further in the near future. And although I’ve got concerns about the measurement error in current citation rankings, I want to end by saying that Paul and Ted are being extremely thoughtful about how to produce rankings as transparently and accurately as possible.