Summary, Judgment

Citation Rankings and the Human Touch

Legal Scholarship · William Baude

I was pondering Adam’s post last Friday about measurement error in law school rankings, and then I thought about his posts earlier in the week about human v. computer judges and referees. I wonder whether those latter posts suggest the best approach to the citation/rankings problem.

Because citation rankings are both imperfect and transparent, they will be gamed in troubling ways. But they still provide important objective evidence that is missing from the current rankings system. Maybe the solution is this: Give the faculty citation counts to some humans, and ask them to use the citation counts to decide the scholarly rankings. We could do this with the current survey group for scholarly reputation at US News, or we could do it with a different group of people if we trusted them more for some reason.

The advantages are obvious. The human beings could average, generalize, or combine across multiple ranking systems, and could take into account considerations that raw counts leave out. They could make some of the tradeoffs Adam describes between junior and senior faculty. And they’d make it harder to game the rankings, because they’d be able to adjust for apparently strategic behavior.
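Purely as an illustration of what "combining across multiple ranking systems" might look like mechanically, here is a minimal sketch: standardize each system's counts so they are on a common scale, average them into a composite, and treat that composite as a starting point for human judgment rather than the final word. The school names, counts, systems, and equal weighting are all hypothetical, not a description of US News's method or anything Adam has proposed.

```python
# Toy sketch: combine hypothetical citation counts from several systems,
# then flag the result as a starting point for human review.
from statistics import mean, pstdev

# Hypothetical per-school citation counts from three different systems.
counts = {
    "School A": {"system_1": 310, "system_2": 280, "system_3": 295},
    "School B": {"system_1": 450, "system_2": 120, "system_3": 400},
    "School C": {"system_1": 200, "system_2": 260, "system_3": 210},
}
systems = ["system_1", "system_2", "system_3"]
schools = list(counts)

def z_scores(values):
    """Standardize a list of numbers so different systems are comparable."""
    mu, sigma = mean(values), pstdev(values)
    return [(v - mu) / sigma if sigma else 0.0 for v in values]

# Standardize each system's column, then average across systems per school.
standardized = {
    sys: dict(zip(schools, z_scores([counts[s][sys] for s in schools])))
    for sys in systems
}
composite = {s: mean(standardized[sys][s] for sys in systems) for s in schools}

# A human panel would inspect outliers before settling on a final ranking --
# e.g., School B's wide spread across systems might suggest strategic
# behavior or measurement error rather than genuine scholarly impact.
for school, score in sorted(composite.items(), key=lambda kv: -kv[1]):
    print(f"{school}: composite z-score {score:+.2f}")
```

The point of the sketch is only that the mechanical step is easy; the interesting work is the human adjustment layered on top of it.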

Of course, the problem is that the humans probably wouldn’t be objective enough, and that plenty of humans probably don’t agree that citation counts are all that relevant to scholarly quality, so they might refuse to cooperate in the project. Still, just like asking judges to use data to assign sentences, it might be the best we can do.