The 2021 U.S. News Law School Rankings were just released. Typically the release of these rankings leads to a round of criticism, where people point out that the rankings incorporate the wrong things or that they don’t do a good job of measuring the things they do incorporate. Many of these criticisms are valid. But the fundamental problem with the U.S. News rankings is not what they measure or how they measure it. The fundamental problem is false precision.
Every year when these rankings are released, some law schools jump up in the rankings and some law schools fall down in the rankings. This kind of churn is good for the ranking’s publisher because changes — and the possibility of changes — generate attention. For example, Above the Law’s post on the new rankings has the headline: “The 2021 U.S. News Law School Rankings Are Here. Check out some of the largest rankings tumbles and gains. Yikes!”
But almost all of these changes are just noise. Most of the year-to-year movements occur not because anything meaningful has changed at the schools, but because of slight differences in a few variables. For instance, a few admitted students with lower LSAT scores can pull a school’s median down and trigger a drop in the rankings. This problem is exacerbated by the measurement error in many of the concepts (like academic reputation) that the rankings are trying to quantify.
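To see how much churn pure noise can generate, consider a toy simulation. This is not the actual U.S. News methodology; the quality scores and the amount of noise are made-up numbers chosen only to illustrate the point: if each school’s underlying quality never changes but each year’s measurement is slightly noisy, many schools will still “move” in the rankings from one year to the next.

```python
import random

random.seed(0)

# Hypothetical setup: 100 schools with fixed underlying quality.
# School 0 is "best"; each school is 1 quality point below the previous one.
true_quality = [100 - i for i in range(100)]

def observed_ranks(noise_sd):
    # Each year's published score is true quality plus random measurement
    # noise; schools are then ranked by the noisy score (rank 1 = highest).
    scores = [(q + random.gauss(0, noise_sd), i)
              for i, q in enumerate(true_quality)]
    scores.sort(reverse=True)
    ranks = [0] * len(true_quality)
    for rank, (_, school) in enumerate(scores, start=1):
        ranks[school] = rank
    return ranks

year1 = observed_ranks(noise_sd=2.0)
year2 = observed_ranks(noise_sd=2.0)

# Count schools whose published rank moved even though nothing real changed.
moved = sum(1 for a, b in zip(year1, year2) if a != b)
print(f"{moved} of 100 schools changed rank purely from noise")
```

Even with modest noise, a large share of schools change position between the two simulated years, and every one of those moves is meaningless.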
Given these problems, the law school rankings should be made less granular. It is misleading to claim any confidence in the exact position of a law school in a given year (e.g., “the University of Springfield Law School is the 35th best in the country”), but it may be possible to state a range with some confidence (e.g., “the University of Springfield Law School is between the 25th and 50th best in the country”). Or, to put it in statistical terms, it is a mistake to focus on the point estimate instead of the confidence interval.
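Extending the same toy simulation (again, hypothetical numbers, not the real methodology), we can make the point-estimate-versus-interval distinction concrete: resimulate the noisy measurement many times for a school whose true position is 35th, and report the range that contains most of its simulated ranks rather than a single number.

```python
import random

random.seed(1)

# Hypothetical setup: 100 schools with fixed underlying quality,
# 1 quality point apart; the target school's true position is 35th.
true_quality = [100 - i for i in range(100)]
target = 34  # index of the school whose true rank is 35

def one_year_rank(noise_sd=2.0):
    # One simulated year: add measurement noise, rank by noisy score,
    # and return the target school's published rank.
    scores = [(q + random.gauss(0, noise_sd), i)
              for i, q in enumerate(true_quality)]
    scores.sort(reverse=True)
    for rank, (_, school) in enumerate(scores, start=1):
        if school == target:
            return rank

# Middle 90% of the target school's ranks across 1,000 simulated years.
ranks = sorted(one_year_rank() for _ in range(1000))
low, high = ranks[49], ranks[949]
print(f"90% of simulations put the school between #{low} and #{high}")
```

The single-year rank bounces around, but the interval is stable; reporting the range is the honest summary of what the data can support.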
Of course, ranking law schools in buckets with actual dividing lines would involve trade-offs. For instance, it would take schools years to change buckets, and the costs of falling into a lower bucket would be higher. But a ranking based on buckets would present a more accurate picture of the world. And, as an added benefit, it would help end the embarrassing annual spectacle of administrators taking credit for minor gains or explaining away minor falls.