Since the 1980s and '90s, a variety of computer rankings for college basketball have emerged. Their purpose is to evaluate teams in a concrete way: when the committee picks who will play in the NCAA tournament, it needs some way to sort all the teams in contention and decide which ones deserve a closer look.
Watching games can be a valuable tool, but the eye test only goes so far, largely because it's hard to judge a performance without accounting for the strength of the opponent. It's also impossible for the committee to watch every team play every game. The idea that metrics are important isn't particularly controversial, but which one to use has sparked real debate in recent years.
From 1982 to 2018, the NCAA officially used the Rating Percentage Index, or RPI, to assess teams. This fairly basic index rates teams on their wins and losses, weighted by how difficult their schedule was. It provides a decent baseline, but it has some serious flaws, which is why the NCAA no longer uses it.
The biggest issue is that it doesn't include margin of victory. If two teams play the same schedule and one team wins every game by 40 while the other wins every game by one, they'll be ranked the same, because their win-loss records against that schedule are identical. There's more to a team than just wins and losses, and RPI fails to account for it.
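The margin-blindness is easy to see in the formula itself. The standard RPI is a weighted blend of a team's winning percentage, its opponents' winning percentage, and its opponents' opponents' winning percentage (a minimal sketch, assuming the common 25/50/25 weighting; the NCAA later added home/away adjustments to the first term):

```python
def winning_pct(wins, losses):
    """Fraction of games won."""
    return wins / (wins + losses)

def rpi(wp, owp, oowp):
    """Standard RPI: 25% team winning pct (wp), 50% opponents'
    winning pct (owp), 25% opponents' opponents' winning pct (oowp).
    Note that margin of victory appears nowhere in the inputs."""
    return 0.25 * wp + 0.50 * owp + 0.25 * oowp

# Two teams with identical records against identical schedules get
# the same RPI, whether they won by 40 or by one.
blowout_team = rpi(winning_pct(25, 5), 0.55, 0.50)
squeaker_team = rpi(winning_pct(25, 5), 0.55, 0.50)
```

Since every input is a pure win-loss percentage, no amount of dominance on the scoreboard can separate two teams with matching results.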
On the other end of the spectrum are prediction metrics such as KenPom and the NCAA's new metric, NET. The idea behind these metrics is that there is an expected score for each game, derived from games that have already been played, and that expected score sets the baseline. Teams rise and fall based not necessarily on whether they win, but on how they perform against the expected margin. If a team is expected to win by 40 and wins by two instead, it goes down. If a team is expected to lose by 40 but loses by two, it goes up.
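The update logic behind this family of metrics can be sketched with a toy, Elo-style rating adjustment. This is only an illustration of the expected-vs-actual-margin idea, not KenPom's or NET's actual formulas, which are considerably more involved; the rating scale and the `k` factor here are made up:

```python
def update_rating(rating, expected_margin, actual_margin, k=1.0):
    """Move a team's rating by how it performed against expectation.
    A win that was closer than predicted still lowers the rating;
    a loss that was closer than predicted raises it."""
    return rating + k * (actual_margin - expected_margin)

# Expected to win by 40, won by only 2: rating falls despite the win.
after_narrow_win = update_rating(100, expected_margin=40, actual_margin=2)

# Expected to lose by 40, lost by only 2: rating rises despite the loss.
after_narrow_loss = update_rating(100, expected_margin=-40, actual_margin=-2)
```

The win/loss result itself never enters the update, which is exactly the property the next paragraph takes issue with.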
This has its benefits and improves on RPI because it gives a better sense of which teams are generally stronger. The issue, however, is with teams that are clutch, like last year's Providence Friars. They won a lot of close games, maybe closer than they should have been, but they still came out victorious the vast majority of the time. NET and KenPom hated the Friars, yet they kept winning. They proved the metrics wrong in the tournament by making the Sweet 16, performing better than any computer predicted they would.
Looking at all the metrics, there are clear flaws on both sides. What if there were a ranking system that was the best of both worlds? Not to say it doesn't have minor issues, but KPI is my pick as the ideal metric. The way it works is that for each game, a team gets a score from -1 to +1. If the team wins, the worst score it can get for the victory is 0 and the best is +1. If the team loses, the worst it can do is -1 and the best is just under 0. The game scores are averaged, and teams are ranked from highest to lowest mean score.
The reason this is ideal is that it neither punishes teams for winning close games against weaker opponents, nor rewards teams for keeping losses closer than expected. You earn a portion of a point for a win based on how good a win it was, and you lose a portion of a point based on how bad a loss it was. It values wins and losses against strength of schedule like RPI, but accounts for margins and game location like KenPom or NET.
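The scoring scheme described above can be sketched as follows. To be clear, the real KPI formula is not public, so this is a hypothetical stand-in: the single `opponent_quality` input (on a 0-to-1 scale) is my placeholder for whatever blend of schedule strength, margin, and location KPI actually uses:

```python
def game_score(won, opponent_quality):
    """Score one game on the -1 to +1 scale described above.
    opponent_quality in [0, 1] is a hypothetical stand-in for KPI's
    real adjustments (schedule strength, margin, location)."""
    if won:
        return opponent_quality        # wins land in [0, +1]
    return -(1.0 - opponent_quality)   # losses land in [-1, 0]

def kpi_rating(results):
    """Average the per-game scores; teams rank by this mean."""
    scores = [game_score(won, q) for won, q in results]
    return sum(scores) / len(scores)

# A quality win scores near +1; a loss to the same opponent costs
# only a little, since losing to a strong team is forgivable.
season = [(True, 0.9), (True, 0.4), (False, 0.8)]
```

Note the key property: a win can never score below 0 and a loss can never score above it, so no team is punished for winning or rewarded for losing.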
It can spit out scores that seem a bit odd sometimes, but in general, KPI works well. One example: UConn lost only .15 points for their home loss to St. John's, a number that should have been a fair bit closer to -1. Regardless, all computers have their blips, and this is by far the best option, which is why the NCAA should adopt it as its primary metric.