The calendar has turned to November, and that means college basketball is upon us. The season kicked off on Monday, November 4th, headlined by Gonzaga trouncing Baylor 101-63 in Spokane. This season is sure to be unlike any other before it. Off-season conference realignment has reshaped the college basketball landscape, and the ever-intriguing impact of NIL and the transfer portal never fails to provide excitement and some surprises. Similar to what I did for college football, I created a model to establish a full ranking of college basketball teams heading into the 2024-25 season.1
A Quick Overview of My Ratings
Essentially, a team’s pre-season rating is an aggregate of their pre-season ratings from Bart Torvik, KenPom, and EvanMiya, plus their final KenPom rating from last season regressed to the mean by approximately 20%, then scaled to an Elo range.2 I used the R package hoopR to help wrangle the data and develop the predictions and subsequent forecast.3 Because each of these rating systems uses a slightly different scale or metric, each rating was z-score normalized (to a mean of 0 and standard deviation of 1), averaged together, then scaled to an Elo range with a mean of 1505 and a standard deviation of approximately 171.4 The result was a pre-season Elo rating for each team.
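If you're curious what that aggregation step looks like in code, here is a minimal R sketch. The data frame `ratings` and its column names are placeholders rather than the actual objects from my script, but the normalize-average-rescale logic matches the description above.

```r
# A minimal sketch of the aggregation, assuming a hypothetical data frame
# `ratings` with one row per team and placeholder columns:
# team, torvik, kenpom, evanmiya, kenpom_last
# (kenpom_last = last season's rating, already regressed ~20% to the mean)
library(dplyr)

elo_mean <- 1505
elo_sd   <- 171

preseason_elo <- ratings |>
  mutate(
    # z-score each system so the different scales are comparable
    across(c(torvik, kenpom, evanmiya, kenpom_last),
           ~ (.x - mean(.x)) / sd(.x)),
    # average the normalized ratings, then rescale to an Elo-style range
    z_avg = (torvik + kenpom + evanmiya + kenpom_last) / 4,
    elo   = elo_mean + elo_sd * z_avg
  ) |>
  select(team, elo) |>
  arrange(desc(elo))
```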
The Ratings and Rankings
The table below gives my full pre-season ratings and rankings.
Houston tops the ranks, with two-time defending national champion UConn coming in at 8th after losing stars Stephon Castle, Donovan Clingan, Tristen Newton, and Cam Spencer to the NBA. Duke, led by Cooper Flagg, arguably the most highly touted recruit in over a decade, lands just behind Houston in the two-spot.
The Biggest Movers
The table below ranks the teams with the biggest improvement from their final KenPom ranking last season to their pre-season Elo rank in my projections.
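In code, the movement is just the difference between the two rank columns. A quick sketch, with placeholder data frame names rather than my actual objects:

```r
# A small sketch of the "biggest movers" calculation, assuming hypothetical
# data frames `preseason` (team, elo) and `kenpom_final` (team, kenpom_rank)
library(dplyr)

movers <- preseason |>
  mutate(elo_rank = rank(-elo)) |>
  inner_join(kenpom_final, by = "team") |>
  mutate(improvement = kenpom_rank - elo_rank) |>   # positive = climbed
  arrange(desc(improvement))
```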
Full Season Simulation and Forecast
Using those pre-season ratings, I simulated the season more than 2,000 times to forecast how it will unfold.5
Each team started the season with their pre-season Elo rating, as identified above. From there, I ran a Monte Carlo simulation of 2,000+ seasons, with each game decided by the win probability derived from the difference in Elo ratings between the two teams. Some adjustments were made to a team’s pre-game Elo: for instance, the home team received an additional 55 points of Elo, assuming the game was played on their home court. These ratings were run hot, meaning that within each simulated season, each team’s Elo was adjusted after every game based on the result. As a reminder, Elo is a closed system in which every point of Elo gained by the winner comes directly from the loser’s rating. Ratings were adjusted by a factor (the ‘K-factor’ in Elo jargon) of 32; the higher the K-factor, the more sensitive ratings are to recent outcomes.6 The home-court advantage adjustment and K-factor were based on research by FiveThirtyEight for their now-defunct NFL predictive model, with tweaks specific to college basketball, given that home-court advantage is a bit stronger in college basketball than in college football and a slightly higher K-factor may be appropriate.7 The result was a set of projected records and final Elo ratings for all Division I college basketball teams.
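To make the mechanics concrete, here is a rough R sketch of one simulated season. The `schedule` data frame, its columns, and the `elo` vector are placeholders, not the actual objects from my script, but the math (expected score from the rating gap, a 55-point home bump, a K-factor of 32, zero-sum updates) follows the description above.

```r
# A rough sketch of one simulated season, assuming a hypothetical `schedule`
# data frame with columns home_team, away_team, neutral_site (TRUE/FALSE),
# and a named numeric vector `elo` of pre-season ratings keyed by team name.
K        <- 32   # K-factor: how much a single result moves a rating
HOME_ADV <- 55   # Elo points added to the home team on its own court

simulate_season <- function(schedule, elo) {
  wins <- setNames(rep(0, length(elo)), names(elo))

  for (i in seq_len(nrow(schedule))) {
    home <- schedule$home_team[i]
    away <- schedule$away_team[i]

    # Home boost only when the game is actually on the home team's court
    adv  <- if (schedule$neutral_site[i]) 0 else HOME_ADV
    diff <- (elo[home] + adv) - elo[away]

    # Standard Elo expected score for the home team
    p_home <- 1 / (1 + 10^(-diff / 400))

    # Simulate the result, then update both ratings ("running hot"):
    # every point the winner gains comes straight out of the loser's rating
    home_wins <- runif(1) < p_home
    shift     <- K * (as.numeric(home_wins) - p_home)
    elo[home] <- elo[home] + shift
    elo[away] <- elo[away] - shift

    winner       <- if (home_wins) home else away
    wins[winner] <- wins[winner] + 1
  }
  list(wins = wins, final_elo = elo)
}

# Monte Carlo: repeat the whole-season simulation a couple thousand times
# sims <- replicate(2000, simulate_season(schedule, elo), simplify = FALSE)
```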
Who’s Going Dancing?
Based on these ratings and simulations, what can we predict the March Madness field to look like in 2025?
Automatic Qualifiers (Conference Champions):
At Large:
First 4 Out:
Florida
Minnesota
San Francisco
Dayton
Coming Up…
I am currently working on a predictive model for individual games, based on these Elo ratings. I hope to finish it in the next week or so, and if you subscribe, you will receive an article introducing the model (along with maybe some more insights as the season gets underway). These ratings will be updated regularly throughout the season and can be accessed on the new College Basketball Rankings tab on my Substack’s homepage.
West Georgia and Mercyhurst, which are transitioning from Division II, did not have the required inputs to model, so both were given the rating of the lowest-rated modeled team.
As determined by the correlation between a team’s final KenPom adjusted efficiency margin in a season and their final adjusted efficiency margin in the following season.
Saiem Gilani. hoopR: The SportsDataverse's R Package for Men's Basketball Data. Retrieved from https://hoopr.sportsdataverse.org
Based on research on other Elo rating systems, particularly that produced by WarrenNolan.com for college basketball.
Standard practice is generally 10,000 simulations; however, I found in my college football season prediction simulation that the model stabilized well before 10,000 iterations. Plus, given the number of games and teams in college basketball, it was taking a VERY long time!
Some research has suggested a higher K-factor is appropriate for college basketball compared to the K-factor of 30 I used for football.
Pollard, Richard, and Miguel Gomez. Comparison of Home Advantage in College and Professional Team Sports in the United States, Croatian Anthropological Society, 31 Dec. 2014, core.ac.uk/download/pdf/91970875.pdf.