2026 March Madness

Click the link below for a list of rankings featuring only models and formulas that I have created. This page will serve both as an index, for learning a bit of background about each model, and as a discussion area where I will write about some interesting notes and my takeaways.

CLICK HERE FOR SPREADSHEET WITH RANKINGS

Index:

Karn Composite: This is simply the average rank of a team across all of my models and formulas. This model is in its 1st Year. It was inspired by the Massey Composite of computer rankings.
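Since the Karn Composite is just a per-team average of ranks across models, it can be sketched in a few lines. A minimal sketch, with made-up model names and ranks (1 = best):

```python
# Hypothetical example: each model maps team -> rank (1 = best).
model_ranks = {
    "kWins (50/50)": {"Team A": 1, "Team B": 3, "Team C": 2},
    "XPoint":        {"Team A": 2, "Team B": 1, "Team C": 3},
    "FACTOR":        {"Team A": 1, "Team B": 2, "Team C": 3},
}

def karn_composite(model_ranks):
    teams = next(iter(model_ranks.values())).keys()
    # Average each team's rank across every model, then sort best-first.
    avg = {t: sum(m[t] for m in model_ranks.values()) / len(model_ranks)
           for t in teams}
    return sorted(avg.items(), key=lambda kv: kv[1])

print(karn_composite(model_ranks))  # Team A first at 1.33
```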

kWins (50/50): To learn about kWins, you can click here. The reason there are three different versions of kWins is how the SOR Diff portion is calculated. In 50/50, an average team is expected to win 50% of its games in each quadrant. If Team A plays 4 games in each quadrant, the 50/50 calculation of the SOR portion would expect an average team to win 2 of the 4 games in each quadrant. This model is in its 2nd Year. It correctly predicted Florida to win the National Championship in its maiden season in 2025.

kWins (Gradient): The difference between Gradient and 50/50 is how the SOR Diff portion is calculated. In Gradient, a team is expected to win 25% of Q1 games, 50% of Q2 games, 75% of Q3 games, and 100% of Q4 games. This model is in its 1st Year.
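The two SOR Diff baselines can be sketched side by side. This is a minimal illustration of the expected-wins math as described above, with a hypothetical schedule of 4 games per quadrant:

```python
# Expected win rate for an average team, per quadrant, under each baseline.
FIFTY_FIFTY = {1: 0.50, 2: 0.50, 3: 0.50, 4: 0.50}
GRADIENT    = {1: 0.25, 2: 0.50, 3: 0.75, 4: 1.00}

def expected_wins(games_by_quadrant, baseline):
    # games_by_quadrant: number of games played in Q1..Q4.
    return sum(n * baseline[q] for q, n in games_by_quadrant.items())

games = {1: 4, 2: 4, 3: 4, 4: 4}  # hypothetical 16-game schedule
print(expected_wins(games, FIFTY_FIFTY))  # 8.0 (2 wins per quadrant)
print(expected_wins(games, GRADIENT))     # 10.0 (1 + 2 + 3 + 4)
```

A team's actual wins minus this baseline would then feed the SOR Diff portion.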

kWins (AL10N): This model is the same as kWins Gradient except each team's wins in its last 10 games are adjusted based on conference strength. AL10N stands for Adjusted Last 10 New. The "New" is there to remind me that its base is kWins Gradient, not the original kWins (50/50). It also helps that it now looks like the word "alien." When you read this, you can call it kWins Alien; that's how I say it in my head. This model is in its 1st Year.

kWins tSOR: This is the Strength of Record portion of kWins. Teams are rewarded for playing (and winning against) higher quality opponents, as calculated by RPI Quadrants. Although I calculated it last year, it was not used to create a bracket, so this model is in its 1st Year.

kWins Per Game: This is simply a team's kWins (50/50) divided by the number of games played. This model is in its 1st Year.

XPoint Standard SOS: This is both one of my favorite and one of my oldest models. If you want to learn about its specifics, you can go to my Previous Work page and read my College Basketball Research Articles. It was the first model I created that set out to predict the final score of a given game. The difference between XPoint Standard SOS and XPoint ptSOS is the way the Strength of Schedule portions are calculated. The SOS portion in this model is one I created myself, where I gave coefficients to each team's number of games in the 4 RPI quadrants. This model is at least in its 6th Year. Though it has slightly evolved over the years, I treat it as the same model. Its first appearance would have been the 2019-2020 tournament had it not been cancelled.

XPoint ptSOS: This has the same base as the XPoint described above. The only difference is the Strength of Schedule in this model is created from the kWins SOR formula. Instead of a given team's Strength of Record based on its actual record, it uses what I call a "perfect team's SOS" (ptSOS), which assumes the team wins all of its games. This model is in its 2nd Year.

XPoint ptSOS Def: This model uses the same SOS as the one above but there is actually a difference in the base of the formula. A team’s 2pt% and 3pt% allowed are factored in to more accurately predict their opponent’s shooting percentage in the game. This model is in its 2nd Year.

FourPoint: This is an offshoot of the XPoint, except it uses per 100 possession stats instead of game averages, and it uses 4 scoring categories instead of 3, splitting 2-pointers into Near Proximity and Mid Range. It uses statistics from Haslametrics. This model is in its 1st Year.

AFT: This is the oldest model I have on record. It made its first appearance in the 2018-2019 season. I believe that XPoint is older, but I do not have proof of that because the first few years of research were saved on a high school email's Google Drive. This model is actually only in its 5th Year, due to it not appearing from 2020-2022. You can also learn about this model on my Previous Work page.

AFT Matchup: This model is simply a matchup-based version of AFT, meaning each team's numbers change based on its opponent. It uses the same base stats. This model is in its 2nd Year.

TOA Margin: This is a new model, in its 1st Year, and it took me about two years to figure out how to make it. Then I came across the site Haslametrics, and it was exactly what I was looking for. TOA stands for Three Outcome Approach, and I felt that it was the way teams should be trying to optimize their offense. The three outcomes are free throws, easy baskets (dunks, layups, tip-ins), and three-pointers. The TOA Margin is calculated by subtracting how many TOA points a team allows from how many they score. It is used as a ranking system, similar to KenPom’s Net Efficiency Rankings.
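The margin itself is a simple difference of totals. A minimal sketch with hypothetical point totals (the real inputs come from Haslametrics data):

```python
def toa_points(ft_pts, easy_pts, three_pts):
    # TOA points: free throws + easy baskets (dunks, layups,
    # tip-ins) + three-pointers.
    return ft_pts + easy_pts + three_pts

def toa_margin(scored, allowed):
    # scored/allowed are (ft, easy, three) point tuples, per game
    # or per season average.
    return toa_points(*scored) - toa_points(*allowed)

print(toa_margin((15, 20, 27), (12, 18, 21)))  # 62 - 51 = 11
```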

TOA Matchup: This model, using the same base as above, is different in that it simulates a game between two teams, rather than one team’s own TOA Margin. This formula can predict the amount of TOA points in any given game. This model is in its 1st Year.

4 Way Game Score: This is one of my most theoretical and mathematical formulas. I grade each game played using a formula built from the game's margin of victory and what the average team's margin of victory/defeat would have been. This model uses KenPom's efficiencies and tempos. The scale for grades is roughly -12 to 12. This model is in its 1st Year.

FACTOR: This model is based on the Four Factors of Basketball: EFG%, TO%, Off Reb%, and FT/FGA. It uses each team's statistics in these four categories to predict which team will win a given game. It is categorical, so it does not matter how much better a team is in a certain category, only which team "wins" it. If each team "wins" two factors in a game, the higher seed advances; if they are the same seed, the team with the higher sum of the Z-Scores in the four factors advances. This model is in its 2nd Year.
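The categorical rules above can be sketched directly. The stat values, team dicts, and the `zsum` field below are hypothetical, and I am assuming TO% is treated as lower-is-better:

```python
# The Four Factors; for TO% a lower value is better, for the rest higher is better.
FACTORS = ["efg", "to", "oreb", "ft_fga"]
LOWER_IS_BETTER = {"to"}

def factor_winner(a, b):
    """Return the team predicted to advance under the categorical rules."""
    a_wins = sum(
        (a[f] < b[f]) if f in LOWER_IS_BETTER else (a[f] > b[f])
        for f in FACTORS
    )
    if a_wins != 2:                      # 3-1 or 4-0 either way
        return a if a_wins > 2 else b
    if a["seed"] != b["seed"]:           # 2-2 split: better (lower) seed advances
        return a if a["seed"] < b["seed"] else b
    return a if a["zsum"] > b["zsum"] else b  # same seed: higher Z-Score sum

# Hypothetical teams: team_a wins EFG% and FT/FGA, team_b wins TO% and Off Reb%.
team_a = {"seed": 3, "zsum": 1.2, "efg": 0.55, "to": 0.17, "oreb": 0.31, "ft_fga": 0.27}
team_b = {"seed": 6, "zsum": 0.4, "efg": 0.52, "to": 0.15, "oreb": 0.33, "ft_fga": 0.25}
print(factor_winner(team_a, team_b)["seed"])  # 2-2 split, so the 3 seed advances
```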

ZSCORE SUM: This model is different from the sum of the Z-Scores described above. The idea of summing Z-Scores in a model first appeared in my research for the 2019-2020 season, but the tournament was cancelled. In the 2020-2021 tournament, I used a Z-Score Sum model, but it had different statistical categories than the current version. The current version of this model is in its 4th Year. The five categories selected in this model span many different aspects of the game: offense, defense, resume, etc.

MATHFACTOR: This model uses 3 of the 4 Factors, described above, but it uses their raw values rather than their rankings. It also uses RPI's SOS portion to help solve the "good teams in bad conferences" bias. I am almost always trying to solve that. This model is in its 1st Year.

BC-Esque: This model is based on the forever elegant BCS Rankings from College Football. To read a little about them, and why we should go back to using them, you can click here to read my article. This is called BC-Esque because it is my version of the same ranking format. When the bracket is released and the 68 teams are selected, the NCAA also releases its 1-68 ranking of the teams, which it calls the 1-68 seed list. This will serve as 1/3rd of the rankings. The computer rankings portion will use 6 of my favorite computer rankings; the highest and lowest of the 6 rankings for each team will be dropped, and the average of the middle 4 rankings represents another 1/3rd. The final 1/3rd will come from my personal 1-68 rankings. This is something I have never attempted before, as I have always stayed away from adding non-statistical influences into my models. Also of note: I did not use any of my own models for the computer rankings portion, as I felt that might increase my personal influence on the result. This model is in its 1st Year.
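The three equally weighted thirds, with the trimmed computer-rankings average, can be sketched as follows. The ranks in the example call are hypothetical:

```python
def bc_esque(seed_list_rank, computer_ranks, personal_rank):
    """BC-Esque score for one team (lower is better).
    seed_list_rank: the NCAA's 1-68 seed list position.
    computer_ranks: the team's rank in the 6 computer systems.
    personal_rank:  my personal 1-68 rank."""
    trimmed = sorted(computer_ranks)[1:-1]       # drop highest and lowest
    computer_avg = sum(trimmed) / len(trimmed)   # average the middle 4
    return (seed_list_rank + computer_avg + personal_rank) / 3

# Hypothetical team: seed list #5, six computer ranks, personal rank #4.
print(bc_esque(5, [3, 4, 6, 7, 9, 20], 4))  # middle 4 average is 6.5
```

Note how trimming protects the average from one outlier system (the 20 above is simply dropped).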

Beatles (Poll): I sent a question out to my friends and asked, "What is the most important thing in predicting which team will win a given NCAA Tournament game?" I got lots of different answers; in fact, no two people gave the exact same answer. Some answers were resume-based: Close Game Win%, Average Scoring Margin, and Q1 wins. Some were more focused on defense: fouls committed per game and opponent turnover%. And finally, some were more focused on team-building, such as D1 experience and Minutes Continuity. These are just seven of the twelve statistics used in this formula. It is called Beatles because of their song: With a Little Help from My Friends. This model is in its 1st Year.

Solidity Score and Fraudulent Formula: The Fraudulent Formula is something I created to try to mathematically determine which teams were likely to be upset early in the tournament. I went through the last 4 years of teams (seeded 1-6) that were upset in the 1st Round and looked for what they were weak in. I found 8 statistics that appeared multiple times throughout this process. As is the case with many of the other models, the stats found here cover all sorts of team aspects. There is a threshold per seed, and if a team’s fraudulent score is higher than its seed threshold, an upset is predicted to occur. The Solidity Score takes the Fraudulent Score and combines it with Haslametrics’ Rating Quality (RQ) to give a score of how solid a team is, meaning they are less likely to be upset. These models are in their 1st Year.
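The per-seed threshold check is simple to sketch. The threshold values below are made up purely for illustration; the real numbers live in the spreadsheet:

```python
# Hypothetical thresholds per seed (1-6); the real values differ.
SEED_THRESHOLDS = {1: 5.0, 2: 4.5, 3: 4.0, 4: 3.5, 5: 3.0, 6: 2.5}

def predicts_upset(seed, fraudulent_score):
    # An upset is predicted when a team's Fraudulent Score exceeds
    # the threshold for its seed line.
    return fraudulent_score > SEED_THRESHOLDS[seed]

print(predicts_upset(3, 4.2))  # 4.2 > 4.0, upset predicted
print(predicts_upset(1, 4.2))  # 4.2 < 5.0, no upset predicted
```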

Upset Indicators: This is one of my favorite models, and it has proven to be quite helpful when picking a champion or teams to go far in the tournament. This is a very simple model, in that it does not rank teams, but rather looks at a small resume and predicts whether the team will lose in the first two rounds. It uses what I call "red flags." If a team has two or more red flags, then it is predicted to lose in the opening weekend (Rounds 1-2). This is only applicable to teams seeded 1-6. This model is in its 7th Year, having begun in the 2018-2019 season. In those 6 years, it has correctly predicted 27 of 37 teams to lose in opening weekend. No team that has been flagged has ever appeared in the National Championship game. Go to my research articles in Previous Work to learn more about this model.
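The two-flag rule reduces to a count. A toy sketch; the flag names here are invented, as the actual red-flag criteria are described in my research articles:

```python
def predicted_opening_weekend_loss(red_flags):
    # Teams seeded 1-6 with two or more red flags are predicted
    # to lose in Rounds 1-2.
    return len(red_flags) >= 2

# Hypothetical flag lists for two teams.
print(predicted_opening_weekend_loss(["poor FT%", "weak Q1 record"]))  # True
print(predicted_opening_weekend_loss(["poor FT%"]))                    # False
```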