A group of us at Summit were interested in how national championship results might have changed, and what four-team playoffs might have looked like, if the College Football Playoff system had been in place during the BCS era. Basically, we want to provide yet another method for college football fans and institutions to dispute national championships. So we've outlined our prior-year playoff simulation in this article. We ran the simulation 1,000 times per season to determine how teams would have performed in a CFB playoff, had such a playoff existed during the BCS era.
Determining Which Teams the Committee "Selects"
The first hurdle we needed to clear was our committee selection criteria. We know that the committee did not use advanced metrics to evaluate playoff teams, so we had to use more "traditional" criteria to determine whether a team would make the playoff. We started by broadly defining our pool of playoff contenders as the top 10 BCS-ranked teams in each year. Then we determined which teams we thought would always make the playoff. We made these teams "immune" to any subjective evaluation and assigned them a 100% chance of being selected. We assumed that any team exhibiting at least one of the following traits would automatically be included in the playoff:
- Undefeated teams from major conferences (to include Notre Dame);
- Major conference champions ranked 2nd or better in both the AP and Harris polls;
- Major conference champions ranked no lower than 3rd in the BCS computer models, the AP poll, and the Harris poll; or
- Undefeated teams ranked in the top 4 of every poll.
In the past few years, two to three teams have hit the "automatic inclusion" category each year. When we look at 2014, that pretty much makes sense: Alabama, Oregon, and Florida State were no-brainers for the playoff. For our analysis, going back to 2005, 2009 was the only year in which more than three teams hit our automatic selection criteria (there were five "automatic" teams that year, so we made minor adjustments to our analysis so that all five had some chance of making the playoff). After identifying our automatic qualifiers, we ranked them by designating the team that hit the most automatic-inclusion criteria as the #1 team; any other teams that triggered the criteria were ranked behind it according to how many of the criteria they hit.
For the remaining teams that may be in contention for a playoff spot, we had to come up with a model to simulate committee selection. We assumed that a team's BCS ranking would be a key indicator of their possible inclusion in a playoff. However, we wanted to introduce some randomness to the selection process in order to account for the uncertainty of any one committee member's vote.
To mimic the voting process, which is anchored by the BCS score but also subject to the judgment of individual committee members, we drew 13 samples from the set of top 10 teams, with each sample representing the vote of an individual committee member. We simplified things a bit by assuming that committee members weren't producing a ranking; they were simply deciding whether or not a team deserved to be in the playoff. We wanted our committee members to select teams with higher BCS scores more frequently, but we also wanted scenarios in which lower-ranked teams were selected, so that we could better model selection variability. To accomplish this, we took the BCS scores of each team that failed to meet our automatic selection criteria and built "selection probabilities" for these teams, proportional to their final BCS scores. In the long run, this simulation selects the teams with higher BCS rankings more frequently, but it introduces enough variability that teams with lower BCS ranks were still sometimes selected for our synthetic playoffs.
As a final note, in a typical season two teams hit our automatic selection criteria. Our synthetic committee selected the last two playoff slots by first "selecting" a "third seed" from the remaining eight teams. Then, the "third seed" team was removed from the pool, and we rescaled the selection probabilities of the remaining teams to allow the committee to "select" a "fourth seed" for the playoff.
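The select-then-rescale procedure above can be sketched as follows. This is a minimal illustration with made-up BCS scores; `select_remaining_seeds` is our own hypothetical helper, not the authors' actual code:

```python
import random

# Hypothetical BCS scores for the eight non-automatic contenders in a season
# (illustrative values only, not actual BCS data).
bcs_scores = {
    "Team C": 0.88, "Team D": 0.81, "Team E": 0.74, "Team F": 0.69,
    "Team G": 0.61, "Team H": 0.55, "Team I": 0.48, "Team J": 0.40,
}

def select_remaining_seeds(scores, n_seeds=2, rng=random):
    """Pick the remaining playoff seeds one at a time, with each team's
    selection probability proportional to its BCS score. After each pick,
    the chosen team is removed and the probabilities are rescaled."""
    pool = dict(scores)
    seeds = []
    for _ in range(n_seeds):
        teams = list(pool)
        total = sum(pool.values())
        weights = [pool[t] / total for t in teams]  # proportional to BCS score
        pick = rng.choices(teams, weights=weights, k=1)[0]
        seeds.append(pick)
        del pool[pick]  # remove the pick; weights rescale on the next pass
    return seeds

seed3, seed4 = select_remaining_seeds(bcs_scores)
```

Run many times, this reproduces the intended behavior: higher-scoring teams fill the third and fourth seeds more often, but lower-scoring teams still occasionally sneak in.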
In the tables below, we list each team's probability of being selected for the playoff in each of the last four BCS seasons. These committee selection probabilities help us determine a number of different win probabilities for each potential playoff team.
2013 Season
| Team | % Chance of Being in Playoff |
| --- | --- |
| Florida State (Actual Champion) | 100% |
| Auburn | 100% |
| Alabama | 79% |
| Michigan State | 50% |
| Stanford | 32% |
| Baylor | 15% |
| Ohio State | 13% |
| Missouri | 6% |
| South Carolina | 6% |
| Oregon | less than 1% |
2012 Season
| Team | % Chance of Being in Playoff |
| --- | --- |
| Notre Dame | 100% |
| Alabama (Actual Champion) | 100% |
| Florida | 76% |
| Oregon | 52% |
| Kansas State | 33% |
| Stanford | 14% |
| Georgia | 11% |
| LSU | 9% |
| Texas A&M | 2% |
| South Carolina | 2% |
2011 Season
| Team | % Chance of Being in Playoff |
| --- | --- |
| LSU | 100% |
| Alabama (Actual Champion) | 100% |
| Oklahoma State | 100% |
| Stanford | 51% |
| Oregon | 20% |
| Arkansas | 14% |
| Boise State | 9% |
| Kansas State | 3% |
| South Carolina | 2% |
| Wisconsin | 1% |
2010 Season
| Team | % Chance of Being in Playoff |
| --- | --- |
| Auburn (Actual Champion) | 100% |
| Oregon | 100% |
| TCU | 100% |
| Stanford | 44% |
| Wisconsin | 25% |
| Ohio State | 14% |
| Oklahoma | 7% |
| Arkansas | 7% |
| Michigan State | 4% |
| Boise State | less than 1% |
Who Wins the Playoff Matchup?
Once we selected the teams for the playoffs, we had to determine how the actual games would be decided. First, we examined what factors determine whether a team wins or loses. Then, using these factors, we built a probabilistic model based on the F/+ ratings of each playoff team for each season going back to 2005.
We didn't want F/+ alone to determine our game results, though, so we added some variance to each team's F/+ rating when determining the winners of our matchups. We also didn't have immediate access to all of the great data that Bill Connelly and Football Outsiders have to develop their S&P+ and FEI ratings, so we came up with a simple model that boils each game down to two components: how good a team is (its F/+ rating) and how consistent it is (the variance in the team's performance over the season), which we captured through a "Consistency Score." The Consistency Score attempts to proxy how consistently a team performed over the season. The 2013 Florida State Seminoles are a great example of an incredibly consistent team: they bludgeoned their opponents week in and week out on their way to an undefeated season and a BCS national championship game victory. On the other hand, the 2012 Notre Dame Fighting Irish were wildly inconsistent: while they ended the regular season undefeated, their performances ranged from dominant to downright shoddy.
Developing a Consistency Score
We'll try to keep the math behind our consistency score fairly simple. To develop the Consistency Score, we first modeled each team's win probability using a simple logistic regression (logit) that relates yards gained, yards allowed, and turnovers to win probability:
Logit(Prob. Winning) = 0.010252 * (opponent-adjusted yards gained) - 0.006391 * (opponent-adjusted yards allowed) + 0.767746 * (turnover margin)
We then used this model to find the standard error of a team's predicted probability of winning. We multiplied each team's year-end F/+ rating by the ratio of that standard error to the team's predicted year-end performance (the output of the logit equation above) to develop a proxy for the standard deviation of the team's F/+ rating, which is simply:
SD(F/+ rating) = year-end F/+ rating * (SE of predicted year-end performance / predicted year-end performance)
This was defined to be the "Consistency Score." Once we had a team's F/+ rating and the Consistency Score (standard deviation) of that rating, we built a probability distribution of F/+ ratings for the team: its mean was set to the team's F/+ rating and its standard deviation to the team's Consistency Score. We then drew from this distribution 1,000 times, generating a "consistency-adjusted" F/+ score for each simulation run.
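The two formulas above can be sketched roughly as follows. The inputs are hypothetical, the helper names are our own, and we assume a normal distribution for the F/+ draws, which the article doesn't state explicitly:

```python
import random

def consistency_score(f_plus, se_performance, predicted_performance):
    """SD(F/+) = year-end F/+ rating * (SE of predicted performance /
    predicted performance), per the formula above."""
    return f_plus * (se_performance / predicted_performance)

def consistency_adjusted_draws(f_plus, sd, n_draws=1000, rng=random):
    """Draw n_draws "consistency-adjusted" F/+ ratings from a distribution
    centered on the team's F/+ rating, with the Consistency Score as its
    standard deviation (normality is our assumption)."""
    return [rng.gauss(f_plus, sd) for _ in range(n_draws)]

# Hypothetical inputs: an F/+ rating of .50, a predicted year-end win
# probability of .90, and a standard error of .054 on that prediction.
sd = consistency_score(0.50, 0.054, 0.90)       # 0.50 * 0.06 = 0.03
draws = consistency_adjusted_draws(0.50, sd)
```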
The "Competition"
To hold a synthetic matchup, we computed a win probability for each competing team: the team's consistency-adjusted F/+ score divided by the sum of the consistency-adjusted F/+ scores of both teams in the game. Then we drew a random number between zero and one, and if that number was less than or equal to the win probability of the favored team in the matchup, that team would "win" the game and move on to the next round. This process was repeated until a champion was crowned.
A quick example:
| Team | F/+ Rating | F/+ Standard Deviation | Consistency-Adjusted F/+ Rating | Win Prob. |
| --- | --- | --- | --- | --- |
| Team A | .50 | .03 | .506 | 56.6% |
| Team B | .40 | .05 | .388 | 43.4% |
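Putting the pieces together, a single matchup and a full four-team bracket can be simulated roughly as below. The numbers reuse the Team A/Team B example above plus two made-up teams, and the 1-vs-4 / 2-vs-3 bracket structure is our assumption (it is at least consistent with #2 Alabama always drawing #3 Oklahoma State in the 2011 discussion later on):

```python
import random

def win_probability(rating_a, rating_b):
    """Team A's win probability: its consistency-adjusted F/+ score divided
    by the sum of both teams' scores (as in the Team A/Team B example)."""
    return rating_a / (rating_a + rating_b)

def play_game(team_a, team_b, ratings, rng=random):
    """Draw a uniform number in [0, 1); team_a wins if the draw is at or
    below its win probability, otherwise team_b advances."""
    p_a = win_probability(ratings[team_a], ratings[team_b])
    return team_a if rng.random() <= p_a else team_b

def simulate_playoff(seeds, ratings, rng=random):
    """Four-team bracket: 1-vs-4 and 2-vs-3 semifinals, then a title game
    (the seeding scheme is our assumption, not stated in the article)."""
    finalist1 = play_game(seeds[0], seeds[3], ratings, rng)
    finalist2 = play_game(seeds[1], seeds[2], ratings, rng)
    return play_game(finalist1, finalist2, ratings, rng)

# Consistency-adjusted ratings: the example above plus two hypothetical teams.
ratings = {"Team A": 0.506, "Team B": 0.388, "Team C": 0.350, "Team D": 0.300}
champion = simulate_playoff(["Team A", "Team B", "Team C", "Team D"], ratings)
```

Repeating `simulate_playoff` 1,000 times and tallying the champions yields the per-team championship probabilities reported in the tables that follow.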
Before we discuss the results of our simulation, we wanted to consider the merits of the BCS versus the CFB playoff format. Undoubtedly, an extra game will introduce more randomness to the playoff results, which means the playoff is, potentially, worse at truly determining the best team in the country. However, this assertion depends on two assumptions: 1) the BCS always selects the best team in the country for the championship game, and 2) there is such a thing as a universal "best team."
If we're unsure whether or not we can determine the best team in the country from regular season results, the four-team playoff increases the likelihood that the eventual champion will be tested for various weaknesses that may not have come up during regular season play. However, if we are confident that the BCS typically selected the best team in the country, then the playoff essentially cuts that team's championship odds in half. For example, the 2012 Alabama squad (the most dominant team in recent history, per F/+), by our estimation, had a greater than 60% chance to win the playoff in the BCS format. In our simulation, that win probability dropped to 38%.
In order to look at the impact of an additional game, we took the nine college football seasons from 2005 through 2013 and crowned a "national champion" for each based on 1,000 playoff simulations. The tables below show our results. We've included more detailed results for the 2010-2013 seasons, so that you can get a flavor for the drivers behind certain results (namely, Oklahoma State's underperformance in 2011). The bolded team in each table is the actual BCS champion for that season.
2013 Simulated Playoff Results
| Team | Win semifinal, given selection | Win title game, given appearance | Win playoff, given selection |
| --- | --- | --- | --- |
| **Florida State** | 58% | 59% | 34% |
| Auburn | 51% | 45% | 23% |
| Alabama | 43% | 56% | 25% |
| Michigan State | 46% | 40% | 18% |
| Stanford | 50% | 50% | 25% |
| Baylor | 42% | 41% | 18% |
| Ohio State | 47% | 36% | 17% |
| Missouri | 45% | 33% | 16% |
| South Carolina | 40% | 38% | 16% |
| Oregon | 0% | 0% | 0% |
In 2013, Florida State was the second most dominant team, relative to its peers, of the past four college football seasons (as we said above, the 2012 Alabama squad was the most dominant). The Seminoles survived the first round 58.2% of the time. That may seem low, but in model simulations such as these, regression to the mean is common. Florida State's significant margin in the last column (probability of winning the playoff given selection) shows that Florida State was indeed the best team that year. Remember, since we're simulating four teams, probabilities well above or well below 25% are pretty significant, so a 34% chance of winning is still pretty strong.
2012 Simulated Playoff Results
| Team | Win semifinal, given selection | Win title game, given appearance | Win playoff, given selection |
| --- | --- | --- | --- |
| **Alabama** | 60% | 64% | 38% |
| Notre Dame | 36% | 51% | 19% |
| Florida | 45% | 51% | 22% |
| Oregon | 52% | 47% | 24% |
| Kansas State | 44% | 37% | 17% |
| Stanford | 44% | 43% | 19% |
| Georgia | 55% | 40% | 22% |
| LSU | 54% | 41% | 23% |
| Texas A&M | 30% | 50% | 21% |
| South Carolina | 20% | 100% | 30% |
Alabama was clearly the class of the field in 2012. The story here is "poor Notre Dame." Notre Dame, due to their poor F/+ rating, made the title game less than 50% of the time, and nearly every other team in our sample outperforms the Irish once in the playoff. I think it's safe to say that Notre Dame was the most overrated team to make a BCS championship in recent memory. Also of note, Texas A&M and South Carolina do well once they reach the title game, though this is probably a small-sample aberration.
2011 Simulated Playoff Results
| Team | Win semifinal, given selection | Win title game, given appearance | Win playoff, given selection |
| --- | --- | --- | --- |
| LSU | 65% | 51% | 33% |
| **Alabama** | 59% | 58% | 34% |
| Oklahoma State | 41% | 47% | 19% |
| Stanford | 37% | 38% | 14% |
| Oregon | 42% | 37% | 16% |
| Boise State | 21% | 67% | 22% |
| Arkansas | 36% | 41% | 10% |
| Kansas State | 10% | 33% | 5% |
| South Carolina | 20% | 50% | 8% |
| Wisconsin | 30% | 33% | 8% |
2011 had an interesting set of first-round results. Alabama had to take on a tough Oklahoma State team in every simulation, while LSU beat up on weaker teams, so LSU had a much easier path to the title game, getting there a whopping 65% of the time. However, since Alabama was the stronger team in general, they won the championship at a slight edge over LSU. Oklahoma State underperforms because of our selection-criteria framework: they end up taking on Alabama every. single. time.
2010 Simulated Playoff Results
| Team | Win semifinal, given selection | Win title game, given appearance | Win playoff, given selection |
| --- | --- | --- | --- |
| Oregon | 50% | 47% | 24% |
| **Auburn** | 54% | 52% | 28% |
| TCU | 51% | 50% | 26% |
| Stanford | 49% | 52% | 25% |
| Wisconsin | 40% | 40% | 17% |
| Ohio State | 44% | 54% | 24% |
| Oklahoma | 37% | 85% | 32% |
| Arkansas | 41% | 48% | 21% |
| Michigan State | 38% | 33% | 13% |
| Boise State | 100% | 0% | 0% |
2010 definitely had the most chaos, which is somewhat odd since it's the only year of the last four with three unbeaten teams heading into the playoff. According to the F/+ rating, Boise State was likely the best team in the country, but its BCS ranking was so poor that our synthetic committee picked them only twice (hence the 100% semifinal and 0% title-game figures). Beyond that, the results are all over the place: Auburn is our most common playoff champion, while Oklahoma shows the best title-game performance (winning 85% of the championship games they play!) despite winning only 37% of their semifinals and only making the playoff 40 times. The parity in 2010 shows the weakness of the BCS model compared to the four-team model, but it also suggests that, in circumstances like these, even a four-team playoff fails to go far enough.
So Which is Better: the BCS or the Playoff?
So, we can see that the playoff system, while still giving the best team the best odds, can drastically change the results of the BCS championships. A playoff allows teams with "fluke" losses to come back and have a chance at winning it all; if nothing else, it increases the pool of teams that can possibly win the championship. However, the two-game format introduces significant randomness to the outcomes: teams that were truly statistically dominant throughout the season, like 2012 Alabama, see their championship odds chopped nearly in half. If we assume that the BCS system generally included the best team in the country, then under the BCS the best team would win more frequently, and we could argue that the BCS was the better system.
But that's not fun at all. The playoff system will definitely be more interesting than the BCS championship game.
As a final point of discussion, here are our simulation results going all the way back to 2005, showing each team's odds of winning the championship given selection by the college football playoff committee. In 2009, there were five undefeated teams, and each one triggered our automatic playoff selection criteria, so we only have a five-team table for that year. For the rest, we show 10 teams as we did for 2010-2013.
| Team (2009) | Win playoff, given selection |
| --- | --- |
| **Alabama** | 37.5% |
| Texas | 32% |
| Cincinnati | 10% |
| TCU | 19% |
| Boise State | 3% |
| Team (2008) | Win playoff, given selection |
| --- | --- |
| Oklahoma | 25% |
| **Florida** | 36% |
| Texas | 16% |
| Alabama | 7% |
| USC | 9% |
| Utah | 2% |
| Texas Tech | 2% |
| Penn State | 1% |
| Boise State | 1% |
| Ohio State | less than 1% |
| Team (2007) | Win playoff, given selection |
| --- | --- |
| Ohio State | 29% |
| **LSU** | 33% |
| Virginia Tech | 11% |
| Oklahoma | 12% |
| Georgia | 7% |
| Missouri | 3% |
| USC | 2% |
| Kansas | 2% |
| West Virginia | 1% |
| Hawaii | less than 1% |
| Team (2006) | Win playoff, given selection |
| --- | --- |
| Ohio State | 25% |
| **Florida** | 31% |
| Michigan | 16% |
| LSU | 12% |
| USC | 6% |
| Louisville | 6% |
| Wisconsin | 2% |
| Boise State | 1% |
| Auburn | 1% |
| Oklahoma | less than 1% |
| Team (2005) | Win playoff, given selection |
| --- | --- |
| USC | 31% |
| **Texas** | 31% |
| Penn State | 18% |
| Ohio State | 16% |
| Oregon | 2% |
| Notre Dame | 1% |
| Georgia | less than 1% |
| Miami | less than 1% |
| Auburn | less than 1% |
| Virginia Tech | less than 1% |
Author bio: Matt Duffy is a Senior Analyst with Summit, where he evaluates loan programs based on published guidance for Federal lending. Mr. Duffy holds a B.S. in statistics from Sonoma State University. He can be found on Twitter @iammattduff.