You can now find full, updated 2010 S&P+ rankings at Football Outsiders — overall, offense, defense. (Actually, all of the S&P+ rankings have been updated, but we’re going to keep focusing on one year at a time. It’s a long offseason, right?)
What changes have I made?
I’m including this blurb in each of these posts, so if you read it previously, feel free to scroll down to the rankings.
First, with a better chance to analyze which statistical factors are most consistent from the beginning of the season to the end, I made some slight tweaks in the weighting of each statistical factor (the short version: efficiency carries even more weight now). I also worked marginal efficiency and marginal explosiveness into the equation.
Then, I implemented the changes I made during 2018 for previous years. From each week’s rankings post:
- I changed the garbage time definition. S&P+ stops counting the major stats once the game has entered garbage time. Previously, that was when a game ceased to be within 27 points in the first quarter, 24 in the second, 21 in the third, and 16 in the fourth. Now I have expanded it: garbage time adjustments don’t begin until a game is outside of 43 points in the first quarter, 37 in the second, 27 in the third, and 21 in the fourth. That change came because of a piece I wrote about game states at Football Study Hall.
- Preseason projections will remain in the formulas all season. Fans hate this — it’s the biggest complaint I’ve heard regarding ESPN’s FPI formulas. Instinctively, I hate it, too. But here’s the thing: it makes projections more accurate. Our sample size for determining quality in a given season is tiny, and incorporating projection factors found in the preseason rankings decreases the overall error in projections. (For previous years, from before I actually released any sort of preseason projections, I found the most predictive success by keeping a layer of five-year history within the ratings. It’s a small percentage, but it’s in there.)
- To counteract this conservative change, I’m also making S&P+ more reactive to results, especially early in the season. If I’m admitting that S&P+ needs previous-year performances to make it better, I’m also going to admit that S&P+ doesn’t know everything it needs to early in a season, and it’s going to react a bit more to actual results. Basically, I’ve added a step to the rankings process: after the rankings are determined, I go back and project previous games based on those ratings, and I adjust the ratings based on how well (or poorly) the ratings fit those results. The adjustment isn’t enormous, and it diminishes dramatically as the season unfolds.
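The expanded garbage-time definition in the first bullet amounts to a per-quarter threshold check. Here’s a minimal sketch of that logic; the function and constant names are mine, not the actual S&P+ code:

```python
# Hypothetical sketch of the expanded garbage-time check described above.
# A game enters garbage time once the scoring margin exceeds the
# threshold for the current quarter (43/37/27/21 under the new definition).
GARBAGE_THRESHOLDS = {1: 43, 2: 37, 3: 27, 4: 21}

def in_garbage_time(quarter: int, margin: int) -> bool:
    """Return True once the margin has pushed the game into garbage time."""
    return abs(margin) > GARBAGE_THRESHOLDS[quarter]

# Under the old definition, a 28-point second-quarter lead triggered
# garbage time; under the expanded one, it no longer does:
assert not in_garbage_time(2, 28)
assert in_garbage_time(2, 38)
```

Once `in_garbage_time` returns True, the major stats from the remaining plays would simply stop counting toward the ratings.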
One more recent change had the most impact, however: I made S&P+ more reactive to conference strength as well. It works like the reactive step above: after the ratings are determined, I project previous games based on those ratings, and I track each conference’s average performance versus projection. For the top conference, I found that by the end of the season the projections were aiming low by two or three points per game per team. For the bottom conference, it was the reverse.
Shifting each team’s rating by its conference’s average miss, with the weight of that adjustment increasing as the season progresses, improves against-the-spread performance by about one percentage point per season and cuts the average absolute error by somewhere between 0.2 and 0.3 points per game. That doesn’t seem like much, but look at the Prediction Tracker results and note how much of a difference one percentage point and 0.3 points per game can make to a system’s standing there. It’s pretty big.
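The conference step described above can be sketched in a few lines. This is an illustrative reconstruction, not the actual S&P+ code — the data structures, the naive rating-difference projection, and the fixed `weight` parameter are all my assumptions:

```python
# Hypothetical sketch of the conference adjustment: project past games
# from the ratings, average each conference's miss, and shift ratings
# by a weighted share of that miss.
from collections import defaultdict

def conference_adjustment(games, ratings, conference_of, weight=0.5):
    """games: list of (team, opponent, actual_margin), one entry per
    team perspective. Returns a per-conference rating shift based on
    the average error between projected and actual margins."""
    errors = defaultdict(list)
    for team, opp, actual_margin in games:
        projected = ratings[team] - ratings[opp]  # naive projection sketch
        errors[conference_of[team]].append(actual_margin - projected)
    # In the real system the weight grows as the season progresses;
    # here it is a fixed illustrative constant.
    return {conf: weight * sum(e) / len(e) for conf, e in errors.items()}
```

For example, if a conference’s teams beat their projections by an average of 3 points per game, every team in that conference would be nudged upward by 1.5 points at `weight=0.5`.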
It does, however, mean a fundamental shift in how mid-major teams are judged. Not to spoil the suspense, but look at the difference this adjustment made in some 2018 rankings:
- Fresno State: originally ninth, now 16th
- UCF: originally eighth, now 18th
- Utah State: originally 19th, now 21st
- Appalachian State: originally 11th, now 29th
It’s a pretty harsh adjustment, though it both makes the numbers better and perhaps passes the eye test a bit more. So we’re going with it.
Wait, so you’re including previous years of history in each season’s ratings? How could that possibly be right?
Back when I mentioned at the beginning of 2018 that I’d be using priors in the rankings all year, I had an interesting conversation with some readers on Twitter about what happens at the end of the year. Would I be removing the priors for the year-end ratings so that a) we’d be evaluating teams’ performance based only on what happened in that year, and b) the “recent history” prior wouldn’t then carry influence right into the next year’s projections (since they would include the priors that were included in the previous year’s rankings)?
It was a legitimate question, one to which I don’t think there’s a right answer. To me, it simply comes down to this question: when you’re looking back at previous seasons’ ratings, what are you looking for?
To me, it’s usually to get a sense of who might beat whom, right? I understand the draw of a “this year only” evaluative look, but S&P+ is intended to be a predictive measure, and I’ve decided to include these priors because they make the predictions better. In that sense, removing those priors at the end of the season makes it less of a predictive measure (even though obviously any “predictions” made of a team from 2006 in 2019 are theoretical only).
(How will this impact the preseason projections moving forward? The weights of each of the primary projection factors — recent performance, recruiting, and returning production — are always based on updated correlations, meaning that it depends on how strongly each factor predicts the next season’s performance. I’m dumping the new S&P+ ratings into the old engine, and, well, we’ll find out. Maybe the recent performance numbers end up with a lower correlation. We’ll see.)
On with the rankings.
2010 S&P+ rankings
| Team | Record | S&P+ | Rk | Off. | Rk | Def. | Rk | ST | Rk |
|---|---|---|---|---|---|---|---|---|---|
| San Diego State | 9-4 | 8.9 | 40 | 34.3 | 31 | 25.5 | 53 | 0.0 | 64 |
| San Jose State | 1-12 | -17.1 | 106 | 21.3 | 85 | 37.0 | 108 | -1.4 | 101 |
| New Mexico State | 2-10 | -29.4 | 120 | 10.1 | 119 | 40.1 | 116 | 0.5 | 47 |
Hooray for (Auburn-based) randomness
On paper, the 2010 season should have probably played out like plenty of others did, with either Alabama playing for the national title (as it did in 2009, 2011, 2012, and 2015-18 and nearly did in 2008, 2013, and 2014), or Ohio State doing so (as it did in 2006-07 and would in 2014), or both. In a season that featured no truly dominant team — no one graded out higher than the 97th percentile (five did so in 2008, four would in 2011) — we could have still very easily ended up with a pretty standard national title game.
Instead, we got Auburn beating Oregon in a tight, funky game. And if one of those teams had slipped up in the regular season, we’d have probably gotten TCU instead. And we’d have almost gotten Boise State, too.
It took some impressive breaks for us to get here.
- Auburn’s win over Alabama featured 16 percent post-game win expectancy, and three other Tiger wins (Kentucky, Georgia, and the first South Carolina game) had probability between 53% and 68%. The chances of winning all four of those games, based on the stats each game produced, were about 4%.
- Another Bama loss — 24-21 to LSU — featured just 38% postgame win expectancy for LSU.
- Boise State’s lone loss, the classic game against Nevada, featured 8% postgame win expectancy for Nevada.
- Oregon won games with 34% (Arizona State) and 54% (Cal) postgame win expectancy.
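The ~4% figure for Auburn above comes from multiplying the four postgame win expectancies together, treating the games as independent. A quick check (the three mid-range values are assumptions, since the post only says they fell between 53% and 68%):

```python
# Auburn's postgame win expectancy in its four narrowest wins:
# Alabama (16%), then assumed values for Kentucky, Georgia, and
# South Carolina within the 53-68% range cited above.
win_exp = [0.16, 0.53, 0.60, 0.68]

joint = 1.0
for p in win_exp:
    joint *= p  # probability of winning all four, assuming independence

print(round(joint, 3))  # 0.035 — in the ballpark of the ~4% cited
```

The exact product depends on the three assumed values, but anything in that range lands at roughly 3-4%.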
If we play this season out again, Auburn goes about 9-3, and we maybe get a matchup of 12-1 Bama vs. 11-1 Ohio State for the title. I’ll take this version, thanks. The best team on paper didn’t win the national title, but the most dramatic did. Worth it.
Well hello there, Miami and Georgia
It’s not particularly unusual for a team with a mediocre record to rank pretty highly in S&P+, just as it’s not unusual for a team with a great record to grade out as a mediocre team.
That said, Georgia and Miami went a combined 13-13 ... and ranked 12th and 13th, respectively. That’s a bit much. So let’s walk through their seasons and try to understand why they ended up where they ended up.
Georgia (6-7, 12th)
(Opponents’ rankings below are their S&P+ rankings, in case it wasn’t clear.)
- Sept. 4: beat No. 118 UL-Lafayette, 55-7 (post-game win expectancy: 98%)
- Sept. 11: lost to No. 16 South Carolina, 17-6 (4%)
- Sept. 18: lost to No. 5 Arkansas, 31-24 (32%)
- Sept. 25: lost to No. 35 Mississippi State, 24-12 (35%)
- Oct. 2: lost to No. 67 Colorado, 29-27 (89%)
- Oct. 9: beat No. 43 Tennessee, 41-14 (100%)
- Oct. 16: beat No. 88 Vanderbilt, 43-0 (100%)
- Oct. 23: beat No. 50 Kentucky, 44-31 (94%)
- Oct. 30: lost to No. 26 Florida, 34-31 (57%)
- Nov. 6: beat FCS Idaho State, 55-7 (100%)
- Nov. 13: lost to No. 6 Auburn, 49-31 (32%)
- Nov. 27: beat No. 61 Georgia Tech, 42-34 (86%)
- Dec. 31: lost to No. 53 UCF, 10-6 (33%)
So they went 0-5 against top-40 teams, which certainly doesn’t scream top-15 status. And to be sure, the fact that I keep priors in the ratings all the way through the end of the season now (a recent change) props the Dawgs up a bit, since they were awesome for most of the run-up to 2010.
That said, a) when the Dawgs won, they dominated (average winning margin: 31.2 points), and b) postgame win expectancy says they were really unlucky to lose to Colorado and probably should have gone at least 1-4 or 2-3 against Arkansas, MSU, Florida, Auburn, and UCF.
Their second-order win total (which simply adds up the post-game win probabilities) was 8.6, which suggests they were closer to a 9-4 team than 6-7. And obviously a 9-4 Georgia team with a top-30 strength of schedule is going to be right at home in the S&P+ top 15. So it’s semi-justifiable, at least.
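The second-order win calculation really is that simple: sum the postgame win expectancies from the game log above.

```python
# Georgia's 2010 postgame win expectancies, from the game log above.
georgia_win_exp = [0.98, 0.04, 0.32, 0.35, 0.89, 1.00, 1.00,
                   0.94, 0.57, 1.00, 0.32, 0.86, 0.33]

# A second-order win total is simply the sum of per-game win expectancies.
second_order_wins = sum(georgia_win_exp)
print(round(second_order_wins, 1))  # 8.6
```

Against an actual record of 6-7, that 8.6 is the gap the rankings are (partially) crediting Georgia for.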
Miami (7-6, 13th)
I was even more curious about Randy Shannon’s last U team, given that the Canes didn’t have as sturdy a recent history to lean on as UGA. So let’s see.
- Sept. 2: beat FCS’ FAMU, 45-0 (post-game win probability: 94%)
- Sept. 11: lost to No. 2 Ohio State, 36-24 (60%)
- Sept. 23: beat No. 32 Pitt, 31-3 (94%)
- Oct. 2: beat No. 37 Clemson, 30-21 (78%)
- Oct. 9: lost to No. 17 Florida State, 45-17 (26%)
- Oct. 16: beat No. 86 Duke, 28-13 (87%)
- Oct. 23: beat No. 44 UNC, 33-10 (73%)
- Oct. 30: lost to No. 68 Virginia, 24-19 (69%)
- Nov. 6: beat No. 31 Maryland, 26-20 (86%)
- Nov. 13: beat No. 61 Georgia Tech, 35-10 (77%)
- Nov. 20: lost to No. 14 Virginia Tech, 31-17 (35%)
- Nov. 27: lost to No. 64 USF, 23-20 (82%)
- Dec. 31: lost to No. 21 Notre Dame, 33-17 (51%)
Wow. So of Miami’s six losses, four came with a post-game win expectancy higher than 50 percent. And they managed to lose two of those games by double digits despite being statistically superior. That’s hard to do.
Miami’s second-order win total for this season was 9.0, which means the Canes came up two wins short of where they should have been. But they looked like a nine-win team on paper, and they had an SOS ranking of 21st. That sounds like a top-15 team to me, I guess.
Shannon was fired after the loss to USF, but it’s interesting to think about what might or might not have happened had Miami done what stats said it should have done. Shannon probably still doesn’t last long-term, but if he gets another season, Miami probably doesn’t hire Al Golden in 2011 and ... maybe hires someone else in 2012 or 2013?
Life was good out west
As one would expect in a year without a truly dominant team or two, conference averages mostly regressed toward the mean. The good conferences weren’t as good, and the bad weren’t as bad. But some shifts were still bigger than others. The Big 12, Big East, and MWC all saw their averages shrink by at least 2.5 points per game per team; meanwhile, most of the improvement took place in two western conferences.
- SEC (+14.8, down 0.8 points per game from 2009)
- Pac-10 (+11.5, up 4.9)
- Big 12 (+9.0, down 2.7)
- Big Ten (+7.6, up 0.1)
- ACC (+7.3, down 1.3)
- Big East (+3.9, down 2.7)
- Mountain West (-2.8, down 2.5)
- WAC (-3.4, up 3.8)
- Conference USA (-8.3, down 0.8)
- MAC (-12.8, down 1.5)
- Sun Belt (-18.2, down 1.6)
The Pac-10, in its last year before becoming the Pac-12, surged forward, with Oregon and Stanford both ending up in the top five and three other teams nailing down top-25 spots. Cal eked out a top-40 ranking despite a 5-7 record, and only one team (Wazzu) was truly bad.
The WAC improved its average by quite a bit, too, as Hawaii (29th) and Fresno State (34th) joined Boise State (seventh) in playing at a high level. The dead weight was still awful (NMSU was 120th, Utah State 107th, SJSU 106th), but no worse than the bottom of the MWC, and the WAC nearly caught the MWC before the latter took BSU and others from the former.