You can now find full, updated 2011 S&P+ rankings at Football Outsiders — overall, offense, defense. (Actually, all of the S&P+ rankings have been updated, but we’re going to keep focusing on one year at a time. It’s a long offseason, right?)
What changes have I made?
I’m including this blurb in each of these posts, so if you read it previously, feel free to scroll down to the rankings.
First, with a better chance to analyze which statistical factors are most consistent from the beginning of the season to the end, I made some slight tweaks in the weighting of each statistical factor (the short version: efficiency carries even more weight now). I also worked marginal efficiency and marginal explosiveness into the equation.
Then, I implemented the changes I made during 2018 for previous years. From each week’s rankings post:
- I changed the garbage time definition. S&P+ stops counting the major stats once the game has entered garbage time. Previously, that was when a game ceased to be within 27 points in the first quarter, 24 in the second, 21 in the third, and 16 in the fourth. Now I have expanded it: garbage time adjustments don’t begin until a game is outside of 43 points in the first quarter, 37 in the second, 27 in the third, and 21 in the fourth. That change came because of a piece I wrote about game states at Football Study Hall.
- Preseason projections will remain in the formulas all season. Fans hate this — it’s the biggest complaint I’ve heard regarding ESPN’s FPI formulas. Instinctively, I hate it, too. But here’s the thing: it makes projections more accurate. Our sample size for determining quality in a given season is tiny, and incorporating projection factors found in the preseason rankings decreases the overall error in projections. (For previous years, from before I actually released any sort of preseason projections, I found the most predictive success by keeping a layer of five-year history within the ratings. It’s a small percentage, but it’s in there.)
- To counteract this conservative change, I’m also making S&P+ more reactive to results, especially early in the season. If I’m admitting that S&P+ needs previous-year performances to make it better, I’m also going to admit that S&P+ doesn’t know everything it needs to early in a season, and it’s going to react a bit more to actual results. Basically, I’ve added a step to the rankings process: after the rankings are determined, I go back and project previous games based on those ratings, and I adjust the ratings based on how well they fit (or don’t fit) those results. The adjustment isn’t enormous, and it diminishes dramatically as the season unfolds.
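The expanded garbage-time cutoffs in the first bullet above can be sketched as a simple margin check. This is a hypothetical illustration, not the actual S&P+ code; the function name and structure are my own:

```python
# Point margins beyond which garbage time begins, by quarter,
# per the expanded definition described above.
GARBAGE_TIME_MARGIN = {1: 43, 2: 37, 3: 27, 4: 21}

def in_garbage_time(quarter: int, point_margin: int) -> bool:
    """Return True once the game is outside the quarter's margin,
    i.e. once the major stats stop counting."""
    threshold = GARBAGE_TIME_MARGIN.get(quarter, GARBAGE_TIME_MARGIN[4])
    return abs(point_margin) > threshold
```

So a 44-point first-quarter lead triggers garbage time, while a 43-point lead does not.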
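The react-to-results step in the last bullet might look something like this minimal sketch. Here the projected margin is simply the gap between the two teams' ratings (ignoring home field and everything else), and the decay schedule is an assumption of mine, not the actual formula:

```python
def reactive_adjustment(rating, games, weeks_played, base_weight=0.5):
    """Nudge a team's rating toward its average error vs. projection.

    games: list of (opponent_rating, actual_margin) tuples for one team.
    The adjustment shrinks as weeks_played grows, mirroring the note
    that it 'diminishes dramatically as the season unfolds'.
    """
    if not games:
        return rating
    # How much the team beat (or missed) each game's projected margin
    errors = [actual - (rating - opp) for opp, actual in games]
    avg_error = sum(errors) / len(errors)
    weight = base_weight / weeks_played  # diminishing influence
    return rating + weight * avg_error
```

For example, a +10 team that beat a +5 team by 9 and a +0 team by 14 outperformed its projections by 4 points per game; after two weeks, with these made-up weights, its rating would tick up to 11.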
One more recent change had the biggest impact, however: I made S&P+ more reactive to conferences as well. It works like the results-reaction step above: after the ratings are determined, I project previous games based on those ratings, and I track each conference’s average performance versus projection. For the top conference, I found that by the end of the season the projections were running two or three points per game per team too low. For the bottom conference, it was the reverse.
Shifting each team’s rating by this conference average, and increasing the weight of that adjustment as the season progresses, basically improves against-the-spread performance by about one percentage point per season and cuts the average absolute error by somewhere between 0.2 and 0.3 points per game. That doesn’t seem like much, but look at the Prediction Tracker results and note how much of a difference 1% and 0.3 points per game would make to a system’s ranking there. It’s pretty big.
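The conference adjustment could be sketched roughly like this. The bookkeeping, the linear weighting schedule, and all names here are illustrative assumptions, not the actual S&P+ implementation:

```python
from collections import defaultdict

def conference_adjust(ratings, conferences, game_errors, week, max_week=14):
    """Shift each team's rating by its conference's average
    performance-vs-projection.

    ratings: {team: rating}; conferences: {team: conference};
    game_errors: {team: [actual margin minus projected margin, ...]}
    The weight grows as the season progresses (a made-up schedule).
    """
    # Pool every team's projection errors by conference
    conf_errors = defaultdict(list)
    for team, errs in game_errors.items():
        conf_errors[conferences[team]].extend(errs)
    conf_avg = {c: sum(e) / len(e) for c, e in conf_errors.items()}
    weight = min(week / max_week, 1.0)  # heavier adjustment late in the year
    return {t: r + weight * conf_avg.get(conferences[t], 0.0)
            for t, r in ratings.items()}
```

If a conference is outplaying its projections by 3 points per game at midseason, every member team gets roughly half of that 3 points added under this toy schedule.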
It does, however, mean a fundamental shift in how mid-major teams are judged. Not to spoil the suspense, but look at the difference this adjustment made in some 2018 rankings:
- Fresno State: originally ninth, now 16th
- UCF: originally eighth, now 18th
- Utah State: originally 19th, now 21st
- Appalachian State: originally 11th, now 29th
It’s a pretty harsh adjustment, though it both makes the numbers better and perhaps passes the eye test a bit more. So we’re going with it.
Wait, so you’re including previous years of history in each season’s ratings? How could that possibly be right?
Back when I mentioned at the beginning of 2018 that I’d be using priors in the rankings all year, I had an interesting conversation with some readers on Twitter about what happens at the end of the year. Would I be removing the priors for the year-end ratings so that a) we’d be evaluating teams’ performance based only on what happened in that year, and b) the “recent history” prior wouldn’t then carry influence right into the next year’s projections (since they would include the priors that were included in the previous year’s rankings)?
It was a legitimate question, one to which I don’t think there’s a right answer. To me, it simply comes down to this question: when you’re looking back at previous seasons’ ratings, what are you looking for?
To me, it’s usually to get a sense of who might beat whom, right? I understand the draw of a “this year only” evaluative look, but S&P+ is intended to be a predictive measure, and I’ve decided to include these priors because they make the predictions better. In that sense, removing those priors at the end of the season makes it less of a predictive measure (even though, obviously, any “predictions” made in 2019 about a 2006 team are theoretical only).
(How will this impact the preseason projections moving forward? The weights of each of the primary projection factors — recent performance, recruiting, and returning production — are always based on updated correlations, meaning that it depends on how strongly each factor predicts the next season’s performance. I’m dumping the new S&P+ ratings into the old engine, and, well, we’ll find out. Maybe the recent performance numbers end up with a lower correlation. We’ll see.)
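As a rough illustration of how correlation-based weighting works, here is a toy version; the factor names come from the paragraph above, but the correlation values and the normalization scheme are made up:

```python
def correlation_weights(correlations):
    """Turn each factor's correlation with next-season performance
    into a weight; weights sum to 1."""
    total = sum(abs(c) for c in correlations.values())
    return {factor: abs(c) / total for factor, c in correlations.items()}

# Hypothetical correlation values for the three projection factors
weights = correlation_weights({
    "recent_performance": 0.60,
    "recruiting": 0.30,
    "returning_production": 0.30,
})
```

If recent performance's correlation drops after the new ratings are dumped into the engine, its share of the projection shrinks automatically, which is the point of re-deriving the weights each year.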
On with the rankings.
2011 S&P+ rankings
| Team | Record | S&P+ | Rk | Off. S&P+ | Rk | Def. S&P+ | Rk | ST S&P+ | Rk |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| San Diego State | 8-5 | -1.5 | 71 | 29.9 | 50 | 30.2 | 75 | -1.3 | 105 |
| San Jose State | 5-7 | -11.9 | 95 | 23.5 | 87 | 35.2 | 97 | -0.2 | 64 |
| New Mexico State | 4-9 | -18.7 | 108 | 23.3 | 89 | 39.4 | 116 | -2.5 | 116 |
Year of the (SEC) defense
Here are the top 10 defenses of 2005-18, per S&P+:
- 2006 Virginia Tech (4.6 adjusted points per game)
- 2017 Alabama (5.6)
- 2011 Alabama (6.4)
- 2009 Alabama (6.5)
- 2012 Florida (6.6)
- 2006 LSU (6.8)
- 2008 Tennessee (6.8)
- 2005 Miami (7.0)
- 2008 USC (7.2)
- 2011 LSU (7.7)
Technically, the 2011 Bama and LSU defenses were not the best of either the Nick Saban era at Alabama (2017 graded out better) or the Les Miles era at LSU (2006 did), but the way this season played out, with LSU and Bama playing in the Game of the Century in early November and then staging a rematch in the BCS Championship, they defined this season like no other Ds on the above list could.
And I mean ... my goodness ... the amount of talent on these two units was positively absurd. Alabama had guys like Courtney Upshaw, Dont’a Hightower, Mark Barron, C.J. Mosley, Dre Kirkpatrick, Nico Johnson, Dee Milliner, Ed Stinson, and Damion Square. LSU had Tyrann Mathieu, Morris Claiborne, Eric Reid, Barkevious Mingo, Sam Montgomery, Michael Brockers, Tharold Simon, Bennie Logan, and Kevin Minter. And I write this paragraph knowing that I forgot at least three or four total studs above.
(Poor Georgia picked the wrong year to dominate defensively. The Dawgs had Jarvis Jones, Brandon Boykin, Bacarri Rambo, etc., and ranked third in Def. S&P+ ... quite a few points behind this top two.)
Neither Alabama nor LSU had horrible offenses, by the way — Bama ranked 13th in Off. S&P+, LSU ranked 18th, and the two teams scored at least 34 points 20 times in 27 combined games (albeit with help from some return scores here and there). But in eight quarters and an overtime period against each other, the teams combined for ... one touchdown. LSU’s 9-6 win in Tuscaloosa got painted with the “BAD OFFENSES” brush, but those offenses were only bad against those defenses.
The most visible change from the old S&P+ formula to the new comes pretty close to the top, where Oklahoma State now sits at No. 4, behind the No. 3 Oklahoma Sooners ... which lost to OSU by 34 points late in the season. I’m sure that doesn’t look strange at all.
If nothing else, this serves as a reminder of how 2011 actually played out. OU started the season by walloping Tulsa (57th in S&P+), beating Florida State (10th) and Missouri (23rd) by double digits, and crushing Texas (16th), 55-17. OSU was awesome, too, but in a week-to-week progression, OU would have been ahead of the Pokes by a decent margin at the midway point of the season. So even though OSU was far superior in the second half of the year, they had some ground to make up.
I guess. The optics here aren’t great, but oh well.
Winning in football vs. winning in realignment
From 2009 to 2010, only two conferences moved up or down by more than three adjusted points per game per team: the Pac-10 (up 4.9) and the WAC (up 3.8). Given how much more heavily I now use priors, you would theoretically expect extreme movement to be tamped down.
It didn’t tamp down on 2011 movement, however. SIX conferences — more than half — moved up or down by at least three points.
- SEC (+13.3, down 1.5 points per game from 2010)
- Big 12 (+12.4, up 3.4)
- Big Ten (+9.0, up 1.4)
- Pac-12 (+7.5, down 4.0)
- Big East (+6.1, up 2.2)
- ACC (+3.9, down 3.4)
- Conference USA (-5.0, up 3.3)
- Mountain West (-5.3, down 2.5)
- WAC (-7.7, down 4.3)
- MAC (-7.7, up 5.1)
- Sun Belt (-16.1, up 2.1)
The strength of schedule rankings (which you can find at FO) for this season are pretty funny. The SEC had the Nos. 1-2, 4-6, 11-12, and 16 schedules, while the Big 12 had Nos. 3, 7-10, 13-15, and 17. Arizona came in at 18th, tops in the non-SEC/B12 universe.
Of course, “strength” was measured in a lot of different ways in 2011.
In September 2011, the ACC announced it was adding Pitt and Syracuse. Combined with the Big East losing WVU to the Big 12 and missing out on the potential Boise State and TCU additions it had until recently arranged to make, it was pretty clear that the Big East was conference realignment’s biggest loser. This was a shame considering that, in terms of on-field superiority, it had more than held its own with the ACC in recent years, even after losing programs like Virginia Tech and Miami earlier in the 2000s.
Despite neither Pitt nor Syracuse doing much, the Big East outplayed the ACC in this death knell season as well. And in its first year as the Pac-12, our westernmost power conference sank considerably. Realignment winners and on-field winners were very much not the same thing, at least not immediately.
Meanwhile, LOOK AT THE MAC, not only putting on an incredible string of MACtion games in November, but also improving its product drastically.