You can now find full, updated 2009 S&P+ rankings at Football Outsiders — overall, offense, defense. (Actually, all of the S&P+ rankings have been updated, but we’re going to keep focusing on one year at a time. It’s a long offseason, right?)
What changes have I made?
I’m including this blurb in each of these posts, so if you read it previously, feel free to scroll down to the rankings.
First, with a better chance to analyze which statistical factors are most consistent from the beginning of the season to the end, I made some slight tweaks in the weighting of each statistical factor (the short version: efficiency carries even more weight now). I also worked marginal efficiency and marginal explosiveness into the equation.
Then, I implemented the changes I made during 2018 for previous years. From each week’s rankings post:
- I changed the garbage time definition. S&P+ stops counting the major stats once the game has entered garbage time. Previously, that was when a game ceased to be within 27 points in the first quarter, 24 in the second, 21 in the third, and 16 in the fourth. Now I have expanded it: garbage time adjustments don’t begin until a game is outside of 43 points in the first quarter, 37 in the second, 27 in the third, and 21 in the fourth. That change came because of a piece I wrote about game states at Football Study Hall.
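The expanded definition boils down to a per-quarter margin threshold. Here's a minimal sketch of that check — the thresholds are straight from the post, but the function and constant names are illustrative, not the actual S&P+ code:

```python
# Margin thresholds beyond which a game is considered garbage time.
NEW_THRESHOLDS = {1: 43, 2: 37, 3: 27, 4: 21}  # expanded definition
OLD_THRESHOLDS = {1: 27, 2: 24, 3: 21, 4: 16}  # original definition

def is_garbage_time(score_margin: int, quarter: int,
                    thresholds: dict = NEW_THRESHOLDS) -> bool:
    """A play falls in garbage time once the margin exceeds the quarter's threshold."""
    return abs(score_margin) > thresholds[quarter]
```

Under the old definition, a 28-point first-quarter lead already triggered garbage time; now it takes a 44-point lead.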
- Preseason projections will remain in the formulas all season. Fans hate this — it’s the biggest complaint I’ve heard regarding ESPN’s FPI formulas. Instinctively, I hate it, too. But here’s the thing: it makes projections more accurate. Our sample size for determining quality in a given season is tiny, and incorporating projection factors found in the preseason rankings decreases the overall error in projections. (For previous years, from before I actually released any sort of preseason projections, I found the most predictive success by keeping a layer of five-year history within the ratings. It’s a small percentage, but it’s in there.)
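Conceptually, keeping the priors in all season means each team's rating is a blend of its preseason projection and its in-season performance, with the prior's weight shrinking — but never quite vanishing — as games pile up. This sketch shows the idea only; the decay schedule here is an assumption for illustration, not the real S&P+ weighting:

```python
def blended_rating(preseason: float, in_season: float,
                   games_played: int, prior_half_life: float = 6.0) -> float:
    """Blend a preseason prior with in-season performance.

    The prior's weight halves every `prior_half_life` games (illustrative decay),
    so a small layer of projection stays in the rating all season.
    """
    w_prior = 0.5 ** (games_played / prior_half_life)
    return w_prior * preseason + (1 - w_prior) * in_season
```

With zero games played the rating is pure projection; by midseason the blend is roughly half and half.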
- To counteract this conservative change, I’m also making S&P+ more reactive to results, especially early in the season. If I’m admitting that S&P+ needs previous-year performances to make it better, I’m also going to admit that S&P+ doesn’t know everything it needs to early in a season, and it’s going to react a bit more to actual results. Basically, I’ve added a step to the rankings process: after the rankings are determined, I go back and project previous games based on those ratings, and I adjust the ratings based on how well (or how poorly) they fit those results. The adjustment isn’t enormous, and it diminishes dramatically as the season unfolds.
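That extra step amounts to: project the games already played, measure the average miss, and nudge the rating toward the miss with a weight that shrinks week by week. A minimal sketch, with the decay schedule assumed for illustration (the real adjustment size isn't published):

```python
def reactive_adjustment(rating: float, game_errors: list,
                        week: int, max_weight: float = 0.5) -> float:
    """Nudge a rating toward its average projection error.

    game_errors: (actual margin - projected margin) for each game played so far.
    The weight shrinks as the season unfolds (1/week decay is an assumption).
    """
    weight = max_weight / week
    avg_error = sum(game_errors) / len(game_errors)
    return rating + weight * avg_error
```

A team that has outplayed its projections by an average of 3 points in week 1 gets bumped up 1.5 points; by week 10 the same miss moves it only 0.15.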
One more recent change had the most impact, however: I made S&P+ more reactive to conferences as well. It’s similar to the step above: after the ratings are determined, I project previous games based on those ratings, and I track each conference’s average performance versus projection. For the top conference, I found that by the end of the season the ratings were aiming low by two or three points per game per team. For the bottom conference, it was the reverse.
By shifting each team’s rating based on this conference average, and by increasing the weight of said adjustment as the season progresses, S&P+ basically improves its against-the-spread performance by about one percentage point per season and cuts its average absolute error by somewhere between 0.2 and 0.3 points per game. That doesn’t seem like much, but look at the Prediction Tracker results and note how much of a difference 1% and 0.3 points per game could make to a predictive ranking there. It’s pretty big.
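The mechanics described above can be sketched in a few lines: shift every team's rating by its conference's average projection error, scaled by a weight that grows through the season. The ramp-up schedule here is an assumption; only the shape of the adjustment comes from the post:

```python
def conference_adjustment(team_ratings: dict, team_conf: dict,
                          conf_avg_error: dict, week: int,
                          full_weight_week: int = 13) -> dict:
    """Shift each team's rating by its conference's average projection error.

    conf_avg_error: average (actual - projected) margin per team, by conference.
    The weight ramps up linearly to full strength late in the season (assumption).
    """
    w = min(week / full_weight_week, 1.0)
    return {team: rating + w * conf_avg_error[team_conf[team]]
            for team, rating in team_ratings.items()}
```

By season's end, a team in a conference that has been outplaying its projections by 2.5 points per game gets the full 2.5-point bump — which is exactly the size of the late-season miss the post describes for the top conference.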
It does, however, mean a fundamental shift in how mid-major teams are judged. Not to spoil the suspense, but look at the difference this adjustment made in some 2018 rankings:
- Fresno State: originally ninth, now 16th
- UCF: originally eighth, now 18th
- Utah State: originally 19th, now 21st
- Appalachian State: originally 11th, now 29th
It’s a pretty harsh adjustment, though it both makes the numbers better and perhaps passes the eye test a bit more. So we’re going with it.
Wait, so you’re including previous years of history in each season’s ratings? How could that possibly be right?
Back when I mentioned at the beginning of 2018 that I’d be using priors in the rankings all year, I had an interesting conversation with some readers on Twitter about what happens at the end of the year. Would I be removing the priors for the year-end ratings so that a) we’d be evaluating teams’ performance based only on what happened in that year, and b) the “recent history” prior wouldn’t then carry influence right into the next year’s projections (since they would include the priors that were included in the previous year’s rankings)?
It was a legitimate question, one to which I don’t think there’s a right answer. To me, it simply comes down to this question: when you’re looking back at previous seasons’ ratings, what are you looking for?
To me, it’s usually to get a sense of who might beat whom, right? I understand the draw of a “this year only” evaluative look, but S&P+ is intended to be a predictive measure, and I’ve decided to include these priors because they make the predictions better. In that sense, removing those priors at the end of the season makes it less of a predictive measure (even though, obviously, any “predictions” made in 2019 about a team from 2006 are theoretical only).
(How will this impact the preseason projections moving forward? The weights of each of the primary projection factors — recent performance, recruiting, and returning production — are always based on updated correlations, meaning that it depends on how strongly each factor predicts the next season’s performance. I’m dumping the new S&P+ ratings into the old engine, and, well, we’ll find out. Maybe the recent performance numbers end up with a lower correlation. We’ll see.)
On with the rankings.
2009 S&P+ rankings
| Team | Record | S&P+ | Rk | Off. S&P+ | Rk | Def. S&P+ | Rk | ST S&P+ | Rk |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| San Diego State | 4-8 | -11.0 | 95 | 19.5 | 101 | 28.7 | 78 | -1.8 | 111 |
| San Jose State | 2-10 | -25.2 | 115 | 13.5 | 115 | 37.7 | 112 | -1.1 | 97 |
| New Mexico State | 4-9 | -28.5 | 118 | 6.2 | 120 | 33.0 | 95 | -1.6 | 106 |
Defenses caught up
I like it when my theories are proven right. I’ve long thought about the 2009 season as one in which the defenses made up ground. The 2008 season was a boom time for offenses in college football, but a lot of the main quarterbacks from that 2007-08 run, spread and not spread — Missouri’s Chase Daniel, USC’s Mark Sanchez, Texas Tech’s Graham Harrell, Georgia’s Matt Stafford — were gone, and others like OU’s Sam Bradford and OSU’s Zac Robinson got hurt and missed some or all of 2009. And of course, Texas’ Colt McCoy got hurt early in the final game of the season*. (Plus, it seemed like half of the season’s damn games were played in rain and slop.)
Add that turnover to the fact that a) defenses began to adapt somewhat to the spread that had dominated in 2007-08, b) Nick Saban’s Alabama defense became fully weaponized, and c) Nebraska’s Ndamukong Suh went to a level we haven’t seen from a college football defender since, and you had a recipe for a defensive surge.
- In 2008, you had 11 teams with an Off. S&P+ rating of at least 40 adjusted points per game; you had one in 2009 (Cincinnati).
- In 2008, you had only two teams with a Def. S&P+ rating of under 10 adjusted points per game; in 2009, you had seven.
- In 2008, there was a 45-point spread between the best and worst offenses and a 42.1-point spread between the best and worst defenses — there was more dominance to be found on O than D.
- In 2009, there was a 38.8-point difference between best and worst defense and a 34.6-point difference between best and worst offense. Suddenly, defense was the way to find more advantages.
- Rice’s 2008 offense, powered by an otherworldly season from quarterback Chase Clement (4,119 passing yards, 774 non-sack rushing yards), ranked 11th in Off. S&P+ with a 40.5 rating. That would have finished just 0.3 points behind Cincinnati for No. 1 in 2009.
Meanwhile, the No. 1 overall team in 2009 had an S&P+ rating of plus-32.8. That would have ranked fourth in 2008.
* I removed bowl performances from the S&P+ equation out of curiosity, and without the national title game, Texas is ahead of Bama by 1.3 points — 31.4 to 30.1. Florida’s still No. 1, though.
2009 was the most miserable 13-1 year possible for Florida fans. It might be worth a writeup sometime. It began in Week 3 with not beating Lane Kiffin by enough (23-13) and there was always something else every week. I've never seen anything like it for any program. — David Wunderlich (@Year2) February 13, 2019
Expectations are a burden sometimes. Florida had won the 2008 national title and returned Tim Tebow and plenty of other awesome weapons — Riley Cooper, Aaron Hernandez, etc. But even though I had only begun writing nationally for Football Outsiders at the time and was still pretty Big 12-centric, it wasn’t hard to realize that ... the fun was gone. It was more about duty and expectation than enjoyable football that year.
There was no way Florida was going to live up to unending hype, and even though the Gators were, per S&P+, the best team in the country for the season as a whole, things almost felt disappointing. And that was before the Gator defense got utterly dominated by Alabama in the SEC title game. I can see why Urban Meyer burned out this season. (He retired briefly, then reconsidered, then stepped away after the 2010 season.)
A CFP selection this year would have been very interesting, by the way. There were four unbeaten teams ranked above the Gators in the AP poll at the end of the regular season — Alabama, Texas, Cincinnati, and TCU — and my assumption is that they’d have been in. But you can’t deny that the committee would have probably thought long and hard about putting a one-loss Florida in over a mid-major TCU, yeah? That would have been infuriating, and I’m thinking there are decent odds it would have happened.
End of the pre-realignment era
In mid-December 2009, Wisconsin’s Barry Alvarez let it slip that the Big Ten was considering expanding its 11-team roster to get to 12 teams (at least) and a conference title game. That set off a chain of events that turned everyone in the Big 12 against each other and resulted in not only the Big Ten adding Nebraska, but also the Pac-10 attempting to loot half the Big 12 as well. (It ended up with Colorado and the MWC’s Utah.)
Things stayed mostly intact in 2010 before Nebraska, Colorado, and others officially said goodbye for the 2011 season. Still, it’s interesting to step back and look at the conference hierarchy at the time before Alvarez’s statements.
- SEC (+15.6)
- Big 12 (+11.7)
- ACC (+8.6)
- Big Ten (+7.5)
- Big East (+6.6)
- Pac-10 (+6.6)
- Mountain West (-0.3)
- WAC (-7.2)
- Conference USA (-7.5)
- MAC (-11.3)
- Sun Belt (-16.6)
The SEC was easily the No. 1 conference in 2009, but the Big 12 was easily No. 2. Colorado struggled, but the other three programs the conference would soon lose — Nebraska, Missouri, and Texas A&M — were all ranked 43rd or better in this season and 28th or better in 2010.
You, uh, can also see why the Pac-10 wanted to try to improve its lot in life. Oregon and Stanford were on the rise, but USC slipped in 2009, and the conference had only two teams ranked better than 32nd. The Big East — still a thing at this point — had three, including the two teams that played in one of the most amazing games I’ve ever seen.
- Notre Dame had the No. 3 overall offense! And went 6-6! Jimmy Clausen obviously struggled in the pros, but we maybe forget what an incredible season he had in 2009: 68 percent completion rate, 3,722 passing yards, 28 TDs, 4 INTs. (Having Golden Tate helped just a tad. Tate caught 93 passes for 1,496 yards and won the Biletnikoff*.) Unfortunately, the Irish defense ranked 77th, and the schedule was tough enough to feature 10 games decided by one possession. Notre Dame won only four of them.
- Indeed, Stanford was rising quickly. From 78th in 2007, Jim Harbaugh’s first season, the Cardinal had moved to 60th in 2008, then jumped even further to 35th in 2009. The next season: fourth.
- Your top 10 mid-majors (not including the Big East, which was a power conference at the time): No. 5 TCU, No. 14 Boise State, No. 26 Utah, No. 30 BYU, No. 49 Air Force, No. 52 Fresno State, No. 53 Houston, No. 54 CMU (Dan LeFevour!), No. 58 Southern Miss, and No. 63 Nevada. The best of the MWC and WAC (to which Boise State, Fresno State, and Nevada still belonged) could have combined to make a P5-level conference that year! But as the WAC programs came aboard, TCU and Utah jumped to the P5, and BYU went independent. The stars were never quite aligned.
* I will forever maintain that Missouri’s Danario Alexander was screwed. As good as Tate was, Alexander outdid him — 113 catches, 1,781 yards, 14 TDs — and on an offense with fewer weapons. Everyone knew Danario was going to get the ball, and he still dominated. And I mean, dominated. The most impressive sustained performance I’ve ever seen in person. He caught 83 passes at 16.8 yards per catch, with 10 TDs, over three seasons in the pros, too, but his knees basically disintegrated. RIP, Danario’s knees.