You can now find full, updated 2008 S&P+ rankings at Football Outsiders — overall, offense, defense. (Actually, all of the S&P+ rankings have been updated, but we’re going to keep focusing on one year at a time. It’s a long offseason, right?)
What changes have I made?
I’m including this blurb in each of these posts, so if you read it previously, feel free to scroll down to the rankings.
First, with a better chance to analyze which statistical factors are most consistent from the beginning of the season to the end, I made some slight tweaks in the weighting of each statistical factor (the short version: efficiency carries even more weight now). I also worked marginal efficiency and marginal explosiveness into the equation.
Then, I implemented the changes I made during 2018 for previous years. From each week’s rankings post:
- I changed the garbage time definition. S&P+ stops counting the major stats once the game has entered garbage time. Previously, that was when a game ceased to be within 27 points in the first quarter, 24 in the second, 21 in the third, and 16 in the fourth. Now I have expanded it: garbage time adjustments don’t begin until a game is outside of 43 points in the first quarter, 37 in the second, 27 in the third, and 21 in the fourth. That change came because of a piece I wrote about game states at Football Study Hall.
- Preseason projections will remain in the formulas all season. Fans hate this — it’s the biggest complaint I’ve heard regarding ESPN’s FPI formulas. Instinctively, I hate it, too. But here’s the thing: it makes projections more accurate. Our sample size for determining quality in a given season is tiny, and incorporating projection factors found in the preseason rankings decreases the overall error in projections. (For previous years, from before I actually released any sort of preseason projections, I found the most predictive success by keeping a layer of five-year history within the ratings. It’s a small percentage, but it’s in there.)
- To counteract this conservative change, I’m also making S&P+ more reactive to results, especially early in the season. If I’m admitting that S&P+ needs previous-year performances to make it better, I’m also going to admit that S&P+ doesn’t know everything it needs to early in a season, and it’s going to react a bit more to actual results. Basically, I’ve added a step to the rankings process: after the rankings are determined, I go back and project previous games based on those ratings, and I adjust the ratings based on how well they fit (or don’t fit) those results. The adjustment isn’t enormous, and it diminishes dramatically as the season unfolds.
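The expanded garbage-time definition above is concrete enough to sketch in code. This is a minimal illustration, not the actual S&P+ implementation — the function name and structure are my own:

```python
def in_garbage_time(quarter, score_margin):
    """Return True once a game's margin has moved outside the
    garbage-time threshold for the given quarter, under the new,
    wider definition described above."""
    # New thresholds: 43 (Q1), 37 (Q2), 27 (Q3), 21 (Q4).
    # The old definition used 27/24/21/16.
    thresholds = {1: 43, 2: 37, 3: 27, 4: 21}
    return abs(score_margin) > thresholds[quarter]
```

So a 30-point lead in the third quarter now counts as garbage time (outside 27), while a 40-point first-quarter lead does not (still within 43).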
One more recent change had the most impact, however: I made S&P+ more reactive to conferences as well. It works like the results-reactivity step above: after the ratings are determined, I project previous games based on those ratings, and I track each conference’s average performance versus projection. For the top conference, I found that by the end of the season the projections were aiming low by two or three points per game per team. For the bottom conference, it was the reverse.
By shifting each team’s rating based on this conference average, and by increasing the weight of that adjustment as the season progresses, S&P+ improves its against-the-spread performance by about one percentage point per season and cuts its average absolute error by somewhere between 0.2 and 0.3 points per game. That doesn’t seem like much, but look at the Prediction Tracker results and note how much of a difference 1% and 0.3 points per game could make to a predictive ranking there. It’s pretty big.
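The conference adjustment described above can be sketched as follows. Everything here — the function name, the naive rating-difference margin model, and the data shapes — is my own illustration of the mechanism, not the actual S&P+ code:

```python
from statistics import mean

def conference_adjustment(games, ratings, conferences, season_weight):
    """Project past games from the current ratings, average each
    conference's error versus projection, then shift each team's
    rating by a season-weighted share of its conference's error.

    games: list of (home, away, actual_margin) tuples
    ratings: dict of team -> rating (points above average)
    conferences: dict of team -> conference name
    season_weight: 0.0 early in the season, growing toward 1.0
    """
    errors = {}  # conference -> list of (actual - projected) margins
    for home, away, actual_margin in games:
        projected = ratings[home] - ratings[away]  # naive margin projection
        residual = actual_margin - projected
        errors.setdefault(conferences[home], []).append(residual)
        errors.setdefault(conferences[away], []).append(-residual)
    adjusted = {}
    for team, rating in ratings.items():
        conf_error = mean(errors.get(conferences[team], [0.0]))
        # A conference that keeps beating its projections (positive
        # error) gets shifted up; one that keeps missing gets shifted down.
        adjusted[team] = rating + season_weight * conf_error
    return adjusted
```

If a conference’s teams beat their projections by an average of 3 points per game, each of its teams gets bumped up by some fraction of those 3 points, with the fraction growing as the season goes on — which is exactly how a top conference that was “aiming low by two or three points per game per team” gets corrected.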
It does, however, mean a fundamental shift in how mid-major teams are judged. Not to spoil the suspense, but look at the difference this adjustment made in some 2018 rankings:
- Fresno State: originally ninth, now 16th
- UCF: originally eighth, now 18th
- Utah State: originally 19th, now 21st
- Appalachian State: originally 11th, now 29th
It’s a pretty harsh adjustment, though it both makes the numbers better and perhaps passes the eye test a bit more. So we’re going with it.
Wait, so you’re including previous years of history in each season’s ratings? How could that possibly be right?
Back when I mentioned at the beginning of 2018 that I’d be using priors in the rankings all year, I had an interesting conversation with some readers on Twitter about what happens at the end of the year. Would I be removing the priors for the year-end ratings so that a) we’d be evaluating teams’ performance based only on what happened in that year, and b) the “recent history” prior wouldn’t then carry influence right into the next year’s projections (since they would include the priors that were included in the previous year’s rankings)?
It was a legitimate question, one to which I don’t think there’s a right answer. To me, it simply comes down to this question: when you’re looking back at previous seasons’ ratings, what are you looking for?
To me, it’s usually to get a sense of who might beat whom, right? I understand the draw of a “this year only” evaluative look, but S&P+ is intended to be a predictive measure, and I’ve decided to include these priors because they make the predictions better. In that sense, removing those priors at the end of the season makes it less of a predictive measure (even though, obviously, any “predictions” made in 2019 about a team from 2006 are theoretical only).
(How will this impact the preseason projections moving forward? The weights of each of the primary projection factors — recent performance, recruiting, and returning production — are always based on updated correlations, meaning that it depends on how strongly each factor predicts the next season’s performance. I’m dumping the new S&P+ ratings into the old engine, and, well, we’ll find out. Maybe the recent performance numbers end up with a lower correlation. We’ll see.)
On with the rankings.
2008 S&P+ rankings
| Team | Record | S&P+ | Rk | Off. S&P+ | Rk | Def. S&P+ | Rk | ST S&P+ | Rk |
|---|---|---|---|---|---|---|---|---|---|
| San Jose State | 6-6 | -13.2 | 100 | 12.9 | 116 | 25.4 | 63 | -0.7 | 83 |
| San Diego State | 3-9 | -17.9 | 110 | 19.3 | 96 | 36.7 | 110 | -0.5 | 76 |
| New Mexico State | 3-9 | -21.7 | 115 | 17.1 | 108 | 36.9 | 111 | -1.9 | 110 |
The almost-dynasty, continued
Under Pete Carroll, USC shared the 2003 national title with LSU and won it outright in 2004. Even though we’re supposed to pretend some of those now-vacated wins didn’t happen, we saw them — they happened. And they kept happening for a few more seasons. And while 1.5 titles will forever be impressive, it was close to so much more.
USC ranked second in S&P+ in 2005, second in 2006, first in 2007, and, [spoilers], first in 2008. They were the best or second-best team in the country every year for six straight seasons. But they managed to lose by a combined six points at Oregon State and UCLA in 2006, by a combined eight to Stanford and Oregon in 2007, and by six at Oregon State in 2008. Erase two or three of those losses, and you’ve got a three-, four-, or maybe five-time national champion.
You could make the case that Pete Carroll’s 2008 USC squad, his last truly great one, might have been his greatest one. This was a vengeful team — the Trojans beat Penn State (fifth in S&P+) by 14, Ohio State (11th) by 32, and Oregon (15th) by 34. They won only one game by single digits, and it was on the road against a good Arizona team (22nd).
For the second time in three years, a trip to Corvallis quite possibly kept USC out of the BCS Championship. (It might have in 2006; it definitely did in 2008.) The Rodgers brothers and their orange-clad cohorts might have almost by themselves turned a potential four-time national champ into a two-time champ.
Peak Big 12
From my 2008 advanced box scores piece:
The Big 12 experienced a perfect storm of innovation and quarterback experience, with OU’s Sam Bradford winning the Heisman and nearly every team starting either a senior (Mizzou’s Chase Daniel, Texas Tech’s Graham Harrell, Nebraska’s Joe Ganz) or junior quarterback (Texas’ Colt McCoy, OSU’s Zac Robinson, Kansas’ Todd Reesing, Kansas State’s Josh Freeman). Big 12 teams nabbed seven of the top nine spots in Off. S&P+, plus an eighth in the top 20, and Baylor, with a freshman named Robert Griffin III, surged to 33rd.
The conference-level adjustments I added to S&P+ did very, very happy things to the Big 12’s 2008 S&P+ ratings. Oklahoma State, Texas Tech, Kansas State, Colorado, and Iowa State all saw their rankings rise by at least 10 spots, and not only were there seven league teams in the Off. S&P+ top 10, there were five in the overall top 10.
I’ve maintained for a while that Missouri’s 2008 team, which went 10-4, was quite possibly/likely better than the 2007 edition that went 12-2 and finished fourth in the AP poll. The numbers back me up. But while the Tigers improved a little, much of the rest of the conference improved a lot. OSU was suddenly awesome, and unlike in 2007, Missouri had to play Texas. That made quite a difference.
By the next year, Mizzou had lost Daniel and Jeremy Maclin, Tech had lost Harrell and Michael Crabtree, OU had lost Bradford to injury, etc. But 2008 was indeed a perfect convergence of innovation and experience, and if a 4-team CFP had been in place in this season, the conference almost certainly would have had two teams in it. (It’s possible we’d have had an OU-Texas rematch in the semifinals, too. That wouldn’t have sucked.)
- Four of the top S&P+ ratings from 2005-18 came in this season. OU’s plus-32.4 rating would have ranked first in 2007; it was fourth in 2008. Hell, No. 5 Penn State might’ve been better than any 2010 team...
- Post-game win expectancy suggested that Tennessee was a top-30 team and should have won about 7.3 games. Instead, the Vols won only five, going 1-3 in one-possession games (they lost 27-24 to UCLA, 14-12 to Auburn, and 13-7 to, gulp, Wyoming), and ended up firing Phil Fulmer, briefly hiring Lane Kiffin, and heading off into the wilderness for a decade. A 7-5 season wouldn’t have made Vols fans particularly happy, either, but do they still dump Fulmer for that? Interesting butterfly effect there.
- If Tennessee was the least fortunate team in 2008, Virginia Tech might have been the most fortunate. The Hokies should have won about 6.7 games, per post-game win expectancy; instead, they went 10-4 and won both a weak-as-hell ACC and the Orange Bowl (over Cincinnati) despite a No. 37 ranking.
- Buffalo, meanwhile, had the statistical profile of a five-win team but instead won eight games and a MAC title. Better to be lucky than good and whatnot. But the fact that Turner Gill ended up getting the Kansas job a year later, in part because of this statistically unlikely run, makes that hire look even more questionable than it already did.
- No, my using pictures of USC and (new USC offensive coordinator) Graham Harrell above was not a coincidence.