You can now find full, updated 2014 S&P+ rankings at Football Outsiders — overall, offense, defense. (Actually, all of the S&P+ rankings have been updated, but we’re going to keep focusing on one year at a time. It’s a long offseason, right?)
What changes have I made?
I’m including this blurb in each of these posts, so if you read it previously, feel free to scroll down to the rankings.
First, with a better chance to analyze which statistical factors are most consistent from the beginning of the season to the end, I made some slight tweaks in the weighting of each statistical factor (the short version: efficiency carries even more weight now). I also worked marginal efficiency and marginal explosiveness into the equation.
Then, I implemented the changes I made during 2018 for previous years. From each week’s rankings post:
- I changed the garbage time definition. S&P+ stops counting the major stats once the game has entered garbage time. Previously, that was when a game ceased to be within 27 points in the first quarter, 24 in the second, 21 in the third, and 16 in the fourth. Now I have expanded it: garbage time adjustments don’t begin until a game is outside of 43 points in the first quarter, 37 in the second, 27 in the third, and 21 in the fourth. That change came because of a piece I wrote about game states at Football Study Hall.
- Preseason projections will remain in the formulas all season. Fans hate this — it’s the biggest complaint I’ve heard regarding ESPN’s FPI formulas. Instinctively, I hate it, too. But here’s the thing: it makes projections more accurate. Our sample size for determining quality in a given season is tiny, and incorporating projection factors found in the preseason rankings decreases the overall error in projections. (For previous years, from before I actually released any sort of preseason projections, I found the most predictive success by keeping a layer of five-year history within the ratings. It’s a small percentage, but it’s in there.)
- To counteract this conservative change, I’m also making S&P+ more reactive to results, especially early in the season. If I’m admitting that S&P+ needs previous-year performances to make it better, I’m also going to admit that S&P+ doesn’t know everything it needs to early in a season, and it’s going to react a bit more to actual results. Basically, I’ve added a step to the rankings process: after the rankings are determined, I go back and project previous games based on those ratings, and I adjust the ratings based on how well they fit (or don’t fit) those results. The adjustment isn’t enormous, and it diminishes dramatically as the season unfolds.
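The expanded garbage-time definition above is simple enough to sketch in code. This is a minimal illustration, not the actual S&P+ implementation; the function name and the strict-inequality treatment of the boundary are my assumptions:

```python
def is_garbage_time(quarter: int, point_margin: int) -> bool:
    """Return True once the score margin has moved outside the
    garbage-time threshold for the given quarter (expanded rule)."""
    # Expanded thresholds: garbage time doesn't begin until the game
    # is outside these margins. (The old rule used 27/24/21/16.)
    thresholds = {1: 43, 2: 37, 3: 27, 4: 21}
    # Assumption: "outside of N points" means strictly more than N.
    return abs(point_margin) > thresholds[quarter]
```

Under the old 27/24/21/16 thresholds, far more of a blowout’s plays would have been excluded; the expanded rule keeps counting stats deeper into lopsided games.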
One more recent change had the most impact, however: I made S&P+ more reactive to conferences as well. It’s similar to the step above: after the ratings are determined, I project previous games based on those ratings, and I track each conference’s average performance versus projection. For the top conference, I found that by the end of the season the projections were aiming low by two or three points per game per team. For the bottom conference, it was the reverse.
Shifting each team’s rating based on this conference average, and increasing the weight of that adjustment as the season progresses, improves against-the-spread performance by about one percentage point per season and cuts the average absolute error by somewhere between 0.2 and 0.3 points per game. That doesn’t seem like much, but look at the Prediction Tracker results and note how much of a difference 1% and 0.3 points per game could make to your projective ranking there. It’s pretty big.
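The conference-level correction can be sketched roughly like this. Everything here is an assumption for illustration (the function signature, the data format, and especially the linear weight ramp); the actual adjustment’s mechanics aren’t published:

```python
from collections import defaultdict

def conference_adjustment(games, team_conf, week, full_weight_week=13):
    """Sketch of the conference correction: project past games from
    the current ratings, average each conference's error versus those
    projections, and shift every team in the conference by a weighted
    share of that average.

    games: list of (team, projected_margin, actual_margin) tuples.
    team_conf: dict mapping team name to conference name.
    """
    errors = defaultdict(list)
    for team, projected, actual in games:
        # Positive error: the conference outperformed its projections.
        errors[team_conf[team]].append(actual - projected)
    # Assumption: the adjustment's weight ramps up linearly until it
    # reaches full strength late in the season.
    weight = min(week / full_weight_week, 1.0)
    return {conf: weight * sum(errs) / len(errs)
            for conf, errs in errors.items()}
```

A conference whose teams consistently beat their projections by three points would see every member’s rating nudged upward, which is how the top league ends up a couple of points better (and the bottom league a couple of points worse) than the raw ratings alone suggest.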
It does, however, mean a fundamental shift in how mid-major teams are judged. Not to spoil the suspense, but look at the difference this adjustment made in some 2018 rankings:
- Fresno State: originally ninth, now 16th
- UCF: originally eighth, now 18th
- Utah State: originally 19th, now 21st
- Appalachian State: originally 11th, now 29th
It’s a pretty harsh adjustment, though it both makes the numbers better and perhaps passes the eye test a bit more. So we’re going with it.
Wait, so you’re including previous years of history in each season’s ratings? How could that possibly be right?
Back when I mentioned at the beginning of 2018 that I’d be using priors in the rankings all year, I had an interesting conversation with some readers on Twitter about what happens at the end of the year. Would I be removing the priors for the year-end ratings so that a) we’d be evaluating teams’ performance based only on what happened in that year, and b) the “recent history” prior wouldn’t then carry influence right into the next year’s projections (since they would include the priors that were included in the previous year’s rankings)?
It was a legitimate question, one to which I don’t think there’s a right answer. To me, it simply comes down to this question: when you’re looking back at previous seasons’ ratings, what are you looking for?
To me, it’s usually to get a sense of who might beat whom, right? I understand the draw of a “this year only” evaluative look, but S&P+ is intended to be a predictive measure, and I’ve decided to include these priors because they make the predictions better. In that sense, removing those priors at the end of the season makes it less of a predictive measure (even though obviously any “predictions” made of a team from 2006 in 2019 are theoretical only).
(How will this impact the preseason projections moving forward? The weights of each of the primary projection factors — recent performance, recruiting, and returning production — are always based on updated correlations, meaning that it depends on how strongly each factor predicts the next season’s performance. I’m dumping the new S&P+ ratings into the old engine, and, well, we’ll find out. Maybe the recent performance numbers end up with a lower correlation. We’ll see.)
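A toy version of that correlation-based weighting might look like the following. The normalization (weights proportional to each factor’s absolute correlation with next-season performance) is my assumption for illustration; the real projection engine’s scheme isn’t described here:

```python
import numpy as np

def correlation_weights(factors, next_season_ratings):
    """Weight each projection factor (recent performance, recruiting,
    returning production, etc.) by how strongly it correlates with the
    following season's ratings across a sample of teams.

    Assumption: weights are simply proportional to the absolute
    Pearson correlation of each factor with next-season performance.
    """
    corrs = np.array([abs(np.corrcoef(f, next_season_ratings)[0, 1])
                      for f in factors])
    return corrs / corrs.sum()
```

If new S&P+ ratings change how well “recent performance” predicts the next season, its correlation, and therefore its weight, shifts automatically, which is exactly the “dump the new ratings into the old engine and find out” process described above.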
On with the rankings.
2014 S&P+ rankings
| Team | Rec | S&P+ | Rk | Off. S&P+ | Rk | Def. S&P+ | Rk | ST S&P+ | Rk |
|---|---|---|---|---|---|---|---|---|---|
| San Diego State | 7-6 | 1.0 | 67 | 25.1 | 91 | 24.6 | 51 | 0.5 | 43 |
| San Jose State | 3-9 | -11.5 | 105 | 22.1 | 104 | 30.5 | 86 | -3.1 | 127 |
| New Mexico State | 2-10 | -23.3 | 127 | 23.0 | 99 | 43.3 | 126 | -2.9 | 126 |
Best vs. Best At The End Of The Year
As far as the national title race goes, every season has plot twists. (Well, most do. 2018 really didn’t have all that many.) But 2014 had a lot.
- Four weeks into the season, Oklahoma and Texas A&M were both in the AP top 6 and a combined 8-0. They would go a combined 8-10 from there.
- At that same time, Ohio State was 22nd, having suffered a meek-looking, title-disqualifying loss to Virginia Tech in Week 2. (The next week, VT lost to East Carolina, which was ranked 23rd at the time.)
- A month later, Mississippi State (unranked in the preseason) was No. 1 in the country. Ole Miss (preseason No. 18) was No. 3, having upset Alabama. Both were unbeaten. I wrote about the Magnolia State being the center of the football universe.
- Ohio State? Still 13th at this point, 10th among one-loss teams. When the inaugural CFP rankings were unveiled a week later, the Buckeyes were 16th.
- A month later, Ole Miss and Mississippi State had lost a combined five times. Auburn had been as high as third and was now 8-4 and 19th. Everything we thought we knew at the end of October had been flipped around.
- A month or so later, Ohio State was the national champion.
The year-end S&P+ rankings, by nature, look at the full-season S&P+ ratings. That typically does a pretty good job of telling us who the best teams were in a given season. But in 2014, the balance of power shifted monthly. And we end up with a list in which a three-loss Georgia ranks ahead of the national champ, an 8-5 Auburn ends up ahead of both Baylor and TCU, etc. Weird year, weird rankings.
I got yelled at a lot by FSU fans in 2014. A lot. My S&P+ rankings — the old edition, that is — phased out preseason projections pretty quickly and were very much unimpressed with the Seminoles’ ability to do just enough to win games but never enough to actually look good.
I evidently could have saved myself a lot of pain by moving to this new algorithm a lot sooner.
The updated S&P+ rankings make greater use of priors, which of course does this FSU team a lot of favors (considering how amazing the Noles were in 2013); plus, with a larger sample of data, I’ve updated the weighting of factors to make them more predictive. Evidently that did the Noles some major favors, too.
That’s not the entire story here, though. I re-simulated 2014, start to finish, and five weeks in, the Noles were down to ninth. Nine weeks in, they were still just seventh. But they rose back to second by beating top-30 Miami and Florida teams, then handling No. 33 Georgia Tech by what S&P+ saw as more than a two-point margin. While everybody else but Alabama, Ohio State, Oregon, and maybe Baylor was faltering (even TCU experimented with the idea of losing to Kansas late in the year), FSU rose back to second.
Lots of teams saw their S&P+ rankings change quite a bit with my new method, but this is one of the most fascinating changes I’ve noticed. Just think of all the Twitter folks I might not have had to mute had I made this switch five years ago.
The SEC’s best year ended poorly
Eight weeks into the season, here’s where SEC teams ranked in the (new) S&P+:
1. Alabama (6-1)
2. Ole Miss (7-0)
3. Auburn (5-1)
4. Mississippi State (6-0)
5. Georgia (6-1)
8. LSU (6-2)
11. Texas A&M (5-3)
14. South Carolina (4-3)
16. Missouri (5-2)
24. Florida (3-3)
29. Kentucky (5-2)
30. Tennessee (3-4)
31. Arkansas (3-4)
66. Vanderbilt (2-5)
The entire top five, nine of the top 16, 13 of the top 31, and an otherworldly average S&P+ rating of plus-21.8. As October flipped to November, the Southeastern Conference may have been at its highest point ever.
But while Alabama mostly kept up its pace and Georgia did enough to stay up there (and Arkansas surged late in the season), Ole Miss, Auburn, and Mississippi State all faded in November. The Rebels lost their way after Laquon Treadwell’s injury and the heart-breaking loss to Auburn, and the Tigers butt-fumbled against Texas A&M and drifted into the wilderness. Then all three lost their bowl games. (Okay, Ole Miss and MSU didn’t just lose bowl games — they got smoked in them, by TCU and Georgia Tech, respectively.)
By the time the season had ended, the league had slipped a few points from its peak. And it was still at just about an all-time season-ending high.
Full-season average S&P+ ratings, 2014:
- SEC (+18.0, up 2.6 adjusted points per game per team from 2013)
- Big 12 (+9.6, down 1.0)
- Pac-12 (+9.6, down 4.4)
- ACC (+8.9, down 1.4)
- Big Ten (+8.3, down 0.7)
- AAC (-4.6, down 2.9)
- Mountain West (-5.5, up 0.4)
- Conference USA (-6.4, up 5.5)
- MAC (-9.5, up 3.3)
- Sun Belt (-10.0, up 6.1)
Ohio State couldn’t carry a weak Big Ten, and the ACC still sank from its 2013 spot despite S&P+ liking FSU a lot more than it used to. The Pac-12 stumbled from 2013 heights, and overall, the SEC was the only one of the top six conferences to actually improve. It inflated, and everyone else of particular import deflated.
And then, at the end of the SEC’s best year, teams from the Big Ten and Pac-12 played for the national title. Naturally.