You can now find full, updated 2013 S&P+ rankings at Football Outsiders — overall, offense, defense. (Actually, all of the S&P+ rankings have been updated, but we’re going to keep focusing on one year at a time. It’s a long offseason, right?)
What changes have I made?
I’m including this blurb in each of these posts, so if you read it previously, feel free to scroll down to the rankings.
First, with a better chance to analyze which statistical factors are most consistent from the beginning of the season to the end, I made some slight tweaks in the weighting of each statistical factor (the short version: efficiency carries even more weight now). I also worked marginal efficiency and marginal explosiveness into the equation.
Then, I implemented the changes I made during 2018 for previous years. From each week’s rankings post:
- I changed the garbage time definition. S&P+ stops counting the major stats once the game has entered garbage time. Previously, that was when a game ceased to be within 27 points in the first quarter, 24 in the second, 21 in the third, and 16 in the fourth. Now I have expanded it: garbage time adjustments don’t begin until a game is outside of 43 points in the first quarter, 37 in the second, 27 in the third, and 21 in the fourth. That change came because of a piece I wrote about game states at Football Study Hall.
- Preseason projections will remain in the formulas all season. Fans hate this — it’s the biggest complaint I’ve heard regarding ESPN’s FPI formulas. Instinctively, I hate it, too. But here’s the thing: it makes projections more accurate. Our sample size for determining quality in a given season is tiny, and incorporating projection factors found in the preseason rankings decreases the overall error in projections. (For previous years, from before I actually released any sort of preseason projections, I found the most predictive success by keeping a layer of five-year history within the ratings. It’s a small percentage, but it’s in there.)
- To counteract this conservative change, I’m also making S&P+ more reactive to results, especially early in the season. If I’m admitting that S&P+ needs previous-year performances to make it better, I’m also going to admit that S&P+ doesn’t know everything it needs to early in a season, and it’s going to react a bit more to actual results. Basically, I’ve added a step to the rankings process: after the rankings are determined, I go back and project previous games based on those ratings, and I adjust the ratings based on how well they fit (or don’t fit) those results. The adjustment isn’t enormous, and it diminishes dramatically as the season unfolds.
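The expanded garbage-time definition in the first change above can be sketched as a simple check. The thresholds are the ones from the post; the function name and the margin-based framing are my own simplification:

```python
# Margin beyond which a game is considered garbage time, by quarter
# (the expanded definition; the old cutoffs were 27/24/21/16).
GARBAGE_TIME_MARGIN = {1: 43, 2: 37, 3: 27, 4: 21}

def in_garbage_time(quarter: int, margin: int) -> bool:
    """True once the lead exceeds the quarter's threshold, at which
    point S&P+ stops counting the major stats."""
    return abs(margin) > GARBAGE_TIME_MARGIN[quarter]
```

Under this definition, a 40-point first-quarter lead no longer triggers garbage time, where it would have under the old 27-point cutoff.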
One more recent change had the most impact, however: I made S&P+ more reactive to conferences as well. It’s similar to the step above: after the ratings are determined, I project previous games based on those ratings, and I track each conference’s average performance versus projection. For the top conference, I found that by the end of the season the projections were aiming low by two or three points per game per team. For the bottom conference, it was the reverse.
Shifting each team’s rating based on this conference average, and increasing the weight of that adjustment as the season progresses, basically improves against-the-spread performance by about one percentage point per season and cuts the average absolute error by somewhere between 0.2 and 0.3 points per game. That doesn’t seem like much, but look at the Prediction Tracker results and note how much of a difference 1% and 0.3 points per game could make to a predictive ranking there. It’s pretty big.
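A minimal sketch of that conference adjustment, with invented names, projections reduced to a simple rating gap (home field and the rest omitted), and a flat `weight` standing in for the factor that grows as the season progresses:

```python
from collections import defaultdict

def conference_shifts(games, ratings, conference_of, weight):
    """games: (team_a, team_b, actual_margin) tuples, margin from team_a's side.
    Returns a per-conference rating shift based on each conference's
    average performance versus projection."""
    miss_sum = defaultdict(float)
    miss_n = defaultdict(int)
    for a, b, actual in games:
        miss = actual - (ratings[a] - ratings[b])  # positive: a beat its projection
        miss_sum[conference_of[a]] += miss
        miss_n[conference_of[a]] += 1
        miss_sum[conference_of[b]] -= miss
        miss_n[conference_of[b]] += 1
    return {c: weight * miss_sum[c] / miss_n[c] for c in miss_sum}
```

If a conference’s teams beat their projections by, say, 2.5 points per game on average, every team in it gets nudged upward by a weighted share of that amount, and the reverse for a conference that underperforms.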
It does, however, mean a fundamental shift in how mid-major teams are judged. Not to spoil the suspense, but look at the difference this adjustment made in some 2018 rankings:
- Fresno State: originally ninth, now 16th
- UCF: originally eighth, now 18th
- Utah State: originally 19th, now 21st
- Appalachian State: originally 11th, now 29th
It’s a pretty harsh adjustment, though it both makes the numbers better and perhaps passes the eye test a bit more. So we’re going with it.
Wait, so you’re including previous years of history in each season’s ratings? How could that possibly be right?
Back when I mentioned at the beginning of 2018 that I’d be using priors in the rankings all year, I had an interesting conversation with some readers on Twitter about what happens at the end of the year. Would I be removing the priors for the year-end ratings so that a) we’d be evaluating teams’ performance based only on what happened in that year, and b) the “recent history” prior wouldn’t then carry influence right into the next year’s projections (since they would include the priors that were included in the previous year’s rankings)?
It was a legitimate question, one to which I don’t think there’s a right answer. To me, it simply comes down to this question: when you’re looking back at previous seasons’ ratings, what are you looking for?
To me, it’s usually to get a sense of who might beat whom, right? I understand the draw of a “this year only” evaluative look, but S&P+ is intended to be a predictive measure, and I’ve decided to include these priors because they make the predictions better. In that sense, removing those priors at the end of the season makes it less of a predictive measure (even though, obviously, any “predictions” made of a 2006 team in 2019 are theoretical only).
(How will this impact the preseason projections moving forward? The weights of each of the primary projection factors — recent performance, recruiting, and returning production — are always based on updated correlations, meaning that it depends on how strongly each factor predicts the next season’s performance. I’m dumping the new S&P+ ratings into the old engine, and, well, we’ll find out. Maybe the recent performance numbers end up with a lower correlation. We’ll see.)
On with the rankings.
2013 S&P+ rankings
| Team | Rec | S&P+ | Rk | Off. S&P+ | Rk | Def. S&P+ | Rk | ST S&P+ | Rk |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| New Mexico State | 2-10 | -35.1 | 121 | 17.7 | 110 | 52.4 | 125 | -0.4 | 84 |
| San Diego State | 8-5 | -4.8 | 80 | 26.9 | 72 | 29.9 | 77 | -1.8 | 119 |
| San Jose State | 6-6 | -5.1 | 81 | 33.0 | 45 | 38.6 | 110 | 0.5 | 47 |
Auburn: lucky and good
Second-order win totals suggest Auburn should have been more like a 9-5 team than 12-2 in 2013. The Tigers had just a 15 percent post-game win expectancy in the 34-28 win over Alabama, 36 percent in the 45-41 win over Texas A&M, 38 percent in the 43-38 win over Georgia, and 43 percent in the 31-24 win over Washington State. For that matter, they were only at 52 percent against Mississippi State and 69 percent against Ole Miss.
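A second-order win total is essentially the sum of postgame win expectancies across a season. A quick sketch using only the six close wins quoted above:

```python
# Postgame win expectancies for the six Auburn wins listed above
# (Alabama, Texas A&M, Georgia, Washington State, Mississippi State, Ole Miss).
close_wins = [0.15, 0.36, 0.38, 0.43, 0.52, 0.69]

# Summing win expectancy across games gives the "deserved" win total;
# the full 9.x second-order figure would also include the rest of the schedule.
expected_from_these = sum(close_wins)
print(round(expected_from_these, 2))  # 2.53 expected wins from six actual wins
```

In other words, Auburn banked six real wins from games that, on average, a team plays to about two and a half wins.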
As anyone who actually watched these games can attest, it certainly took a lot of good fortune to get Auburn to the BCS title game. Of course. That said...
- Their losses were to the No. 1 (FSU) and No. 4 teams (LSU).
- Good fortune or no, they also played and beat No. 2 (Bama) and No. 7 (Georgia), plus No. 17 (A&M), No. 22 (Mizzou), No. 32 (Ole Miss), and No. 41 (Mississippi State). They were first in Strength of Schedule, and the average top-five team would have enjoyed only about a 0.790 win percentage against their schedule (about an 11-3 record over 14 games). They had to do well just to get to that 9-win second-order total. (The next five teams in SOS went a combined 20-41.)
- They improved 49 spots, from 55th to sixth, in Gus Malzahn’s first season.
- The extra magic certainly made this team memorable, but Auburn traded blows with a bunch of awesome teams ... and spent a considerable portion of good karma in the process. The next year, they faced an even harder schedule (expected win percentage for an average top-five team: 0.755), again ranked sixth in S&P+, and went just 8-5. The next year, they faced the No. 2 schedule, ranked 23rd, and went 7-6. The next year: No. 2 schedule, 21st, 8-5.
But hey, if you’re going to only get so many good breaks, spend ‘em all in chunks. Auburn’s done that better than anybody (not that this is actually something you can control, but go with it) — the Tigers were 12-0 in one-possession games during the 2010 and 2013 regular seasons, earning two national title game bids and one title. The rest of the decade, they’re just 14-16 in such games.
Everything came together in Tallahassee
A team’s percentile rating is determined by slapping its S&P+ rating onto the scoring curve for the given season. Since games used to be lower-scoring, that means the top end of the S&P+ ratings will be lower — the best team of the 1930s, 1932 USC, graded out at only a plus-24.8 rating, but that was the 99.8th percentile, higher than the highest team of the 2010s.
The scoring curve for 2013 was a bit more spread out than in other years — the good teams were better, the bad teams worse, and everything was more high-scoring — so Florida State’s plus-38.5 rating only garnered a percentile rating of 98.7. That’s only 13th-highest going back to 2005 (eighth-highest among non-Saban teams). That said, in a pretty spread-out year, FSU still separated itself from the field.
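A hedged sketch of the percentile idea, assuming each season’s ratings are treated as roughly normal (the actual curve-fitting method isn’t specified in the post, and the function name is my own):

```python
from statistics import NormalDist

def rating_percentile(rating: float, season_mean: float, season_sd: float) -> float:
    """Where a raw S&P+ rating falls on a given season's scoring curve, 0-100."""
    return 100 * NormalDist(season_mean, season_sd).cdf(rating)
```

The wider (more spread-out) a season’s curve, the lower the percentile a given raw rating earns, which is how a plus-38.5 in 2013 can land below 1932 USC’s plus-24.8 in percentile terms.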
Jameis Winston has over time turned out to be a pretty hard dude to like, but keeping matters on the field for now, there’s no question that he was a game-changer for Jimbo Fisher and FSU. And his performance has stood out even more considering what’s happened since he left.
> Here’s a list of all Jimbo’s QB recruits at FSU (as HC)
>
> Record vs P5 as FSU starter:
>
> TD:INT vs P5:
>
> 8 of 11 didn’t finish at FSU; 6 never started a game.
>
> Since Jimbo left, Taggart has signed zero QBs. pic.twitter.com/Hsy8C7yXwf
>
> — Look! A Hale tweet! (@ADavidHaleJoint) March 10, 2019
FSU ranked third in Off. S&P+ in 2013, then ranked fourth in 2014. The Noles have enjoyed only one other top-10 ranking this decade, which, considering the recruiting involved, is a bit of an underachievement.
That said, Winston wasn’t the only difference-making newcomer to the 2013 squad. This was also Jeremy Pruitt’s only season in Tallahassee, and he made a massive difference. FSU has ranked in the Def. S&P+ top 10 five times this decade, but this was the Noles’ only time in the top two. Everything came together perfectly for this dominant team.
Rise of the Pac-12
The SEC was just about as good as ever in 2013, though it was a hair top-heavy — it boasted four of the top seven teams in the country but only four others in the top 40. (The shaaaaaame...)
That said, there was a new challenger to the Best Average S&P+ throne this year: the crazy-deep Pac-12.
- SEC (+15.4, down 0.3 adjusted points per game per team from 2012)
- Pac-12 (+14.0, up 4.5)
- Big 12 (+10.6, down 2.9)
- ACC (+10.3, up 4.8)
- Big Ten (+9.0, up 0.9)
- AAC (-1.7, down 3.9 from the Big East’s 2012 average)
- Mountain West (-5.9, down 0.2)
- Conference USA (-11.9, down 3.8)
- MAC (-12.8, down 2.2)
- Sun Belt (-16.1, down 6.4)
The Pac-12 boasted the No. 3 (Oregon), No. 10 (USC), No. 13 (Stanford), No. 15 (Washington), and No. 18 (UCLA) teams, but perhaps an even bigger reason for the conference’s rise was that the bottom half of the conference got its act together somewhat. Colorado improved by 41 spots to 71st, Washington State by 29 spots to 50th, Utah by 23 spots to 35th, and Arizona by 17 spots to 27th. Granted, Oregon State slid and Cal collapsed, but the average team rating still improved dramatically.
(The ACC’s average rose by an even larger amount, though the depth of that improvement was probably not as impressive — it mostly came from three schools: FSU, Miami, and Maryland, which rose 48 places in Randy Edsall’s third season.)
It’s amazing what happens when you make good hires, huh? UCLA’s Jim Mora, Washington State’s Mike Leach, Arizona’s Rich Rodriguez, and Arizona State’s Todd Graham were all in their respective second seasons, and Stanford’s David Shaw was in his third. Oregon didn’t miss a beat in Mark Helfrich’s first season replacing Chip Kelly. Washington was about to lose the underrated (at Washington, anyway) Steve Sarkisian to USC but would replace him with Boise State’s Chris Petersen.
Your conference improves when you hire coaches who are better than the ones they’re replacing, and the Pac-12 had something going for a while there.
Now? Less so. Helfrich couldn’t keep things going once he had to start making his own defensive hires, Mora lost steam, and Rodriguez, Graham, and Sarkisian were all replaced by guys who are, at best, their equals. Oregon State is lost, Colorado’s starting over again, etc. But this league really did have its act together in the not-so-distant past. Wouldn’t take that much for it to happen again.