You can now find full, updated 2015 S&P+ rankings at Football Outsiders — overall, offense, defense. (Actually, all of the S&P+ rankings have been updated, but we’re going to keep focusing on one year at a time. It’s a long offseason, right?)
What changes have I made?
I’m including this blurb in each of these posts, so if you read it previously, feel free to scroll down to the rankings.
First, with a better chance to analyze which statistical factors are most consistent from the beginning of the season to the end, I made some slight tweaks in the weighting of each statistical factor (the short version: efficiency carries even more weight now). I also worked marginal efficiency and marginal explosiveness into the equation.
Then, I implemented the changes I made during 2018 for previous years. From each week’s rankings post:
- I changed the garbage time definition. S&P+ stops counting the major stats once the game has entered garbage time. Previously, that was when a game ceased to be within 27 points in the first quarter, 24 in the second, 21 in the third, and 16 in the fourth. Now I have expanded it: garbage time adjustments don’t begin until a game is outside of 43 points in the first quarter, 37 in the second, 27 in the third, and 21 in the fourth. That change came because of a piece I wrote about game states at Football Study Hall.
- Preseason projections will remain in the formulas all season. Fans hate this — it’s the biggest complaint I’ve heard regarding ESPN’s FPI formulas. Instinctively, I hate it, too. But here’s the thing: it makes projections more accurate. Our sample size for determining quality in a given season is tiny, and incorporating projection factors found in the preseason rankings decreases the overall error in projections. (For previous years, from before I actually released any sort of preseason projections, I found the most predictive success by keeping a layer of five-year history within the ratings. It’s a small percentage, but it’s in there.)
- To counteract this conservative change, I’m also making S&P+ more reactive to results, especially early in the season. If I’m admitting that S&P+ needs previous-year performances to make it better, I’m also going to admit that S&P+ doesn’t know everything it needs to early in a season, and it’s going to react a bit more to actual results. Basically, I’ve added a step to the rankings process: after the rankings are determined, I go back and project previous games based on those ratings, and I adjust the ratings based on how well they fit (or don’t fit) those results. The adjustment isn’t enormous, and it diminishes dramatically as the season unfolds.
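The garbage time cutoffs described above amount to a simple per-quarter threshold check. Here’s a minimal sketch — this is my illustration, not the actual S&P+ code, and the function and variable names are mine:

```python
# Point margins, by quarter, beyond which garbage time begins.
# Values are the old and expanded definitions described above.
OLD_THRESHOLDS = {1: 27, 2: 24, 3: 21, 4: 16}
NEW_THRESHOLDS = {1: 43, 2: 37, 3: 27, 4: 21}

def in_garbage_time(quarter: int, margin: int, thresholds=NEW_THRESHOLDS) -> bool:
    """A play counts as garbage time if the score margin exceeds
    the threshold for the current quarter."""
    return abs(margin) > thresholds[quarter]

# A 28-point lead in the second quarter used to trigger garbage time,
# but no longer does under the expanded definition:
print(in_garbage_time(2, 28, OLD_THRESHOLDS))  # True
print(in_garbage_time(2, 28, NEW_THRESHOLDS))  # False
```

The practical effect of widening the thresholds is that far fewer plays get excluded from the major stats, since blowout-margin games are rarer than 24-point games.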
One more recent change had the most impact, however: I made S&P+ more reactive to conferences as well. It’s similar to the reactivity step above: after the ratings are determined, I project previous games based on those ratings and track each conference’s average performance versus projection. For the top conference, I found that by the end of the season the projections were aiming low by two or three points per game per team. For the bottom conference, it was the reverse.
Shifting each team’s rating by this conference average, with the weight of the adjustment increasing as the season progresses, improves against-the-spread performance by about one percentage point per season and cuts the average absolute error by somewhere between 0.2 and 0.3 points per game. That doesn’t seem like much, but look at the Prediction Tracker results and note how much of a difference 1% and 0.3 points per game could make to your projective ranking there. It’s pretty big.
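The conference adjustment described above can be sketched as a two-pass computation: average each conference’s performance-versus-projection residual, then shift every team by its conference’s average, weighted by how far along the season is. This is a rough illustration under my own assumptions (names, inputs, and the weighting scheme are mine, not the actual S&P+ implementation):

```python
def conference_adjust(ratings, conferences, residuals, season_weight):
    """Shift each team's rating by its conference's average
    performance-vs-projection, weighted more heavily late in the season.

    ratings:       {team: rating}
    conferences:   {team: conference}
    residuals:     {team: avg points per game vs. projection}
    season_weight: 0.0 early in the season -> 1.0 by season's end
    """
    # Pass 1: average residual per conference
    conf_totals, conf_counts = {}, {}
    for team, conf in conferences.items():
        conf_totals[conf] = conf_totals.get(conf, 0.0) + residuals[team]
        conf_counts[conf] = conf_counts.get(conf, 0) + 1
    conf_avg = {c: conf_totals[c] / conf_counts[c] for c in conf_totals}

    # Pass 2: shift every team by its conference's average miss
    return {team: rating + season_weight * conf_avg[conferences[team]]
            for team, rating in ratings.items()}

# Toy example: conference X outperformed projections by 3 ppg on average,
# so both its teams get bumped up (by half that, at midseason weight 0.5).
ratings = {"A": 10.0, "B": 5.0, "C": -3.0}
conferences = {"A": "X", "B": "X", "C": "Y"}
residuals = {"A": 2.0, "B": 4.0, "C": -1.0}
print(conference_adjust(ratings, conferences, residuals, 0.5))
```

Note how the growing `season_weight` matches the description: the adjustment is gentle early, when conference residuals are noisy, and full-strength by season’s end.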
It does, however, mean a fundamental shift in how mid-major teams are judged. Not to spoil the suspense, but look at the difference this adjustment made in some 2018 rankings:
- Fresno State: originally ninth, now 16th
- UCF: originally eighth, now 18th
- Utah State: originally 19th, now 21st
- Appalachian State: originally 11th, now 29th
It’s a pretty harsh adjustment, though it both makes the numbers better and perhaps passes the eye test a bit more. So we’re going with it.
Wait, so you’re including previous years of history in each season’s ratings? How could that possibly be right?
Back when I mentioned at the beginning of 2018 that I’d be using priors in the rankings all year, I had an interesting conversation with some readers on Twitter about what happens at the end of the year. Would I be removing the priors for the year-end ratings so that a) we’d be evaluating teams’ performance based only on what happened in that year, and b) the “recent history” prior wouldn’t then carry influence right into the next year’s projections (since they would include the priors that were included in the previous year’s rankings)?
It was a legitimate question, one to which I don’t think there’s a right answer. To me, it simply comes down to this question: when you’re looking back at previous seasons’ ratings, what are you looking for?
To me, it’s usually to get a sense of who might beat whom, right? I understand the draw of a “this year only” evaluative look, but S&P+ is intended to be a predictive measure, and I’ve decided to include these priors because they make the predictions better. In that sense, removing those priors at the end of the season makes it less of a predictive measure (even though obviously any “predictions” made of a team from 2006 in 2019 are theoretical only).
(How will this impact the preseason projections moving forward? The weights of each of the primary projection factors — recent performance, recruiting, and returning production — are always based on updated correlations, meaning that it depends on how strongly each factor predicts the next season’s performance. I’m dumping the new S&P+ ratings into the old engine, and, well, we’ll find out. Maybe the recent performance numbers end up with a lower correlation. We’ll see.)
On with the rankings.
2015 S&P+ rankings
| Team | Rec | S&P+ | Rk | Off. S&P+ | Rk | Def. S&P+ | Rk | ST S&P+ | Rk |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| San Diego State | 11-3 | 8.0 | 48 | 28.6 | 69 | 22.6 | 38 | 2.0 | 10 |
| San Jose State | 6-7 | -4.5 | 89 | 26.8 | 80 | 32.0 | 91 | 0.8 | 38 |
| New Mexico State | 3-9 | -18.6 | 122 | 24.5 | 96 | 42.2 | 125 | -0.9 | 93 |
In my walkthrough of 2015 advanced box scores earlier this week, I may have generalized a bit too much:
> On a macro level, focusing mostly on the national title race, 2015 wasn’t amazing. Alabama, Clemson, and Ohio State were the best teams in the country for most of the way, and it seemed like there was basically a battle for one extra CFP spot. When Ohio State lost, that made things a bit weird, but Alabama pretty quickly rebounded from a funky loss to Ole Miss, and neither semifinal was close.
From a perceptions standpoint, that was pretty much what I remembered. As soon as Deshaun Watson looked like Deshaun Watson (and especially after Clemson beat Notre Dame in the monsoon game), it was clear that the Tigers might be a national title contender. It was also true that Ohio State and Alabama were easily the top two contenders in the field.
But from a stats standpoint, it was at least a little bit blurrier than that.
For one thing, that Ole Miss team that had the “funky” win over Alabama? Really, really damn good. Their win in Tuscaloosa was powered by a stupid deflection TD, but you still had to be rock solid to be that close to beating Bama, and when Ole Miss was good in 2015, Ole Miss was terrifying.
They beat good LSU, Mississippi State, and Texas A&M teams by a combined 52 points. They beat bad UT Martin, Fresno State, and NMSU teams by a combined 174. They suffered a spectacularly fluky loss to Arkansas (every bit as fluky as their win over Bama, and probably more so).
They also laid two eggs in a three-week span. First, still perhaps a bit hungover from the Bama win (they had to ease by an iffy Vandy the week after), they got buzzsawed by a Florida team that was about to nose dive in Will Grier’s absence. Two weeks later, they made Memphis quarterback Paxton Lynch a lot of money by getting lit up defensively by the Tigers. But they woke up after that, finishing 5-1 with the Arkansas fluke loss and humiliating Oklahoma State in the Sugar Bowl.
Ole Miss had one bad offensive game and one and a half bad defensive games. Per S&P+, this was the best Rebel team since 1961.
We know everything that followed — sanctions, escorts, Hugh Freeze fired, more sanctions, etc. And it’s a shame that a program’s high-water mark in modern football included a loss to Memphis. But Ole Miss was really, really good. One of the five best teams in the country and, perhaps, better than Clemson on average.
All, meet nothing
When I was walking through the 2015 advanced box scores, it was jarring how many amazing shootouts, and how many stultifying defensive battles, we had in the same season. Texas Tech games averaged nearly 89 combined points; Missouri games averaged under 30.
Obviously there are some extremes every year, but here’s an ode to the teams that tried as hard as they could to play only one half of a given football game. Here are the teams with the biggest differences between their Off. and Def. S&P+ rankings:
- Missouri: 120 spots (third in Def. S&P+, 123rd in Off. S&P+)
- Texas Tech: 117 spots (fourth in Off. S&P+, 121st in Def. S&P+)
- Boston College: 114 spots (fifth in Def. S&P+, 119th in Off. S&P+)
- Northwestern: 114 spots (eighth in Def. S&P+, 122nd in Off. S&P+)
- Vanderbilt: 104 spots (16th in Def. S&P+, 120th in Off. S&P+)
- Kent State: 98 spots (30th in Def. S&P+, 128th in Off. S&P+)
- Indiana: 84 spots (25th in Off. S&P+, 109th in Def. S&P+)
- Oregon: 82 spots (first in Off. S&P+, 83rd in Def. S&P+)
- Illinois: 81 spots (23rd in Def. S&P+, 104th in Off. S&P+)
- Tulsa: 76 spots (48th in Off. S&P+, 124th in Def. S&P+)
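The gaps above are just the absolute difference between a team’s offensive and defensive S&P+ rankings. As a quick sanity check on the top of the list (using only the three ranking pairs shown above, since the full table isn’t reproduced here):

```python
# Offensive and defensive S&P+ rankings for the top three gap teams,
# taken from the list above.
off_rk = {"Missouri": 123, "Texas Tech": 4, "Boston College": 119}
def_rk = {"Missouri": 3, "Texas Tech": 121, "Boston College": 5}

# Gap = absolute difference between the two rankings
gaps = {team: abs(off_rk[team] - def_rk[team]) for team in off_rk}

for team, gap in sorted(gaps.items(), key=lambda kv: -kv[1]):
    print(f"{team}: {gap} spots")
# Missouri: 120 spots
# Texas Tech: 117 spots
# Boston College: 114 spots
```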
I watched the Mizzou-UConn game (9-6 Mizzou) and the Mizzou-Tennessee game (21-8 UT) in person. I watched the Mizzou-Vandy (10-3 Vandy) and Mizzou-Georgia (9-6 UGA — the second-worst 9-6 game Mizzou played that year) games on television. At least I can now understand that I was watching something truly special.
The average Texas Tech game in 2015, by the way, lasted 3:41. Two games went over four hours (TCU, Sam Houston), and neither went to OT. At least my awful Mizzou games ended more quickly (3:16).
Nationwide regression to the mean
Right around when November 2014 began, the SEC was maybe at its most powerful level ever, as I discussed in the 2014 rankings post. But bowl season didn’t go all that well for the league, and then it fell back to the pack pretty considerably. Mizzou fell by 49 spots in S&P+, South Carolina by 47, Kentucky by 27, Auburn by 17, Georgia by 15, and Mississippi State by 11. Ole Miss, Arkansas, and Tennessee all improved, but it wasn’t nearly enough to make up for the stumbles.
Mind you, the league still stayed at No. 1 in these rankings, primarily because almost all the other power conferences regressed, too.
Full-season average S&P+ ratings, 2015:
- SEC (+13.8, down 4.7 adjusted points per game per team from 2014)
- ACC (+9.6, up 0.7)
- Pac-12 (+9.3, down 0.3)
- Big Ten (+7.4, down 0.9)
- Big 12 (+7.3, down 2.3)
- AAC (-1.8, up 2.8)
- Mountain West (-4.8, up 0.7)
- MAC (-5.1, up 4.4)
- Sun Belt (-7.0, up 3.0)
- Conference USA (-9.3, down 2.9)
Four of five P5 conferences fell, and four of five G5 conferences rose. The top-ranked team (Ohio State) had a percentile rating of just 98.3 percent, the lowest for a top team since 2011. Everyone got a little more bunched together.
Spoiler: it wouldn’t last.