You can now find full, updated 2012 S&P+ rankings at Football Outsiders — overall, offense, defense. (Actually, all of the S&P+ rankings have been updated, but we’re going to keep focusing on one year at a time. It’s a long offseason, right?)
What changes have I made?
I’m including this blurb in each of these posts, so if you read it previously, feel free to scroll down to the rankings.
First, with a better chance to analyze which statistical factors are most consistent from the beginning of the season to the end, I made some slight tweaks in the weighting of each statistical factor (the short version: efficiency carries even more weight now). I also worked marginal efficiency and marginal explosiveness into the equation.
Then, I implemented the changes I made during 2018 for previous years. From each week’s rankings post:
- I changed the garbage time definition. S&P+ stops counting the major stats once the game has entered garbage time. Previously, that was when a game ceased to be within 27 points in the first quarter, 24 in the second, 21 in the third, and 16 in the fourth. Now I have expanded it: garbage time adjustments don’t begin until a game is outside of 43 points in the first quarter, 37 in the second, 27 in the third, and 21 in the fourth. That change came because of a piece I wrote about game states at Football Study Hall.
- Preseason projections will remain in the formulas all season. Fans hate this — it’s the biggest complaint I’ve heard regarding ESPN’s FPI formulas. Instinctively, I hate it, too. But here’s the thing: it makes projections more accurate. Our sample size for determining quality in a given season is tiny, and incorporating projection factors found in the preseason rankings decreases the overall error in projections. (For previous years, from before I actually released any sort of preseason projections, I found the most predictive success by keeping a layer of five-year history within the ratings. It’s a small percentage, but it’s in there.)
- To counteract this conservative change, I’m also making S&P+ more reactive to results, especially early in the season. If I’m admitting that S&P+ needs previous-year performances to make it better, I’m also going to admit that S&P+ doesn’t know everything it needs to early in a season, and it’s going to react a bit more to actual results. Basically, I’ve added a step to the rankings process: after the rankings are determined, I go back and project previous games based on those ratings, and I adjust the ratings based on how well (or how poorly) they fit those results. The adjustment isn’t enormous, and it diminishes dramatically as the season unfolds.
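For concreteness, here’s a minimal sketch of the two garbage-time definitions described above. The function and constant names are mine, and I’m assuming the margin must strictly exceed the quarter’s threshold:

```python
# Quarter -> margin threshold. A game is in garbage time once the
# scoring margin exceeds the threshold for the current quarter.
OLD_THRESHOLDS = {1: 27, 2: 24, 3: 21, 4: 16}  # previous definition
NEW_THRESHOLDS = {1: 43, 2: 37, 3: 27, 4: 21}  # expanded definition

def in_garbage_time(quarter, margin, thresholds=NEW_THRESHOLDS):
    """True once the major stats should stop counting."""
    return abs(margin) > thresholds[quarter]
```

A 24-point lead in the third quarter illustrates the difference: that was garbage time under the old definition (24 > 21), but it’s still a live game under the new one (24 < 27), so those plays keep counting.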
One more recent change had the most impact, however: I made S&P+ more reactive to conferences as well. It works like the reactive step above: after the ratings are determined, I project previous games based on those ratings, and I track each conference’s average performance versus projection. For the top conference, I found that by the end of the season the ratings were aiming low by two or three points per game per team. For the bottom conference, it was the reverse.
By shifting each team’s rating based on this conference average, and by increasing the weight of that adjustment as the season progresses, S&P+ basically improves its against-the-spread performance by about one percentage point per season and cuts the average absolute error by somewhere between 0.2 and 0.3 points per game. That doesn’t seem like much, but look at the Prediction Tracker results and note how much of a difference 1% and 0.3 points per game can make to a system’s predictive ranking there. It’s pretty big.
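Here’s roughly what that post-hoc conference step could look like in code. This is a sketch under my own assumptions: the linear weight ramp, the lack of a home-field term, and every name here are guesses for illustration, not the actual S&P+ implementation:

```python
from collections import defaultdict

def conference_adjust(ratings, games, conference_of, week, season_weeks=13):
    """Project each past game from the current ratings, average each
    conference's error versus projection, and shift every team by its
    conference's average error, weighted more heavily late in the year."""
    errors = defaultdict(list)
    for home, away, actual_margin in games:        # margin from home side
        projected = ratings[home] - ratings[away]  # ignoring home field
        miss = actual_margin - projected           # + = home beat projection
        errors[conference_of[home]].append(miss)
        errors[conference_of[away]].append(-miss)
    weight = min(week / season_weeks, 1.0)         # assumed linear ramp
    avg = {conf: sum(e) / len(e) for conf, e in errors.items()}
    return {team: r + weight * avg.get(conference_of[team], 0.0)
            for team, r in ratings.items()}
```

If the top conference’s teams beat their projections by three points a game on average, every team in that conference drifts up by (at most) three points, which is exactly the size of the end-of-season miss described above.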
It does, however, mean a fundamental shift in how mid-major teams are judged. Not to spoil the suspense, but look at the difference this adjustment made in some 2018 rankings:
- Fresno State: originally ninth, now 16th
- UCF: originally eighth, now 18th
- Utah State: originally 19th, now 21st
- Appalachian State: originally 11th, now 29th
It’s a pretty harsh adjustment, though it both makes the numbers better and perhaps passes the eye test a bit more. So we’re going with it.
Wait, so you’re including previous years of history in each season’s ratings? How could that possibly be right?
Back when I mentioned at the beginning of 2018 that I’d be using priors in the rankings all year, I had an interesting conversation with some readers on Twitter about what happens at the end of the year. Would I be removing the priors for the year-end ratings so that a) we’d be evaluating teams’ performance based only on what happened in that year, and b) the “recent history” prior wouldn’t then carry influence right into the next year’s projections (since they would include the priors that were included in the previous year’s rankings)?
It was a legitimate question, one to which I don’t think there’s a right answer. To me, it simply comes down to this question: when you’re looking back at previous seasons’ ratings, what are you looking for?
To me, it’s usually to get a sense of who might beat whom, right? I understand the draw of a “this year only” evaluative look, but S&P+ is intended to be a predictive measure, and I’ve decided to include these priors because they make the predictions better. In that sense, removing those priors at the end of the season makes it less of a predictive measure (even though any “predictions” made in 2019 about a 2006 team are obviously theoretical).
(How will this impact the preseason projections moving forward? The weights of each of the primary projection factors — recent performance, recruiting, and returning production — are always based on updated correlations, meaning that it depends on how strongly each factor predicts the next season’s performance. I’m dumping the new S&P+ ratings into the old engine, and, well, we’ll find out. Maybe the recent performance numbers end up with a lower correlation. We’ll see.)
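As a sketch of that weighting idea: if each factor’s weight is set in proportion to how strongly it correlates with next-season performance, the update could look something like this (the factor names, data shapes, and use of a simple absolute Pearson correlation are all my own illustrative assumptions):

```python
def pearson(xs, ys):
    """Plain Pearson correlation, no dependencies."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def projection_weights(history):
    """history maps factor name -> (factor values, next-season ratings),
    paired by team. Each weight is proportional to |correlation|."""
    corrs = {name: abs(pearson(xs, ys)) for name, (xs, ys) in history.items()}
    total = sum(corrs.values())
    return {name: c / total for name, c in corrs.items()}
```

Under this scheme, if the recent-performance numbers end up with a lower correlation after the new ratings are dumped in, their weight in the preseason projections shrinks automatically.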
On with the rankings.
2012 S&P+ rankings
| Team | Record | S&P+ | Rk | Off. S&P+ | Rk | Def. S&P+ | Rk | ST S&P+ | Rk |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| San Jose State | 11-2 | 9.7 | 42 | 35.2 | 30 | 27.3 | 64 | 1.8 | 14 |
| San Diego State | 9-4 | 2.4 | 60 | 29.3 | 55 | 26.1 | 57 | -0.8 | 85 |
| New Mexico State | 1-11 | -33.3 | 123 | 14.5 | 119 | 46.1 | 124 | -1.7 | 113 |
The first amazing Bama team
By the time 2012 rolled around, Alabama had already won two of the last three national titles under Nick Saban. There was already some “Dynasty?” talk. The Tide’s 2011 defense was an outright masterpiece. We were well into the stage in which you felt hopeless against them from the opening kick.
In 2012, however, they found another gear.
Here are the top six teams of the S&P+ era (2005-present), ranked by percentile rating under the new formulas:
- 2018 Bama
- 2014 Bama
- 2008 USC
- 2017 Bama
- 2016 Bama
- 2012 Bama
Now, it’s fun to note that three of the five Alabama teams in this batch didn’t win the national title — the Tide lost to Clemson in 2016 and 2018 and to Ohio State in 2014. (That ‘08 USC team didn’t even get a chance to play for the title.) But if you’re reading this, you probably understand how advanced stats work, just as you probably understand that the best team in a given year frequently doesn’t win the title in a playoff structure. (Note how many 4-seeds have won the College Football Playoff vs. how many 1-seeds have.)
Regardless of title winners, all five of the amazing Bama teams on this list have taken the field in the last seven seasons. The 2009 title winner ranked 29th overall in this sample (the horror), and the 2011 team ranked 17th. Alabama was scoring rings while still finding itself. In 2012, Alabama found itself. Terrifying.
Texas A&M beat this amazing Bama team
Here are the end-of-regular-season BCS rankings from 2012:
- Notre Dame (12-0)
- Alabama (12-1)
- Florida (11-1)
- Oregon (11-1)
- Kansas State (11-1)
- Stanford (11-2)
- Georgia (11-2)
- LSU (10-2)
- Texas A&M (10-2)
- South Carolina (10-2)
First of all, that’s SIX SEC teams with 10 or more wins. Not even sure how that’s possible. (I mean, clearly it was possible...)
Second of all, this list is proof that there’s pretty much no way that the hottest team at the end of the regular season would have gotten into a hypothetical College Football Playoff unless said hypothetical bracket included eight teams.
Texas A&M lost to Florida (fifth in S&P+) in its first game of the season, well before we had any idea what either the Aggies or magical quarterback Johnny Manziel were capable of. From that point forward, they would lose only to LSU (10th in S&P+) by five points while beating Alabama (first) and Ole Miss (26th) and slaughtering new conference foes Arkansas (35th), Mississippi State (46th), Auburn (55th), and Missouri (57th and not a new conference foe) by an average of 55-18. In the Cotton Bowl, they ran circles around a 10-win Oklahoma that finished fourth in S&P+.
After ranking 11th but finishing just 7-6 in 2011 (Mike Sherman’s final season and A&M’s last year in the Big 12), the Aggies moved to the SEC, hired Kevin Sumlin, named Manziel the starting quarterback, and soared all the way to 11-2 and third in S&P+. Not a bad debut.
Actually, too good a debut. They would never again reach those heights. They got Manziel back in 2013, but the defense vanished; they hovered around 15th in 2013-14, slipped into the 20s in 2015-16, then fell to 36th in 2017 before Sumlin was let go. He never had a truly bad year, but merely fielding solid teams means less when you’ve shown you were capable of something far greater right out of the gate.
No. 9 vs. No. 15 for the title
There’s been a lot of talk through the years about how Ohio State handled its turn-of-the-decade NCAA punishment all wrong. If I have the story right, it goes like this:
- Instead of self-imposing a bowl ban for the 2011 season — in which interim coach Luke Fickell led the Buckeyes to a 6-6 campaign after Jim Tressel’s offseason ouster — the Buckeyes let the NCAA dictate the terms. They played in, and lost, a pointless Gator Bowl to Florida to finish 2011.
- They were then banned from the 2012 postseason. This proved just a wee bit costly when, in Urban Meyer’s first year in charge, they went 12-0.
If Ohio State hadn’t been banned from the postseason, the 12-0 Buckeyes could have moved to 13-0 with a Big Ten title game win over Nebraska, then played fellow unbeaten Notre Dame in the BCS title game. S&P+ suggests they would have been favored by four or five points in said contest. Meyer could have claimed a national title in his very first season, then doubled up two years later. (Or, conversely, with just a slight upset we could have ended up with the words “national champion Brian Kelly.”)
Hypothetically, this is quite true. But I’m glad it didn’t happen, and for two reasons:
First, if Ohio State had been eligible, then we would have been deprived of watching Wisconsin’s 70-31 humiliation of Nebraska in the conference championship. (Wisconsin only made the title game because both Ohio State and Penn State — both of whom beat the Badgers in November — were banned.)
Granted, Ohio State could have done something similar to the Huskers, but it’s something you occasionally expect from a team like the Buckeyes. Watching Wisconsin do it, with three different running backs topping 100 yards (and two topping 200), was uniquely thrilling, especially considering how mediocre the Badgers had been for large swaths of 2012.
Second, said OSU-ND title game would have pitted the ninth- and 15th-best teams of the season, playing for the national title. Gross. Both were unbeaten, sure, but according to second-order win totals, both should have been closer to 10-2. Ohio State won six games by one possession (while playing a schedule that ranked 53rd in SOS), and Notre Dame won five.
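Second-order win totals just sum each game’s postgame win expectancy. A toy example (the probabilities are invented for illustration, not those teams’ actual numbers) shows how an unbeaten team full of coin-flip-ish wins can land near 10-2:

```python
def second_order_wins(postgame_win_expectancies):
    """Sum of single-game win expectancies: roughly, 'how many games
    would a team with these per-game stat lines typically have won?'"""
    return sum(postgame_win_expectancies)

# A hypothetical 12-0 team: six comfortable wins plus six one-possession
# escapes in which the stats said the game was close to a coin flip.
season = [0.95] * 6 + [0.60] * 6
```

That season works out to about 9.3 second-order wins against 12 actual wins, which is how an unbeaten record can “deserve” to be closer to 10-2.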
If No. 9 and No. 15 play at the end of a 64-team playoff or whatever (I guess that’s the equivalent of a 3-seed playing a 4-seed), then so be it. But in the “we can only pick two teams to play for the title” BCS, this would have just been gross. Gross, gross, gross.
Bama was the best team in the country, and it wasn’t particularly close, and the Tide got to play for and win the title because Ohio State screwed up its self-imposed sanctions. We’ll call that justice of some sort.
In each of Jimbo Fisher’s first three seasons in Tallahassee after replacing the retired Bobby Bowden, Florida State improved rather dramatically. From 37th in Bowden’s final season, the Noles improved to 17th in 2010 (an improvement of 10.9 adjusted points per game), then 10th in 2011 (up 3.1 points), then second in 2012 (up 8.0).
A fluky upset loss to a mediocre NC State threw us off the scent a bit after a dominant start to the season, and a late-game fade against Florida showed the Noles still had another step or two of development to undertake. They finished 10th in the AP poll after beating Georgia Tech for the ACC title, then pulling away from NIU late in the Orange Bowl. A No. 10 finish suggested Fisher was doing well. The underlying numbers, however, suggested they were close to doing very well.
The next season, they would add a couple of new pieces on the offensive side — a certain redshirt freshman quarterback, for one — and improve dramatically once more. Just as S&P+ told you that A&M was close to a breakthrough in 2011, it told us that FSU was even closer the following year.
Power grabs galore
Here are your average S&P+ ratings by conference:
- SEC (+15.7, up 2.4 points per game from 2011)
- Big 12 (+13.5, up 1.1)
- Pac-12 (+9.5, up 2.0)
- Big Ten (+8.1, down 0.9)
- ACC (+5.5, up 1.6)
- Big East (+2.2, down 3.9)
- Mountain West (-5.7, down 0.4)
- Conference USA (-8.1, down 3.1)
- Sun Belt (-9.7, up 6.4)
- MAC (-10.6, down 2.9)
- WAC (-10.9, down 3.2)
In 2011, the conferences that supposedly won the first round of conference realignment didn’t really win on the field. In 2012, there were pretty clear surges and death throes.
The SEC added Texas A&M just in time for the Aggies to peak, giving the league an even higher ceiling. (The other addition, Mizzou, would help out the next season. In 2012 the Tigers were busy dealing with the hardest SOS in the country and an injured QB.)
The Pac-12 took a step forward in its second year with 12 teams. The ACC, which seemingly crippled the Big East (with later help from the WVU-stealing Big 12) by stealing Pitt and Syracuse, rose, too.
Meanwhile, the Big East replaced WVU with Temple and plummeted. The MWC held steady by adding Boise State and company to replace TCU, Utah, and BYU, but that crippled the WAC. And conferences that had moves they were trying to make but couldn’t execute, like the Big East and C-USA, saw their stock plummet.
(No conference realignment moves explain the Sun Belt’s sudden surge, by the way. That was just Arkansas State and UL-Lafayette taking enormous steps forward.)
Another type of separation was occurring, too — among what would become the Power Five conferences, four of five improved on a per-team basis. Five of the other six conferences regressed.