Good news, bad news.
Bad news first: another damn unlucky week. S&P+ went 3-10-1 in games decided within four points of the spread, shades of the dreadful stretch between Weeks 3-5. It's been an incredibly unlucky year in that regard, and even during Week 6's massive surge (34-18 ATS), the luck only slightly flipped over to good.
Good news: S&P+ still went 26-25-1 because it's still dialed the hell in. Absolute error sank below 12, improving for the fourth consecutive week: 14.2 in Week 3 (bad), 13.2 in Week 4 (decent), 12.9 in Week 5 (solid), 12.4 in Week 6 (good), 11.9 in Week 7 (awesome).
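If I'm reading the metric right, absolute error is just the average gap between projected and final scoring margins. A minimal sketch, with made-up numbers rather than real S&P+ projections:

```python
# Minimal sketch of the absolute-error metric: the average gap between
# projected and actual scoring margins. The numbers below are made up
# for illustration, not real S&P+ projections.
def mean_absolute_error(projected, actual):
    """Average of |projection - result| across a slate of games."""
    return sum(abs(p - a) for p, a in zip(projected, actual)) / len(projected)

projected_margins = [7.5, -3.0, 14.0]   # hypothetical projected margins
actual_margins = [10.0, -10.0, 20.0]    # hypothetical final margins
print(round(mean_absolute_error(projected_margins, actual_margins), 1))  # 5.2
```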
This actually prompted a decision on my part: I typically have preseason projections completely phased out after seven weeks, but because S&P+ has been so dialed in, I elected to leave about 5% in. That won’t make a significant difference, but there’s no need to rush something when you’ve got things rolling pretty well.
So next week will be based 100% on 2016 results.
On to more important topics: Volatility! Again!
I’m guessing it’s a bit too early for these volatility figures to be reliable, and I haven’t finished setting up a simulation for 2015 yet, but like I said: we’re still in the experimental stage. So we’ll press forward.
If we have an average (the S&P+ rating) and a standard deviation, we are in a position to simulate. So what happens if we take every game on the Week 6 docket and simulate it 10,000 times? In theory, we can come up with different average and median scoring margins. We can also compare these margins to the Vegas spread and take a peek at how frequently Team A covers against Team B in these simulations.
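Mechanically, that simulation boils down to drawing margins from a normal distribution and counting covers. A sketch of the idea, with a hypothetical function name, ratings, and volatility figure (the real model's inputs are its own):

```python
import random

def cover_probability(rating_a, rating_b, volatility, spread, sims=10_000, seed=1):
    """Estimate how often Team A covers `spread` (negative = Team A favored).

    Each simulation draws a final margin from a normal distribution
    centered on the rating gap, with `volatility` as the standard
    deviation. Names and numbers here are a sketch, not the model itself.
    """
    rng = random.Random(seed)
    mean_margin = rating_a - rating_b
    covers = sum(
        1 for _ in range(sims) if rng.gauss(mean_margin, volatility) + spread > 0
    )
    return covers / sims

# Hypothetical matchup: Team A rated 10 points better, laying 7,
# with a 13-point week-to-week standard deviation.
print(cover_probability(10.0, 0.0, 13.0, -7.0))
```

Sorting a full slate by that number is all the "likelihood of covering" list requires.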
See where I’m going with this?
In Week 6, teams given at least a 56% chance of covering (per Friday’s volatility experiment) went 19-1 against the spread.
That’s amazing. And completely, ridiculously unsustainable. But if anything can show the potential in this idea, it’s that.
So we’re once again going to press forward with the idea, even as I try to make the math behind the scenes more sound.
I’m at an interesting place here. I created a way to simulate each game thousands of times, and based on team ratings and week-to-week volatility, I was able to look at how many times each team covered. I don’t think the actual percentages match up at all (meaning, when it says Team A will cover 58% of the time, I don’t think it’s actually 58%) because I don’t yet know the true relationship between volatility and week-to-week predictions. That’ll hopefully come in the offseason (or in a few weeks, if I get the time).
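When I do get the time to check whether a stated 58% really behaves like 58%, the standard approach is a calibration table: bucket the predictions and compare each bucket's average claim to what actually happened. A sketch with hypothetical data:

```python
from collections import defaultdict

def calibration_table(predictions):
    """Bucket (predicted cover probability, covered?) pairs into 5% bins
    and report each bin's sample size and actual cover rate. The input
    format and the data below are hypothetical.
    """
    buckets = defaultdict(list)
    for prob, covered in predictions:
        buckets[int(prob * 20) / 20].append(covered)  # floor to 0.05 steps
    return {
        bin_start: (len(results), sum(results) / len(results))
        for bin_start, results in sorted(buckets.items())
    }

# Hypothetical week: (predicted cover probability, 1 if covered else 0)
sample = [(0.58, 1), (0.57, 0), (0.61, 1), (0.62, 1), (0.56, 1)]
print(calibration_table(sample))
# The 55-60% bin covered 2 of 3; the 60-65% bin covered 2 of 2.
```

Run over a full season, a well-calibrated model's actual cover rate in each bin should sit near the bin's stated probability.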
That said ... there’s something here. Sorting games in order of Team A’s likelihood of covering certainly seems to give you some pretty good picks.
- Top 5 “most likely covers” of the week: 9-0-1 (95%) — 5-0 in Week 6, 4-0-1 in Week 7
- Games with at least a “60% chance of covering” (whatever that actually means): 9-2-1 (79%) — 4-0 in Week 6, 5-2-1 in Week 7
- Games between 56-59%: 19-8 (70%) — 15-1 in Week 6, 4-7 in Week 7. (Three of the losses in Week 7 were in that “within 4 points of the line” batch that went sour.)
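For what it's worth, the percentages in those bullets line up if a push counts as half a win. A quick sketch (the function name is mine):

```python
def ats_win_pct(wins, losses, pushes=0):
    """ATS win percentage with pushes counted as half a win,
    which matches the percentages quoted above."""
    return (wins + 0.5 * pushes) / (wins + losses + pushes)

print(round(ats_win_pct(9, 0, 1) * 100))   # top-5 picks: 95
print(round(ats_win_pct(9, 2, 1) * 100))   # 60%-or-better group: 79
print(round(ats_win_pct(19, 8) * 100))     # 56-59% group: 70
```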
In each of the last two weeks, the top five games on the “likelihood of covering” list either covered or tied. We’re still dealing with tiny sample sizes, but the results have been so spectacularly positive that we’re going to keep going.
But first, I need names. I think handicappers have already taken every possible football-sounding name for picks — five-star picks!, et al — but I’ll try to have some fun.
Here are your top five picks of the week in terms of likelihood of covering. (I’m not listing the percentages themselves anymore because I don’t think they’re accurate.)
- Massachusetts (+20) at South Carolina
- Minnesota (-17) vs. Rutgers
- Penn State (+19.5) vs. Ohio State
- Troy (-8) at South Alabama
- Texas A&M (+19) at Alabama
Here are three more that hit that 60% mark or higher (whatever 60% actually signifies):
- Colorado (+2.5) at Stanford
- UL-Monroe (+17.5) at New Mexico
- Eastern Michigan (+23.5) at Western Michigan
And here are the picks that fall in the 56-59% range (though, again, they’re probably not actually between 56-59%).
- Maryland (+2) vs. Michigan State
- Oregon State (+37) at Washington
- Old Dominion (+13.5) at Western Kentucky
- Fresno State (+16.5) at Utah State
- San Diego State (-23.5) vs. San Jose State
- Indiana (+2) at Northwestern
- Miami-Ohio (+5) at Bowling Green
- Texas (+2.5) at Kansas State
- North Texas (+18.5) at Army
- Temple (+7) vs. USF
- Wyoming (-4.5) at Nevada
- Colorado State (+2.5) at UNLV
- Auburn (-10) vs. Arkansas
- Kansas (+23.5) vs. Oklahoma State
- Michigan (-36) vs. Illinois
- Tulane (+10.5) at Tulsa
- Hawaii (+16.5) at Air Force
- Buffalo (+21.5) at Northern Illinois
- NC State (+19.5) at Louisville
This is a really intriguing concept, and I’m going to keep going down this road even as I work to make the math more sound.
Anyway, here’s the updated Google doc with all picks. (The FOURNETTE LOCKS and whatnot are not yet included — it’s still a pretty loose idea.) And, as always, here’s a completely useless embed, just for fun: