If I am going to make a wager, it is for one (and only one) of two reasons:
- I believe that wager has positive expectation, or
- My addictive personality, burdened by the monotony of daily routine, compels me to take ever increasing risk in an attempt to breathe life into the fragile reward circuitry of my depraved brain.
Since our world is awash with avenues to satiate the latter, our focus here at First and Thirty is expectation, combining machine-driven statistical models and probabilistic mathematics with real-world human reasoning to arrive at the wagers you need to place on game day to take your bookie’s money.
Our goal is to win over the long haul. And trust me, that haul can be long, especially in the highly volatile world of football. (American football. Not that European game that ends in 0-0 ties).
Accordingly, one of the most important claims a bettor can make is about the backtested performance of their strategy.
Unfortunately, backtesting is rife with dead ends and traps. Put enough people in a room flipping coins and you’ll eventually find someone who seems able to flip heads every single time. That doesn’t make them an amazing coinflipper; it makes them a luckbox donkey.
The same can happen with spread wagers. If I make 100 spread betting models, or pick 100 handicapping “experts,” I’m sooner or later bound to find someone who must be a sports betting genius.
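This multiple-comparisons trap is easy to demonstrate with a quick simulation (a sketch, not anything from the actual First and Thirty pipeline; the model count, game count, and seed are arbitrary): give 100 "models" zero real edge and grade each on a season's worth of coin-flip bets, and several will still appear to clear the break-even rate purely by luck.

```python
import random

random.seed(7)

N_MODELS = 100       # independent "models" with no real predictive edge
N_GAMES = 256        # spread bets graded per model
BREAK_EVEN = 0.5238  # win rate needed to beat standard -110 juice

best = 0.0
winners = 0
for _ in range(N_MODELS):
    # Each pick is a fair coin flip: no skill whatsoever.
    wins = sum(random.random() < 0.5 for _ in range(N_GAMES))
    rate = wins / N_GAMES
    best = max(best, rate)
    if rate > BREAK_EVEN:
        winners += 1

print(f"best win rate among {N_MODELS} skill-free models: {best:.3f}")
print(f"models that appear to beat the book: {winners}")
```

With these numbers, roughly a fifth of the skill-free models typically finish above 52.38%, and the single best one looks like a genius. That is why the headline win rate of whichever model survived the search tells you almost nothing on its own.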
This is a huge problem in statistical modeling: it’s exceptionally easy to “overfit” a model by tweaking its knobs and dials until it gives you the “best” output one could hope for.
To get around this “overfitting” and get an accurate assessment of our expected win rate, we hold out a chunk of our data and ONLY use that chunk to evaluate our FINAL performance expectation (the data is stratified so the holdout doesn’t consist solely of, say, games from the last X seasons). In dialing in the model for 2019, 20% of the data was held back, and the remaining 80% was used to train and tune the model.
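A stratified holdout like the one described can be sketched as follows. This is a toy illustration, not the real feature pipeline: the `games` records and the `stratified_holdout` helper are hypothetical, and here the stratum is simply the season, so every season contributes proportionally to both the training and test sets.

```python
import random
from collections import defaultdict

random.seed(42)

# Hypothetical game records: (game_id, season). Real rows carry many features.
games = [(i, season) for season in range(2010, 2019) for i in range(100)]

def stratified_holdout(rows, key_index=1, test_frac=0.20, rng=random):
    """Split rows ~80/20 while preserving each stratum's share in both halves."""
    by_key = defaultdict(list)
    for row in rows:
        by_key[row[key_index]].append(row)
    train, test = [], []
    for _, bucket in sorted(by_key.items()):
        rng.shuffle(bucket)                 # randomize within the stratum
        cut = int(len(bucket) * test_frac)  # 20% of each stratum to the holdout
        test.extend(bucket[:cut])
        train.extend(bucket[cut:])
    return train, test

train, test = stratified_holdout(games)
print(len(train), len(test))
```

The key discipline is that the test chunk is scored exactly once, at the end; the moment you start re-tuning against it, it silently becomes training data and the overfitting problem returns.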
Across all 500+ games that were held back for testing, the model produced a win rate of approximately 52.75%, exceeding the 52.38% required to beat the book. We believe that, through careful game selection and subjective accounting for intangibles, we hold a meaningful edge over Las Vegas.
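For readers wondering where 52.38% comes from: it is the break-even win rate at standard -110 pricing, where you risk $110 to win $100. A quick sketch of the arithmetic (the function name is mine, and the -110 assumption is inferred from the 52.38% figure in the post):

```python
def break_even_rate(american_odds):
    """Win probability needed for zero expected value at the given American odds."""
    if american_odds < 0:
        risk, win = -american_odds, 100  # e.g. -110: risk 110 to win 100
    else:
        risk, win = 100, american_odds   # e.g. +120: risk 100 to win 120
    return risk / (risk + win)

required = break_even_rate(-110)         # 110 / 210 = 0.5238...
print(f"break-even at -110: {required:.4f}")

# Edge implied by the holdout result: 52.75% winners at -110.
p = 0.5275
ev_per_dollar_risked = (p * 100 - (1 - p) * 110) / 110
print(f"EV per dollar risked: {ev_per_dollar_risked:.4f}")
```

The implied edge is thin, well under a cent per dollar risked, which is exactly why the long haul matters: a 0.37-point edge over break-even only shows up reliably across hundreds of wagers.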