Backtesting Models and the Desire to Cheat or KNUTSON Y U SO DUM?!

I have been backtesting some models recently to try to achieve a significant N so that I can either move forward with confidence on how the models perform, or scrap them/adjust them and re-run the tests. Unfortunately, I am not a programmer. When I switched my major in college from Microbiology, I basically had two choices. I could change to computer science or politics + economics. I chose the latter. Then when I dropped out of my Ph.D. program in International Political Economics I, uh…

Well, I got a job in the computer industry.

KNUTSON Y U SO DUM?!

Once upon a time I was reasonably proficient with SQL, which helped me do DBA work (my real love is data), but I never managed to turn myself into a script kiddie with Python, etc., like I probably should have. This is unfortunate, because those skills would be incredibly useful now. I don’t have them, so brute force is my only option.

Backtesting football seasons – by hand – is a process. Each one takes me about three hours of tedium, half of which is a math test. Learn to code early, kids, then you won’t have this problem.

Anyway, as part of this testing, I found myself wanting the model to win. This is a problem you won’t encounter when you test strictly via computer, because the computer does not give a shit. You give it the data, it does the math, and then returns the math to you where you can apply caring however you want. However, as a human, I was secretly trying to insert bias to help the model perform better each time I did the comparisons. It’s my model, I want it to win. Now don’t get me wrong – I have other models, some of which are competing against this model to see which is more efficient. For right now, despite my best efforts to convince it otherwise, my subconscious wants this model to win.

This is a monumentally stupid thing to do.

Logically, I know this. But it kept happening!

Look brain… this is testing. Testing is important. What we (we’re in this together, buddy) really want to get from this is to learn, as accurately as possible, how the model will perform in the real world given the parameters of application. Bending these parameters to help the model win more in certain situations while ignoring them in others is just as dumb as you can possibly be.

And yet… I found myself waffling about it.

*Glances at outcome that would yield a win* “That’s close enough that I would make that bet.”

*Glances at a different outcome that would cause a loss* “Oo, no, that falls just outside – I definitely would not make that bet.”

I have strict parameters. There is no grey area. Waffling ist strictly verboten!

Most of the time, I am not this dumb. I swear. But here I am, knowing that this is bad and knowing that it will probably cause poor expectations and potentially lost money in the long term, and I’m still fighting with myself, introducing biases that are counter-productive.

Why?

Probably because I’m hyper-competitive and want the model to win the mini-game as well, but who the fuck knows? It just needed to stop.

The solution to the problem was as incredibly simple as the problem itself was incredibly stupid. I hid the columns that contain the results, then did the wager testing, then revealed the results and did the grading. It takes maybe an extra ten minutes of my time, but, critically, it avoids the bias.
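
If I ever do learn to script this properly, the same trick translates directly: keep the outcome columns out of scope until every pick is locked in. Here is a minimal sketch of that blind-grading workflow in Python with pandas. Everything in it is a placeholder, not my actual model — the file name games.csv, the column names (model_line, market_line, actual_margin), and the "bet when the model and the market disagree by 3+ points" rule are all hypothetical, just there to show the hide-then-reveal mechanics.

```python
import pandas as pd

# Hypothetical outcome columns that must stay hidden during pick-making.
RESULT_COLS = ["actual_margin", "final_total"]

# Step 1: load the season, then immediately split off the results so the
# picking step literally cannot see the future.
games = pd.read_csv("games.csv")           # hypothetical data file
results = games[RESULT_COLS].copy()        # set the future aside
blind = games.drop(columns=RESULT_COLS)    # what the "model" is allowed to see

# Step 2: make every wager decision from the blind view only.
# Placeholder rule: bet when model and market spreads differ by 3+ points;
# spreads are from the home team's perspective (negative = home favored).
blind["bet"] = (blind["model_line"] - blind["market_line"]).abs() >= 3
blind["pick_home"] = blind["model_line"] < blind["market_line"]

# Step 3: only now reveal the results and grade the locked-in picks.
# Home covers when margin + market line > 0 (pushes counted as losses here).
graded = blind.join(results)
graded["covered_home"] = graded["actual_margin"] + graded["market_line"] > 0
graded["win"] = graded["bet"] & (graded["pick_home"] == graded["covered_home"])

# Hit rate on the bets actually made.
print(graded.loc[graded["bet"], "win"].mean())
```

The bet rule is made up; the point is that the results frame is never joined back in until after the pick columns are frozen, which is exactly what hiding the spreadsheet columns does by hand.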

The lesson for today is this: Be careful doing your backtesting when you can peek into the future. The temptation to cheat is overwhelming.
