||[Apr. 16th, 2019|10:26 pm]
I spent last week at an in-house software engineering conference, and I learned a lot of interesting stuff about machine learning. I think I know how you would use ML techniques to do bias correction, which is pretty neat. (You'd use the same technique they use to teach a system how to make pictures of horses look like pictures of zebras.) The question is, what happens when you start to extrapolate a little bit? If you start giving it pictures of ponies and mules instead of horses, does it still behave sensibly? Or does it go completely off the rails?
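(For the curious: the horses-to-zebras trick is unpaired image-to-image translation, and its key ingredient is a cycle-consistency loss — map X to Y and back again, and you should recover what you started with. Here's a hypothetical toy sketch of that idea recast as 1-D bias correction, with fixed affine maps standing in for the learned generators; a real method would also train adversarial discriminators, and none of these variable names come from anything I actually built.)

```python
import numpy as np

# Toy illustration of the cycle-consistency idea behind CycleGAN-style
# unpaired translation, recast as 1-D bias correction:
#   G maps "model" values into "observation" space,
#   F maps back from "observation" space to "model" space.
# Here G and F are fixed affine maps chosen to be exact inverses;
# a real system would learn them, plus adversarial discriminators.

rng = np.random.default_rng(0)

model_vals = rng.normal(loc=2.0, scale=1.5, size=1000)  # biased "model" data
obs_vals = rng.normal(loc=0.0, scale=1.0, size=1000)    # "observed" data

def G(x):
    # model -> obs space: standardise, then rescale to the obs statistics
    return (x - model_vals.mean()) / model_vals.std() * obs_vals.std() + obs_vals.mean()

def F(y):
    # obs -> model space: the inverse affine map
    return (y - obs_vals.mean()) / obs_vals.std() * model_vals.std() + model_vals.mean()

# Cycle-consistency loss: going there and back should recover the input.
cycle_loss = (np.mean(np.abs(F(G(model_vals)) - model_vals))
              + np.mean(np.abs(G(F(obs_vals)) - obs_vals)))
print(round(cycle_loss, 6))  # exact inverses, so this is 0.0 up to float error
```

The extrapolation worry maps directly onto this picture: the learned mappings are only constrained where the training data lives, so feeding in ponies and mules (or out-of-sample model values) asks the generator to behave in a region where nothing pinned it down.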
At the same time, I managed to push my entire dataset through the multivariate bias-correction machinery and get it published, so that yesterday I could go into the reporting app for one of our grants and tick the "complete" box on that task by the due date. So that made me feel pretty accomplished. And now I can stop thinking about it and get my brain back!
I was able to pull it off because my interest in the conference talks was a very mixed bag, so I had plenty of time when I could focus on the bias correction work. Plus, I already had the machinery built, and a number of the steps take a while, so there was a lot of "okay, start step 3 running, then wait 20 minutes to see if it worked." I also spent a decent chunk of time working on it last weekend.
Now I don't have any looming deadlines and I hardly know what to do with myself. Time to update the to-do list, I guess.