So I spent all my time at that meeting working on the research for AGU, which is what I'm here for now. I'm still working on my presentation, but after many long hours, I finally have my code for the multivariate bias correction working. Yes! Finally!
Travels today were smooth and boring. I'm staying at a (brand-new) Residence Inn that's right across the street from the convention center, which itself sits athwart the Metro station, so that's about as convenient as it gets. On checking in, I realized that it's actually a tiny little apartment. It has a living room and a little kitchen! Which is actually stocked with implements of cooking and usable for the preparation of comestibles!
So I wandered over to the Safeway a few blocks east of here and picked up groceries for several meals. I figure it'll be nice to just run back to the hotel for lunch and not have to fight the crowds. (It's tricky to buy groceries when you want about a week's worth of food but nothing left over. And are trying to stick to a low-carb diet.)
I also got dinner at Sweetgreen, which is a great fast-casual chain that does big hearty salads. Thanks to Ian for pointing it out to me on the previous trip!
P.S.: Here's the description of how the multivariate bias correction works that I came up with while pondering whether I could condense it all into a single sentence or whether it would implode under the density of the jargon:
Using an augmented moving window, we detrend the data with a broken-stick regression, apply a rank transformation to extract the copula, bias-correct the copula using repeated quantile mapping of random orthogonal rotations, transform the probabilities back to variable values via a kernel density estimate of the observational cumulative distribution function integrated using the trapezoid rule, and finally restore the trend with an offset equal to the mean bias during the overlap between observations and the historical simulation; for precipitation, we also work in log space, replace sub-trace values with random uniform noise, and apply a final distribution mapping and mean scaling step at the end.
Isn't that delightfully opaque?
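For anyone who wants the opacity dialed down a notch: the heart of the method, the "repeated quantile mapping of random orthogonal rotations," can be sketched in a few lines. This is just an illustrative toy, not my actual code; the function names are made up, and it skips the detrending, the kernel density estimate, and all the precipitation special-casing:

```python
import numpy as np

def quantile_map(x, obs):
    # Map each value of x onto the obs value at the same empirical quantile.
    ranks = np.argsort(np.argsort(x))
    q = (ranks + 0.5) / len(x)
    return np.quantile(obs, q)

def rotated_quantile_mapping(model, obs, n_iter=50, seed=None):
    # Repeatedly rotate both datasets by a random orthogonal matrix,
    # quantile-map each rotated coordinate of the model data onto the
    # observations, and rotate back. Iterating nudges the model data's
    # joint distribution (its copula) toward the observed one.
    rng = np.random.default_rng(seed)
    x = model.copy()
    d = x.shape[1]
    for _ in range(n_iter):
        # Random orthogonal matrix via QR decomposition of Gaussian noise.
        rot, _ = np.linalg.qr(rng.standard_normal((d, d)))
        x_rot, obs_rot = x @ rot, obs @ rot
        x_rot = np.column_stack(
            [quantile_map(x_rot[:, j], obs_rot[:, j]) for j in range(d)]
        )
        x = x_rot @ rot.T  # rotate back
    return x
```

The trick is that quantile mapping alone only fixes each variable's marginal distribution; rotating first means each pass also corrects a random linear combination of the variables, so the dependence structure gets fixed too.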