Day 16 [LOC: 9/9]
More data origami. All the heavy lifting is done by the one multiply-nested function call at the end; the rest is just setting up the data structures. I feel like there ought to be a function out there somewhere that would read a file full of arbitrary sets of key-value pairs in a single call, reducing the problem to two lines of code, but I wasn't able to find it. My guess is that it's because the input data is basically a simple database, and simple little databases like this are rare enough that nobody bothers writing code tuned to them; you always jump straight to developing the code you need to deal with a for-real database.
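In lieu of that missing function, here's a minimal sketch of the two-step read I was wishing for. The line format is the puzzle's; the sample lines are made up stand-ins for the input file, and this is not my actual solution.

```r
# Parse lines like "Sue 1: children: 1, cars: 8, vandalism: 7"
# into a list of named integer vectors (sample lines inlined in
# place of a file).
lines <- c("Sue 1: children: 1, cars: 8, vandalism: 7",
           "Sue 2: akitas: 10, perfumes: 10, children: 5")

parse_sue <- function(line) {
  body  <- sub("^Sue \\d+: ", "", line)               # drop the "Sue N:" prefix
  pairs <- strsplit(strsplit(body, ", ")[[1]], ": ")  # split into key-value pairs
  setNames(as.integer(sapply(pairs, `[`, 2)),         # values...
           sapply(pairs, `[`, 1))                     # ...named by their keys
}

sues <- lapply(lines, parse_sue)
sues[[1]][["cars"]]  # 8
```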
Day 17 [LOC: 6/3]
Oh, hey, combinatorics. R is for statisticians and statisticians care about probability and probability frequently involves combinatorics, so there are lots of functions for doing combinations and permutations and compositions and enumerations and I don't even know what-all. In this case, combn() is the magic word.
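For the curious, here is roughly what that looks like on the puzzle's own worked example (toy container sizes and a toy target; the real input uses 150 liters):

```r
# Count the subsets of container sizes that sum to a target,
# using combn() over every subset size k.
containers <- c(20, 15, 10, 5, 5)
target <- 25
hits <- sum(sapply(seq_along(containers), function(k)
  sum(combn(containers, k, FUN = sum) == target)))
hits  # 4, matching the puzzle's worked example
```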
Day 18 [LOC: 19/1]
Oh, hey, it's Conway's Game of Life. Now, there are R packages specifically for doing the Game of Life and other cellular automata, but in hunting for them I found this article, which points out a nifty trick: to count up how many live neighbors each cell has, you can just add up a bunch of copies of the array, each shifted in a different direction. Not only is this a lot easier to follow in code; in R, the matrix operations are also a lot faster than iterating over all the cells. Hooray for vectorization!
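A sketch of the trick, assuming the standard Life rules (a live cell survives with 2 or 3 neighbors, a dead cell comes alive with exactly 3): pad the grid with zeros, then sum the eight shifted sub-matrices.

```r
neighbor_counts <- function(g) {
  n <- nrow(g); m <- ncol(g)
  p <- matrix(0L, n + 2, m + 2)          # zero border so shifts stay in bounds
  p[2:(n + 1), 2:(m + 1)] <- g
  counts <- matrix(0L, n, m)
  for (dr in -1:1) for (dc in -1:1) {    # eight shifted copies, summed
    if (dr == 0 && dc == 0) next
    counts <- counts + p[(2:(n + 1)) + dr, (2:(m + 1)) + dc]
  }
  counts
}

step <- function(g) {
  nb <- neighbor_counts(g)
  (nb == 3 | (g == 1 & nb == 2)) * 1L    # born on 3, survive on 2 or 3
}

# A 2x2 block is a still life: stepping it changes nothing.
g <- matrix(0L, 4, 4); g[2:3, 2:3] <- 1L
all(step(g) == g)  # TRUE
```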
Day 19 [LOC: 17/0]
Okay, this one is kinda crazy. We have a string and a bunch of rules for replacing a component of the string with something else. The first part is simple: you just run a search-and-replace over the string using regexps and count how many distinct results you get. But then in the second half, it asks: how many replacements does it take to get from a starting seed to the final string?
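Part 1 in sketch form, using the toy rules from the puzzle's example (my real solution differs, but the shape is the same: apply each rule at each match position and count the distinct results):

```r
rules <- data.frame(from = c("H", "H", "O"),
                    to   = c("HO", "OH", "HH"),
                    stringsAsFactors = FALSE)
molecule <- "HOH"

results <- character(0)
for (i in seq_len(nrow(rules))) {
  hits <- gregexpr(rules$from[i], molecule, fixed = TRUE)[[1]]
  if (hits[1] == -1) next                # this rule never matches
  for (pos in hits) {                    # one replacement per match position
    results <- c(results,
                 paste0(substr(molecule, 1, pos - 1),
                        rules$to[i],
                        substr(molecule, pos + nchar(rules$from[i]),
                               nchar(molecule))))
  }
}
length(unique(results))  # 4 distinct molecules, per the puzzle example
```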
I was able to solve the second part of the problem in 0 lines of code. That's not a typo. ZERO. I didn't have to write any code at all. Once I figured out how to do it, I was able to solve it by hand. (For completeness, I also wrote it out in code before uploading to github; it's six extra lines.)
The key is realizing that you don't have to actually find a path from start to end. The puzzle only asks how LONG that path is.
Obviously, if you did want to find a path, you'd need to work backwards to keep things tractable: start with the solution and do reverse substitutions until you hit the beginning. You'd be able to merge equivalent branches as you went along, and hopefully a fair number of branches would dead-end in fairly short order. I set things up to try that, and it ran for a long time without getting very far, so I concluded that I needed to prioritize the different replacement rules in some way. And as I was organizing them, I noticed a pattern.
There are a bunch of rules that create terminal symbols. These are symbols that have no replacement, so once they have been created, you're stuck with them. There are three terminal symbols: Y, Ar, and Rn. Ar and Rn are always created in matched pairs, and they are always accompanied by either 2 other symbols, 3 symbols and 1 Y, or 4 symbols and 2 Y's. So for a given number of Y's and Ar/Rn pairs, it doesn't matter which rules were used to generate them, it always adds up to the same number of replacements and the same number of symbols added to the string. AND, most importantly, all the other, non-terminal rules replace one symbol with two symbols!
So all you have to do is count the total number of symbols, the number of Y's, and the number of Ar/Rn pairs and do a little arithmetic to find out how many replacements it took to get to the target.
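That arithmetic, written out as code (a sketch: the token regex assumes element names are an uppercase letter plus an optional lowercase one, which holds for this puzzle's input):

```r
count_steps <- function(molecule) {
  toks <- regmatches(molecule, gregexpr("[A-Z][a-z]?", molecule))[[1]]
  # Every ordinary rule grows the string by 1 symbol per step, starting
  # from a single seed symbol; the Rn/Ar pairs and the Y's come along
  # "for free", so each one reduces the step count relative to the
  # total symbol count.
  length(toks) - sum(toks == "Rn") - sum(toks == "Ar") -
    2 * sum(toks == "Y") - 1
}

count_steps("HOHOHO")    # 6 plain tokens -> 5 steps
count_steps("CRnFYFAr")  # 6 tokens, one Rn/Ar pair, one Y -> 1 step
```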
Day 20 [LOC: 17/13]
I believe the numbers this puzzle is looking for are highly composite numbers, although I'm not sure that they're precisely that (since the puzzle rewards a large divisor sum rather than a large divisor count, "highly abundant numbers" may be the closer term). I couldn't google my way to the right term at the time, so I used a brute-force approach to find a solution, and it took forever. I think the right way to do this would be to construct a pool of good candidates by multiplying powers of small primes and test only those, instead of testing everything. But since they're not quite proper HCNs, there might be some pitfalls to dig your way out of if you took that approach...
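The candidate-pool idea in sketch form. Assumptions to flag: that candidates of the form 2^a * 3^b * 5^c * 7^d * 11^e with small exponents cover the answer, and that the divisor sum (computed via the multiplicative formula sigma(p^k) = (p^(k+1) - 1) / (p - 1)) stands in for the per-house present count. The target here is a toy value, not the puzzle input.

```r
primes  <- c(2, 3, 5, 7, 11)
max_exp <- 4
grid <- expand.grid(rep(list(0:max_exp), length(primes)))  # all exponent tuples

candidates <- apply(grid, 1, function(e) prod(primes ^ e))
sigma      <- apply(grid, 1, function(e)                   # divisor sum
  prod((primes ^ (e + 1) - 1) / (primes - 1)))

# Smallest candidate whose divisor sum reaches a (toy) target:
target <- 100
min(candidates[sigma >= target])  # 48, since sigma(48) = 124
```

The pool is tiny (5^5 = 3125 candidates here) compared to scanning every integer, which is the whole point.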