Now I feel like I should send you the book chapter I just wrote. (A lot of it feels very much like this...)
Covering the chessboard is one of those things that a constraint satisfaction engine is really the right thing for, I think.
Does that mean I explained it reasonably well?
The "right" way is to set the system up as a Markov chain, and then ask "which of the two terminal states does the chain wind up in?" You can do that by inverting a matrix you get from the transition probability matrix of the chain.
So that is to say, there's a more general way that I would solve the problem, and understanding it can be very helpful. (In a domain I didn't know well, we used this kind of approach a while back to compute the probability of a certain kind of evolutionary network occurring in the simplest model of evolution.)
That said, your answer is a good way of explaining it if you're willing to talk about infinite series, but not Markov chains, which probably describes a much larger audience.
It occurs to me (and I'm thinking about this right now in another window...) that you could use this style of approach to answer a question I wanted to answer around 15 years ago, which is to determine the optimal strategy for fighting battles in Risk, and computing the victory probabilities.
I actually had the notion of Markov chains (which I've never learned anything formal about) in the back of my head when I made the little state diagram.
So, let's see, if I were following the standard formalism, I'd use the diagram to make a transition matrix. If I number the states in the diagram 1-7 going in reading order, I get something like this:
- q - p - - -
q - p - - - -
- - - - - - -
- - - - q - p
- - - q - p -
- - - - - - -
- - - - - - -
Is that right? What do you do with the inversion to get the solution?
That's close, but put 1's in the (3,3) entry and the (6,6) and (7,7) entries. That indicates that after player 1 wins, he's still won.
And, really, you don't care if the final score was 13-15 or 14-15, so join those two states together.
Now, let's call this matrix A. The initial probability distribution is the column vector v = [1,0,...,0]': that is, we always start in state 1. (True Markov chains can allow for uncertainty of the initial state.) A*v tells us the probability distribution after one step. A*A*v tells us the distribution after two steps. A^1000000*v tells us what we really want to know, which is "what is the probability distribution that tells us where the chain is after we've let it go for a really long time?" [Now it should be clear why we wanted those 1s on the diagonal in the transition matrix.]
So we essentially want to know what A^∞*v looks like. I always get the formula wrong, but I think this is something like (I-A)^(-1)*v, where I is the identity matrix. It's in a book on my shelf at the office...
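Here's a minimal sketch of both routes in Python/NumPy (rather than Maple), taking the seven-state matrix above at face value with the 1s added at (3,3), (6,6), and (7,7), and assuming q = 1 - p with an illustrative p = 0.5. One small wrinkle: as written the matrix is row-stochastic (each row sums to 1), so the distribution travels as a row vector, v @ A rather than A @ v. And the inversion that actually works uses only the transient part of the chain, since I - A itself is singular once the diagonal 1s are in place.

```python
import numpy as np

p = 0.5          # illustrative value; the puzzle asks for which p player A is favored
q = 1 - p

# The 7-state matrix from the thread (0-indexed here), rows = "from", cols = "to",
# with the suggested 1s making states 3, 6, 7 (indices 2, 5, 6) absorbing.
A = np.zeros((7, 7))
A[0, 1], A[0, 3] = q, p
A[1, 0], A[1, 2] = q, p
A[3, 4], A[3, 6] = q, p
A[4, 3], A[4, 5] = q, p
A[2, 2] = A[5, 5] = A[6, 6] = 1.0   # "after player 1 wins, he's still won"

v = np.zeros(7)
v[0] = 1.0       # always start in state 1

# Route 1: just let the chain run. Row-stochastic A evolves a row vector:
dist = v @ np.linalg.matrix_power(A, 1_000_000)
print(dist.round(6))   # all mass ends up in the absorbing states

# Route 2: the closed form. With Q the transient-to-transient block and
# R the transient-to-absorbing block, absorption probabilities are
# (I - Q)^(-1) @ R -- this is the inversion the thread is groping for.
t, a = [0, 1, 3, 4], [2, 5, 6]
Q, R = A[np.ix_(t, t)], A[np.ix_(t, a)]
B = np.linalg.solve(np.eye(len(t)) - Q, R)
print(B[0].round(6))   # same numbers as dist[[2, 5, 6]]
```

With p = 0.5 both routes agree, which is a decent sanity check that the 1s on the diagonal are doing their job: the transient states drain to zero and the absorbing states collect everything.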
Oh, right, and if you're unhappy about the whole "but I did it symbolically" thing, insert here a plug for Maple...
Yeah, the 1s make it a lot easier to invert the matrix, too. (Durh!)
You're welcome. This doesn't quite rise to the level of "ohmygodsocool" for me, but it's close.
Mathematical abstractions and such have never been my strong suit, and there are times I do wish I had that ability, but on the other hand, give me an electronic gadget w/out instructions and I'll figure out how to use it in no time flat. :-)
oh sure, spoilsport, lead off with the answer.
i am not reading your post, but rather am trying somewhat idly (and presumably continually less idly throughout the day) to solve your little puzzle.
ditto the knights thing, on which I think I might be close to tightening the lower bound from 8 to 10.
Don't you mean:
*cough* *coughsuper-mega-dorkcough* *cough*
Can't you just say, p=100 and be done with it? Does the problem explicitly state that it's the minimum probability they're looking for?
Yeah, it asks what range of values of p give A greater odds of winning than losing.
But that's assuming that p and q have the same probability of scoring a point. My assumption would be 13:14 (the score so far) instead of 1:1. I'd also weight the probabilities that a point would be scored in 1, 2, 3... rounds asymptotically.
But then again I'm a geek-dweeb-nerd. Or a mathochist.
Sure, but that assumption that they have the same odds of winning is given in the problem.
Why adjust the probabilities in subsequent rounds? If you have a constant probability for server scoring a point, you get an exponential decay in the distribution of round lengths.
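A quick numerical check of that claim, assuming each serve independently produces a point with some constant probability s (0.4 here is just an illustrative value): the round-length distribution is geometric, so each successive length is less likely than the last by the same constant factor (1 - s).

```python
s = 0.4  # illustrative per-serve probability that a point is scored
lengths = range(1, 8)
# P(round takes exactly k serves) = (1 - s)^(k-1) * s  -- geometric distribution
probs = [(1 - s) ** (k - 1) * s for k in lengths]
ratios = [probs[i + 1] / probs[i] for i in range(len(probs) - 1)]
print([round(r, 6) for r in ratios])  # every ratio equals 1 - s: exponential decay
```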
I realized that while musing over that this morning. You're right, of course.
Next time, put the answer BEHIND the cut as well.
Well, see, I put the solution behind the cut, figuring that the number itself wasn't really very interesting. The answer is more about the formula that gives you the number than whatever you get when you plug things into it, right?
Sorry, hope I didn't spoil it for ya.