math - The Mad Schemes of Dr. Tectonic
Beemer

math [Sep. 9th, 2006|10:40 pm]
Beemer
I am a super-mega-dork, because I stayed up until stupid o'clock last night working on a math problem.

Technology Review (which all MIT alumni get) has a puzzle column, and I was reading it last night and got sucked into the racquetball problem, which goes like this:

In racquetball, if the server doesn't score a point, the other guy gets to serve. Whoever gets 15 points first wins. If the score is 13 to 14, the guy with 13 points is serving, and we assume that whoever is serving has probability p of scoring a point, what does p have to be for it to be more likely than not that the guy with 13 points wins the game?

The answer is, "anything above about 58.6%", and you totally don't care where that comes from, but I feel like explaining it anyway. Luckily for you, I'm nice and will hide it behind a cut rather than inflict it upon you involuntarily.

The thing that makes this tricky is the fact that if the server doesn't score a point, the serve passes to the other player, and it can go back and forth over and over as long as nobody scores a point. Figuring out how this translates into numbers (well, letters) is easier if you draw a little diagram showing the different possible states, like this:

(13)/14 <-- q --> 13/(14) -- p --> 13/15*
    \
     \
      -- p --> (14)/14 <-- q --> 14/(14) -- p --> 14/15*
                   \
                    \
                     -- p --> 15/14*


Notation: (13)/14 means player A has 13 points and player B has 14, player A is serving; the * indicates a final state where someone has won, and the arrows show the transitions between states with the probability of that transition.
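To make the diagram concrete, here's a tiny Python sketch (my own code, not from the puzzle column, and the function name is made up) that turns the four non-final states into equations and solves them directly:

```python
def win_probability(p):
    """P(player A wins) from score 13-14 with A serving,
    where the server scores a point with probability p (0 < p < 1).

    States match the diagram: a = (13)/14, b = 13/(14),
    c = (14)/14, d = 14/(14); parentheses mark the server.
    """
    q = 1.0 - p
    # From (14)/14: A scores now (prob p), or the serve bounces
    # A -> B -> A (prob q*q) and we're back where we started:
    #   W(c) = p + q^2 * W(c)   =>   W(c) = p / (1 - q^2)
    wc = p / (1 - q * q)
    # From (13)/14: A scores (prob p) and moves to (14)/14,
    # or the serve bounces (prob q*q) and we repeat:
    #   W(a) = p * W(c) + q^2 * W(a)   =>   W(a) = p * W(c) / (1 - q^2)
    return p * wc / (1 - q * q)
```

Plugging in a few values of p and comparing the result to 1/2 brackets the break-even point.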

So we can move from the initial state to 13/15* by going qp, or qqqp, or qqqqqp, and so on. We can get to player A winning by pp, or by qqqqpp, or pqqp, and so on. Recall that for independent events, P(X and Y) = P(X)·P(Y), and for mutually exclusive events, P(X or Y) = P(X) + P(Y). Thus:

P(Player A wins) = (p + q²p + q⁴p + ...) · (p + q²p + q⁴p + ...)
and
P(Player A loses) = (qp + q³p + q⁵p + ...) + (p + q²p + q⁴p + ...) · (qp + q³p + q⁵p + ...)

(In the second line, the first term is B scoring straight from the initial state, and the second term is A reaching 14 first, after any number of scoreless exchanges, and then B scoring. Note that the serve can bounce on that first leg too, which is why it's a full series and not just p.) Which we can rewrite as:

Condition for A winning = [p ∑ₙ₌₀ (q²)ⁿ]² > qp ∑ₙ₌₀ (q²)ⁿ · (1 + p ∑ₙ₌₀ (q²)ⁿ)

Now, since |q²| < 1 (q is a probability, hence between 0 and 1), the sum is a geometric series, and it's equal to 1/(1−q²). Which gives us for the win condition:

p/(1−q²) · p/(1−q²) > q · p/(1−q²) · (1 + p/(1−q²))

Cancel out a p/(1−q²), and you can rewrite that as

p/(1−q²) > q (1 + p/(1−q²))

Now substitute back in q = (1−p): since 1−q² = (1−q)(1+q) = p(1+q), the left side is just 1/(1+q), and after multiplying everything through by (1+q) the whole thing collapses to

q² + 2q − 1 < 0

Does that give us an answer? Well, it's just a quadratic in q, with roots −1 ± √2, so the inequality holds exactly when q < √2 − 1 ≈ 0.41421. Since p = 1 − q, that means p > 2 − √2 ≈ 0.58579. So player A is more likely to win than lose if p is greater than about 0.586. Hurrah!
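As a sanity check on the algebra, here's a quick Monte Carlo sketch (again my own code) that just plays the game out many times and counts how often the player on 13 wins:

```python
import random

def estimate_win_prob(p, trials=200_000, seed=42):
    """Empirical P(player on 13 points, serving first, wins to 15).

    Assumes 0 < p < 1: the server scores with probability p,
    otherwise the serve passes to the other player.
    """
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        score = {"A": 13, "B": 14}
        server = "A"
        while score["A"] < 15 and score["B"] < 15:
            if rng.random() < p:           # server scores a point
                score[server] += 1
            else:                          # serve changes hands
                server = "B" if server == "A" else "A"
        wins += score["A"] == 15
    return wins / trials
```

Comparing the estimate to 1/2 for a few values of p brackets the same break-even point as the algebra.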

I also found a covering of the chessboard with 12 knights, but I can't prove that's the minimum number you need, though I'm pretty sure it is.

EDIT: I found a site that has knight coverings! Optimality proofs down at the bottom.
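For the curious, checking a candidate knight covering is easy to automate; here's a small sketch (my own, hypothetical function name) that verifies a placement rather than searching for one. It uses the "occupied or attacked" sense of covering:

```python
def covers_board(knights, n=8):
    """True if every square of an n x n board is occupied or attacked
    by at least one of the given knights ((row, col) pairs)."""
    jumps = [(1, 2), (2, 1), (2, -1), (1, -2),
             (-1, -2), (-2, -1), (-2, 1), (-1, 2)]
    covered = set()
    for r, c in knights:
        covered.add((r, c))               # the knight's own square
        for dr, dc in jumps:              # the eight knight moves
            rr, cc = r + dr, c + dc
            if 0 <= rr < n and 0 <= cc < n:
                covered.add((rr, cc))
    return len(covered) == n * n
```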

Comments:
From: melted_snowball
2006-09-10 11:49 am (UTC)
Now I feel like I should send you the book chapter I just wrote. (A lot of it feels very much like this...)

Covering the chessboard is one of those things that a constraint satisfaction engine is really the right thing for, I think.
From: dr_tectonic
2006-09-10 04:04 pm (UTC)
Does that mean I explained it reasonably well?
From: melted_snowball
2006-09-10 04:21 pm (UTC)
The "right" way is to set the system up as a Markov chain, and then ask "which of the two terminal states does the chain wind up in?" You can do that by inverting a matrix you get from the transition probability matrix of the chain.

So that is to say, there's a more general way that I would solve the problem, and understanding it can be very helpful. (In a domain I didn't know well, we used this kind of approach a while back to compute the probability of a certain kind of evolutionary network occurring in the simplest model of evolution.)

That said, your answer is a good way of explaining it if you're willing to talk about infinite series, but not Markov chains, which probably describes a much larger audience.

It occurs to me (and I'm thinking about this right now in another window...) that you could use this style of approach to answer a question I wanted to answer around 15 years ago, which is to determine the optimal strategy for fighting battles in Risk, and computing the victory probabilities.
From: dr_tectonic
2006-09-10 05:42 pm (UTC)
I actually had the notion of Markov chains (which I've never learned anything formal about) in the back of my head when I made the little state diagram.

So, let's see, if I were following the standard formalism, I'd use the diagram to make a transition matrix. If I number the states in the diagram 1-7 in reading order, I get something like this:
- q - p - - -
q - p - - - -
- - - - - - -
- - - - q - p 
- - - q - p - 
- - - - - - - 
- - - - - - - 


Is that right? What do you do with the inversion to get the solution?
From: melted_snowball
2006-09-10 05:59 pm (UTC)
That's close, but put 1's in the (3,3) entry and the (6,6) and (7,7) entries. That indicates that after player 1 wins, he's still won.

And, really, you don't care if the final score was 13-15 or 14-15, so join those two states together.

Now, let's call this matrix A. The initial probability distribution is the column vector v = [1,0,...,0]': that is, we always start in state 1. (True Markov chains can allow for uncertainty of the initial state.) A*v tells us the probability distribution after one step. A*A*v tells us the distribution after two steps. A^1000000 * v tells us what we really want to know, which is "what is the probability distribution that tells us where the chain is after we've let it go for a really long time?" [Now it should be clear why we wanted those 1s on the diagonal in the transition matrix.]

So we essentially want to know what A^∞ * v looks like. I always get the formula wrong, but I think this is something like (I-A)^-1 * v, where I is the identity matrix. It's in a book on my shelf at the office...
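In that spirit, here's a small numpy sketch (my own code, not from this thread) that builds the chain with the two "B wins" states merged and the 1's on the diagonal as described, and simply powers the matrix up rather than inverting anything:

```python
import numpy as np

def absorption_probs(p, steps=200):
    """Returns (P(A wins), P(B wins)) from score 13-14, A serving.

    Column-stochastic convention, T[i, j] = P(j -> i), so T @ v
    advances the distribution one step. My own state numbering:
    0=(13)/14, 1=13/(14), 2=B wins (merged 13/15* and 14/15*),
    3=(14)/14, 4=14/(14), 5=A wins (15/14*).
    """
    q = 1.0 - p
    T = np.zeros((6, 6))
    T[1, 0] = q   # (13)/14 --q--> 13/(14)
    T[3, 0] = p   # (13)/14 --p--> (14)/14
    T[0, 1] = q   # 13/(14) --q--> (13)/14
    T[2, 1] = p   # 13/(14) --p--> B wins
    T[4, 3] = q   # (14)/14 --q--> 14/(14)
    T[5, 3] = p   # (14)/14 --p--> A wins
    T[3, 4] = q   # 14/(14) --q--> (14)/14
    T[2, 4] = p   # 14/(14) --p--> B wins
    T[2, 2] = 1.0  # absorbing: once the game is over, it stays over
    T[5, 5] = 1.0
    v = np.zeros(6)
    v[0] = 1.0     # always start in state (13)/14
    dist = np.linalg.matrix_power(T, steps) @ v
    return dist[5], dist[2]
```

After a couple hundred steps essentially all the probability mass sits in the two absorbing states, so the returned pair sums to 1 to machine precision.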
From: melted_snowball
2006-09-10 06:01 pm (UTC)
Oh, right, and if you're unhappy about the whole "but I did it symbolically" thing, insert here a plug for Maple...
From: dr_tectonic
2006-09-10 06:36 pm (UTC)
Yeah, the 1s make it a lot easier to invert the matrix, too. (Durh!)

Neat! Thanks!
From: melted_snowball
2006-09-10 06:39 pm (UTC)
You're welcome. This doesn't quite rise to the level of "ohmygodsocool" for me, but it's close.
From: ciddyguy
2006-09-10 02:31 pm (UTC)
Mathematical abstractions and such have never been my strong suit, and there are times I wish I had that ability, but on the other hand, give me an electronic gadget without instructions and I'll figure out how to use it in no time flat. :-)
From: thatwesguy
2006-09-10 03:07 pm (UTC)
oh sure, spoilsport, lead off with the answer.

i am not reading your post, but rather am trying somewhat idly (and presumably continually less idly throughout the day) to solve your little puzzle.

ditto the knights thing, on which I think I might be close to tightening the lower bound from 8 to 10.
(Deleted comment)
From: goobermunch
2006-09-10 07:29 pm (UTC)
Don't you mean:

*cough* *coughsuper-mega-dorkcough* *cough*

--G
From: helava
2006-09-10 11:14 pm (UTC)
Can't you just say p=100% and be done with it? Does the problem explicitly state that it's the minimum probability they're looking for?
From: dr_tectonic
2006-09-11 01:45 pm (UTC)
Yeah, it asks what range of values of p give A greater odds of winning than losing.
From: madbodger
2006-09-11 04:23 am (UTC)
But that's assuming the two players have the same probability of scoring a point. My assumption would be 13:14 (the score so far) instead of 1:1. I'd also weight the probabilities that a point would be scored in 1, 2, 3... rounds asymptotically. But then again I'm a geek-dweeb-nerd. Or a mathochist.
From: dr_tectonic
2006-09-11 01:54 pm (UTC)
Sure, but the assumption that they have the same odds of scoring is given in the problem.

Why adjust the probabilities in subsequent rounds? If you have a constant probability for server scoring a point, you get an exponential decay in the distribution of round lengths.
From: madbodger
2006-09-11 04:52 pm (UTC)
I realized that while musing over that this morning. You're right, of course.
From: thatwesguy
2006-09-15 04:40 am (UTC)
Next time, put the answer BEHIND the cut as well.

Meany.
From: dr_tectonic
2006-09-15 08:32 pm (UTC)
Well, see, I put the solution behind the cut, figuring that the number itself wasn't really very interesting. The answer is more about the formula that gives you the number than whatever you get when you plug things into it, right?

Sorry, hope I didn't spoil it for ya.