Comments: |
So, the agents are integrating signals from a number of sources (weighted, yes indeed) to determine their general alarm level. And once they hit their threshold, they act. That seems to me to be equivalent to what you've said above, just continuous instead of discrete.
I don't follow once we get to the infinite-state Markov chain. How does it have any implications for the distribution? In other words, it seems like the Markov chain representation is just a way of making a histogram of the agents' thresholds... what am I missing?
Note also that one of the signals the agents integrate is "who else has already evacuated", which would violate one of the assumptions for markov processes, no?
Oh, the chain is just to think about it in a different way. The point is that if the person is truly Markovian, the most no-information idea is to wrap all of the X_i states together into a single state, with probability p of "run like hell" and probability 1-p of "stay." In this context, the distribution of thresholds is geometric, and making it normal is the odd thing, not somehow a default. [Sorry--somehow I thought I'd written that, but I guess I just thought it...]
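To make the "geometric is the default" point concrete, here's a minimal sketch (all parameters hypothetical): a truly Markovian agent forgets how long it has waited, so each step it runs with the same probability p, and its waiting time before evacuating comes out geometric with mean 1/p.

```python
import random

random.seed(0)
p = 0.2            # hypothetical per-step "run like hell" probability
n_agents = 100_000

def steps_until_run(p):
    """Steps a memoryless (Markovian) agent waits before evacuating."""
    t = 1
    while random.random() >= p:   # with prob 1-p the agent stays this step
        t += 1
    return t

samples = [steps_until_run(p) for _ in range(n_agents)]
mean = sum(samples) / n_agents
print(f"empirical mean threshold: {mean:.2f} (geometric prediction 1/p = {1/p:.1f})")
```

The histogram of `samples` is exactly the collapsed-chain picture: lumping all the X_i states into one state with exit probability p gives a geometric threshold distribution, so a normal distribution of thresholds is the extra assumption, not the default.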
Yes, "who's already gone" would violate the Markovianness.
Ah! Okay, gotcha. Yeah, I think people are definitely non-Markovian in their decision process.
I'll fool around with gammas some. (Hopefully, when we do some sensitivity analysis of the model, one of the things we'll find out is that the details of the distribution of thresholds are not especially important. That'd be convenient.)
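A toy version of "fooling around with gammas" (parameters are made up for illustration): draw agent thresholds from gamma distributions that share the same mean but have different shapes, which is the kind of family you'd sweep over in the sensitivity analysis.

```python
import random
import statistics

random.seed(1)
mean_threshold = 5.0   # hypothetical mean alarm threshold
n = 50_000

results = {}
for shape in (1.0, 2.0, 8.0):          # shape=1 is the exponential special case
    scale = mean_threshold / shape     # hold the mean fixed while varying shape
    draws = [random.gammavariate(shape, scale) for _ in range(n)]
    results[shape] = (statistics.mean(draws), statistics.stdev(draws))
    print(f"shape={shape}: mean={results[shape][0]:.2f}, sd={results[shape][1]:.2f}")
```

Same mean everywhere, but the spread shrinks as the shape parameter grows; if the model's output barely moves across these, that's the convenient result hoped for above.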
There are a few decent handbooks on basic distributions that are a sensible thing to have on your shelf. I don't have any of them (I have a mathematical statistics book), but getting one is a good idea.