This seems weird.
I realize I don't understand your way of thinking about this concept. But let me propose a way of thinking that makes more sense to me, at least:
Let X be the random variable counting the number of warnings a person needs before they decide to evacuate. [I realize that in your model warnings probably have weights, but for my sanity's sake, let's say all "warnings" have weight 1.] A person's threshold is the number of warnings it takes before they run.
It seems to me that what you actually have is just an infinite-state Markov chain modeling the people: it has states X_0, X_1, ... and RAN, where the index i of X_i is the number of warnings the person has seen so far without evacuating, and RAN is the catch-all state for "I ran like hell." You want to know the distribution of the first-passage time to RAN.
It seems odd to assume that this distribution is quasi-normal. Quasi-geometric sounds more likely to me: that would mean every new warning convinces the same fraction of the remaining stragglers to run.
So, the agents are integrating signals from a number of sources (weighted, yes indeed) to determine their general alarm level. And once they hit their threshold, they act. That seems to me to be equivalent to what you've said above, just continuous instead of discrete.
I don't follow once we get to the infinite-state Markov chain. How does it have any implications for the distribution? In other words, it seems like the Markov chain representation is just a way of making a histogram of the agents' thresholds... what am I missing?
Note also that one of the signals the agents integrate is "who else has already evacuated," which would violate one of the assumptions for Markov processes, no?
Oh, the chain is just a different way to think about it. The point is that if the person is truly Markovian, the no-information default is to collapse all of the X_i states into a single state, with probability p of "run like hell" and probability 1-p of "stay." In that setup the distribution of thresholds is geometric, so making it normal is the odd choice, not somehow a default. [Sorry--somehow I thought I'd written that, but I guess I just thought it...]
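A minimal sketch of that collapsed chain, purely illustrative (the value of p is an assumption, not anything from the actual model): if every warning independently convinces a person with the same probability p, the number of warnings needed is geometric.

```python
import random

random.seed(0)
p = 0.3  # assumed per-warning probability of "run like hell"

def warnings_until_run(p):
    """Count warnings until the person runs, under the memoryless model."""
    n = 1
    while random.random() >= p:  # "stay" with probability 1-p
        n += 1
    return n

samples = [warnings_until_run(p) for _ in range(100_000)]

# For a geometric distribution, P(X = k) = (1-p)**(k-1) * p and E[X] = 1/p.
mean = sum(samples) / len(samples)
print(round(mean, 2))  # should land near 1/p ≈ 3.33
```

The empirical mean and the fraction running on the first warning both match the geometric prediction, which is the "no-information" baseline being argued for above.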
Yes, "who's already gone" would violate the Markovianness.
Ah! Okay, gotcha. Yeah, I think people are definitely non-Markovian in their decision process.
I'll fool around with gammas some. (Hopefully, when we do some sensitivity analysis of the model, one of the things we'll find out is that the details of the distribution of thresholds are not especially important. That'd be convenient.)
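For the fooling-around, a quick sketch of gamma-distributed thresholds (parameters are arbitrary assumptions for illustration). Shape k = 1 recovers the exponential, the continuous analog of the geometric, so the gamma family interpolates between the memoryless case and more peaked threshold shapes:

```python
import random

random.seed(0)
shape, scale = 2.0, 1.5  # assumed parameters, purely for illustration

# Draw 100k gamma-distributed thresholds.
thresholds = [random.gammavariate(shape, scale) for _ in range(100_000)]

# For a gamma distribution, E[X] = shape * scale and Var[X] = shape * scale**2.
mean = sum(thresholds) / len(thresholds)
print(round(mean, 2))  # should land near shape * scale = 3.0
```

Sweeping `shape` from 1 upward is one concrete way to run the sensitivity check: if outcomes barely move, the distributional details really don't matter much.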
There are a few decent handbooks on basic distributions that are worth having on your shelf. I don't have any of them (I have a mathematical statistics book), but getting one would be sensible.
I think the assumption that the threshold is based on the number of warnings is flawed.
I suspect there's some fraction of the population that will run on the first warning, and some that won't run at all, ever. But for the ones in between, the interesting threshold is probably the % of the population that has already run. As that fraction rises, the odds that someone you know has run go up sharply, and consequently so do your own odds of running.
So I'd expect a small chunk at first, an accelerating ramp-up, and then a sudden plunge in the rate of new evacuations.
Well, I was already assuming (correctly, it appears) that his system was working based on some way of integrating who's already run into a number.
I'm actually debating whether to do it using a global "fraction evacuated" number or to do it using random encounters...
That's separate from the number of people you know personally who have evacuated, for which we'll need a simple model of a social network.
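A rough sketch of the two signal options being debated, a global "fraction evacuated" number versus random encounters (all names and parameters here are illustrative assumptions, not the actual model):

```python
import random

random.seed(0)

def global_signal(evacuated, population):
    """Everyone observes the same overall fraction evacuated."""
    return sum(evacuated) / population

def encounter_signal(evacuated, population, n_encounters=10):
    """Each agent samples a few random others and observes their status."""
    met = random.sample(range(population), n_encounters)
    return sum(evacuated[i] for i in met) / n_encounters

population = 1000
# Pretend roughly 20% have already evacuated.
evacuated = [random.random() < 0.2 for _ in range(population)]

print(global_signal(evacuated, population))    # the exact fraction
print(encounter_signal(evacuated, population))  # a noisy estimate of it
```

The encounter version is just a noisy sample of the global number; the personally-known-people signal would replace `random.sample` with each agent's fixed neighborhood in the social network.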
I suspect there's some fraction of the population that will run on the first warning, and some that won't run at all, ever.
Absolutely correct. (Though there are some interesting questions about "won't run" versus "can't run.")
But for the ones in between, the interesting threshold is probably the % of the population that has already run. As that fraction rises, the odds that someone you know has run go up sharply, and consequently so do your own odds of running.
Yup. The really interesting part (I suspect) will be how the signal of other people deciding to act is filtered through people's social networks, because how your friends are acting has an entirely different kind of influence than how "everybody" is acting.
It'll probably be a sigmoid curve most of the time, but where it tops out and how it unfolds over time are exactly the useful details we hope the model can provide.
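That cascade can be sketched in a few lines (a toy Granovetter-style threshold model; the population mix and threshold ranges are assumptions for illustration only): some agents run immediately, some never run, and the rest have thresholds on the fraction already evacuated.

```python
import random

random.seed(0)
N = 1000

# Assumed mix: 10% run at once (threshold 0), 10% never run (threshold > 1),
# and the rest have thresholds spread uniformly on (0, 0.8).
thresholds = ([0.0] * 100 + [1.1] * 100
              + [random.uniform(0, 0.8) for _ in range(800)])

evacuated = [False] * N
curve = []  # cumulative fraction evacuated, step by step
for step in range(50):
    frac = sum(evacuated) / N
    for i in range(N):
        if not evacuated[i] and thresholds[i] <= frac:
            evacuated[i] = True
    curve.append(sum(evacuated) / N)

print(round(curve[-1], 2))  # → 0.9: tops out below 1.0 because never-runners stay
```

The curve starts with the small first-warning chunk, ramps up as each wave of evacuations pushes more agents over their thresholds, and plateaus where the never-runners cap it, which is the shape described above.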