The paper is "Cross-scale interactions, nonlinearities, and forecasting catastrophic events", by Peters et al., in PNAS October 19, 2004, vol. 101, no. 42. (Not that the layman cares.)
Basically, the article says that there's a particular pattern followed by many catastrophes that is based on how the system in question interacts with itself and the outside world. The really important underlying idea is that the rules the system follows will change as the system grows. The trouble spots are where the rules change, because that's where your predictions and your systems for keeping things in check suddenly no longer work.
I've done a little generalization here, because I think their result is actually broader than they present it in the paper, but basically there are three types of behavioral thresholds. The first threshold is crossed when one kind of internal feedback becomes dominant and overwhelms the other dynamics. Positive feedback is the only kind we care about here, because negative feedback damps the disturbance until the system settles back down.
Okay, time for an example. Let's think about wildfire. When a wildfire first starts, there are lots of factors competing with each other. Dry plants burn better than wet plants. A breeze can fan the flames, but too strong a breeze will put them out. Air temperature makes it grow faster or slower. For a long time, the factors go back and forth and the fire might catch and it might burn out. But when it gets big enough and hot enough, a bunch of those factors become unimportant. The rules change, and now the fire can start to spread. One positive feedback loop (heat creates fire creates more heat) has taken over completely.
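Here's a toy sketch of that first threshold (my own illustration, not a model from the paper, and the parameter values are made up): a disturbance whose gains scale with its own size squared but whose losses scale linearly. Below a critical size the losses win and the fire dies out; above it, the heat-makes-fire-makes-more-heat loop takes over and growth runs away.

```python
def simulate(size, growth=0.5, loss=4.0, steps=100):
    """Step a crude 'fire size' forward. Gains are self-reinforcing
    (proportional to size squared); losses (cooling, fuel limits)
    are proportional to size. Critical size is loss/growth = 8."""
    for _ in range(steps):
        size += growth * size * size - loss * size
        size = max(size, 0.0)
        if size > 1e6:           # runaway: positive feedback has taken over
            return float("inf")
    return size

print(simulate(7.0))   # below the threshold: the fire burns out
print(simulate(9.0))   # above it: runaway growth
```

The point of the toy is that nothing about the rules changed at size 8; the same equation governs everything. What changed is which term dominates, and any forecast tuned to the small-fire regime is useless above the threshold.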
The second threshold is when the system's network connectivity changes. The perturbation or disruption spreads to a point where the connections to other pieces of the system change. The simplest form is percolation - independent clusters growing spatially until they touch and suddenly form one giant supercluster - but you can also have an event that grows until it reaches a physically different part of the system, or an event that causes the system to disconnect, breaking the network apart.
In the wildfire example, the fire will spread slowly and poorly from one patch of vegetation to another when they are separated by a region of low fuel. But if it can reach into the forest canopy, suddenly it's in a part of the system where everything is connected. The rules on how it spreads change. A crown fire spreads much faster than a ground fire, and all the separate patches become connected into one big fire. Or think about epidemiology: the SARS epidemic started when somebody in rural China first caught it, probably from a domesticated animal. The disease diffuses slowly among different small communities until it hits a large city - then boom, suddenly it's in Toronto, because the physical interaction connectivity is different. Big cities have airports with transcontinental flights: the rules have changed.
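The percolation picture behind this second threshold is easy to demonstrate directly. The sketch below (mine, not the paper's) fills a square grid with "fuel" at occupation probability p and measures the largest connected cluster: near the critical probability (about 0.59 for a square lattice) the biggest cluster jumps from small isolated patches to one grid-spanning supercluster.

```python
import random

def largest_cluster(p, n=60, seed=1):
    """Fraction of an n-by-n grid occupied by the largest connected
    cluster, when each site is independently filled with probability p."""
    rng = random.Random(seed)
    grid = [[rng.random() < p for _ in range(n)] for _ in range(n)]
    seen = [[False] * n for _ in range(n)]
    best = 0
    for i in range(n):
        for j in range(n):
            if grid[i][j] and not seen[i][j]:
                # flood-fill one cluster and count its size
                stack, size = [(i, j)], 0
                seen[i][j] = True
                while stack:
                    x, y = stack.pop()
                    size += 1
                    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        u, v = x + dx, y + dy
                        if 0 <= u < n and 0 <= v < n and grid[u][v] and not seen[u][v]:
                            seen[u][v] = True
                            stack.append((u, v))
                best = max(best, size)
    return best / (n * n)

for p in (0.4, 0.55, 0.65, 0.8):
    print(p, round(largest_cluster(p), 2))
```

Sweep p and the jump is abrupt: a small change in density, a huge change in connectivity. That's the crown-fire and big-airport story in miniature.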
The system can cross a third threshold when the event grows large enough that it crosses scales and starts to interact with the larger system it's embedded in. Effects that were previously too small to matter are no longer negligible, and again, the rules change as these extra dynamics generate new feedbacks between the system and the context it's operating in. Normally the "outside" or embedding system is decoupled from the smaller system and can be regarded as a relatively uniform and constant environment, but when the disruptive event grows big enough, variations in the outside system become important.
Normally, weather acts as an outside driver of fire behavior. Wind and moisture speed or slow the spread of fire, and the fire is at the mercy of the atmosphere. But hot air rises, and if the fire gets big enough, the heat it produces will start to change the airflow around it. Now the fire is affecting the weather as much as the weather affects the fire. The fire creates its own weather, forming updrafts that pull oxygen into the heart of the fire, spreading sparks and embers, and pre-heating the fuels ahead of the fire's front. This can lead to "blow-ups", where the fire
suddenly leaps ahead and burns so hot that everything catches fire, regardless of its fuel content or connectivity. The rules have changed again. Things that didn't used to matter now do. To understand the system, you now have to include a whole new set of dynamics that describe the larger-scale system's interactions with your system.
The paper also had some interesting examples involving the spread of disease and parasites (SARS, Lyme disease, pine bark beetle infestations), desertification of grasslands, and engineering failures like the Tacoma Narrows Bridge collapse in 1940.
They presented the framework as a pattern for catastrophe in systems moving from threshold 1 to 2 to 3 as they grow spatially, but I think it's more general than that. I think you can see these kinds of threshold-crossings all over, and this particular sequence is simply a common one related to catastrophes. Likely related to the emergence of catastrophes is the fact that you can often predict a system's behavior (even with simple linear extrapolation) within its current regime, but the place where it hits a threshold and changes its behavior is where your prediction will go wrong, and you will be surprised. That surprise can be worsened if you don't realize that the rules have changed and you keep trying to use the wrong tools to predict or react to the system. This framework gives you some hints about where you can start looking for thresholds before you actually run into them.
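That extrapolation failure is easy to make concrete with hypothetical numbers (mine, not the paper's): a quantity grows linearly until it crosses a threshold at t = 10, after which a new feedback makes it grow exponentially. A trend fit to the first regime predicts the second one badly.

```python
def actual(t):
    """Linear growth before the threshold at t=10, exponential after,
    as a stand-in for a regime change."""
    return 2 * t if t < 10 else 20 * 1.5 ** (t - 10)

def linear_forecast(t):
    # Extrapolate the pre-threshold trend (slope 2, intercept 0).
    return 2 * t

for t in (5, 9, 12, 15):
    print(t, actual(t), linear_forecast(t))
```

Inside the first regime the forecast is exact; a few steps past the threshold it's off by a factor of five and getting worse, which is exactly the kind of surprise the paper is describing.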
A lot of this is hopefully old news; for me, this is one of those "oh, of course" papers that tells you lots of things you already knew but hadn't really articulated. Many of the points come up at the edges of work on scale-free networks and complex systems, and I'm sure that a lot of it is in The Tipping Point (which I need to read). But I still enjoyed seeing it laid out in this way.