Bruneau et al. (2003), as well as many great studies based on their formative ideas, define metrics of resilience based on the area under the curve representing the quality or quantity of some indicator over time. (Technically, they define resilience based on the size of the “resilience triangle” above the curve—see their Figure 1—but the inverse makes more sense to me and is more commonly adopted in the literature).

Like Bruneau et al. (2003), let’s use *Q(t)* to represent some normalized indicator of quantity over time—say, the number of hotel units available over time after the February 2011 earthquake in New Zealand. Two examples of *Q(t)* are shown in the middle graph of the figure at the top of this post, labelled “Quantity.” (Don’t worry about the subscripts *s* and *d* for now.)

With *Q(t)* we can define efficiency resilience—the area under *Q(t)*—with the following equation.
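In symbols, one standard form (following Bruneau et al. 2003, with *t₀* marking the hazard event and *t₁* the end of the observation window) is:

```latex
R = \int_{t_0}^{t_1} Q(t)\,dt
\qquad \text{or, normalized,} \qquad
R = \frac{1}{t_1 - t_0}\int_{t_0}^{t_1} Q(t)\,dt
```

The normalized version on the right is the one for which a no-loss curve gives exactly 1.0.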

Don’t let the integral squiggle scare you—just read it as “the area under the curve defined by everything to the right of the squiggle.” The interpretation is simple: the larger the area under *Q(t)*, the higher the resilience. The area is, of course, largest when there is no loss at all—the greatest possible efficiency, requiring no recovery whatsoever. If the area under *Q(t)* is normalized by the time elapsed after the hazard event, perfect resilience is represented as *R* = 1.0. If any loss is suffered, then *R* < 1.0.
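As a small numerical sketch, here is the normalized area-under-the-curve calculation for an invented recovery curve (the curve shape and numbers are made up purely for illustration):

```python
import numpy as np

# Time in days after a (hypothetical) hazard event
t = np.linspace(0, 100, 1001)

# Invented recovery curve: quality drops to 0.4, then climbs back toward 1.0
Q = 1.0 - 0.6 * np.exp(-t / 20.0)

# Efficiency resilience: area under Q(t) via the trapezoid rule,
# normalized by elapsed time so a no-loss curve (Q = 1) gives R = 1.0
R = np.sum(0.5 * (Q[1:] + Q[:-1]) * np.diff(t)) / (t[-1] - t[0])
print(round(R, 3))  # ≈ 0.881, i.e. some loss was suffered, so R < 1.0
```

Because the curve never fully flattens at 1.0 within the window, *R* stays below 1.0, exactly as the interpretation above suggests.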

This is the big, big contribution of the Bruneau et al. (2003) paper: They managed to give people what they want—a single number describing resilience—but show that this can only be done with knowledge about what happens with respect to time after a hazard event.

Even though people seem to demand a single, absolute value characterizing resilience *before* a hazard event, we don’t really know the resilience of a system until afterwards. The more time that passes after the hazard event, the better we understand that resilience. In other words, resilience varies with time, *R(t)*, and we can’t tie a single number to the concept for any particular place. This is illustrated by the fourth graph down in the figure at the top of the post, labelled “Efficiency.”

Why is that graph labelled “Efficiency”? Because the graph reflects the common view that the highest resilience is associated with the most efficient recovery relative to the amount of loss suffered from the hazard event. I call this perspective efficiency resilience—resilience, in this view, means minimizing loss and maximizing the speed of recovery.

Most people who are interested in the dynamics of resilience stop at the equation above for *R(t)*. However, we can understand more about the recovery curve *Q(t)* with a bit more calculus. *Q(t)* tells us the cumulative amount of recovery that has occurred over time—say, the number of customers restored. But one can’t set, monitor, and meet recovery goals without knowing the speed of recovery. It’s not just about getting everything back; it’s about getting as much as possible back in some specific amount of time.

We can find speed if we take the first derivative of *Q(t)*. Let’s call speed *V(t)*, which is illustrated in the graph labelled “Speed” in the top-most figure of this post.
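In symbols, speed is just the first derivative of the recovery curve:

```latex
V(t) = \frac{dQ(t)}{dt}
```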

The above equation is all that is needed to find the speed of recovery in whatever units you want—say, the number of customers per day. And, by the way, that ratio of “d”s in the numerator and denominator just means “the slope of the *Q(t)* curve at a particular point in time.” Also, if you happen to have data on speed *V(t)* rather than *Q(t)*, you can just find the area under *V(t)* to get the recovery back.
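That last point—differentiating *Q(t)* to get speed, then integrating speed to get recovery back—can be checked numerically. This sketch uses an invented recovery curve; `np.gradient` estimates the derivative from sampled data:

```python
import numpy as np

# Time in days and an invented recovery curve (fraction of customers restored)
t = np.linspace(0, 100, 1001)
Q = 1.0 - 0.6 * np.exp(-t / 20.0)

# Speed of recovery: numerical first derivative of Q(t), in fraction per day
V = np.gradient(Q, t)

# Integrating speed back over time recovers the total change in Q(t)
total_recovered = np.sum(0.5 * (V[1:] + V[:-1]) * np.diff(t))
print(round(total_recovered, 3), round(Q[-1] - Q[0], 3))  # both ≈ 0.596
```

The two printed numbers match: the area under *V(t)* equals the total recovery accumulated by *Q(t)* over the same window.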

Another bit of calculus on *Q(t)* can help give insight into a common variable brought up when discussing resilience: adaptation. Adaptation, to me, is about how quickly you can make observations, learn, and make a new decision or take a new direction. You could call this “decision acceleration” because, as we know from physics class, acceleration, *A(t)*, is the first derivative of speed, *V(t)*. This is more conveniently written as the second derivative (or slope of the slope) of the recovery curve *Q(t)*, like so:
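Written out:

```latex
A(t) = \frac{dV(t)}{dt} = \frac{d^{2}Q(t)}{dt^{2}}
```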

Looking at the graph labelled “Adaptation,” we can see how quickly the speed of recovery is changing at any point in time. This is interesting because it’s possible to recover in the same amount of time in multiple ways. For example, one recovery scenario could be steady, with about the same speed every day, while another scenario could start out slow but build speed rapidly. The second scenario would exhibit more adaptation than the first.
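A toy comparison of those two scenarios (both curves invented, both reaching full recovery at the same moment): the steady, linear path has essentially zero acceleration, while the slow-start path shows positive acceleration throughout.

```python
import numpy as np

# Two invented recovery paths: both restore one unit over one unit of time
t = np.linspace(0.0, 1.0, 501)
Q_steady = t          # constant speed: V = 1, so A = 0
Q_adaptive = t ** 2   # starts slow, builds speed: V = 2t, so A = 2

means = []
for Q in (Q_steady, Q_adaptive):
    V = np.gradient(Q, t)   # speed
    A = np.gradient(V, t)   # adaptation (acceleration)
    means.append(float(np.mean(A)))

print(means)  # steady path: mean acceleration ~0; adaptive path: ~2
```

Both paths end up fully recovered at *t* = 1, yet only the second shows the “decision acceleration” described above.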

Okay, so why are there two curves with subscripts *s* and *d* in all the graphs I’ve described so far?

Well, a majority of the literature on dynamic community resilience focuses on the supply side of recovery, such as building permits issued, customers with service, available jobs, or, as I mentioned above, hotel units on the market.

However, community resilience has two sides to it: a supply side, of course, and a demand side as well. From the perspective of supply, community resilience is about minimizing loss and aiding the recovery of infrastructure provision as efficiently as possible. The demand side is the mirror image: it is about the loss and recovery of infrastructure consumption.

Hm. Resilience just got more complicated. We can’t just worry about the restoration, reconstruction, or recovery of supply; we have to think about demand, as well. More importantly, we have to think about the balance and whether it is sufficient for community well-being.

We need to be concerned with the adaptation (acceleration), speed, quantity (or quality), and efficiency of isolated recovery indicators. But we also need to put individual recovery indicators in context by analyzing the sufficiency of one indicator with respect to another—relationships of supply and demand. I call this form of resilience sufficiency resilience. Sufficiency explicitly incorporates both supply and demand.

Whereas efficiency is an absolute concept related to time, sufficiency is a relative concept related to the metabolic balance of what is enough or adequate. From the perspective of sufficiency resilience, we no longer need to worry about how to define when a community has “recovered.” We can focus on whether there is enough supply to meet demand, or enough demand to warrant supply. We can talk less about when a community will recover and more about whether more can be done to adequately balance the supply and demand of critical indicators.

You probably guessed that I’m going to show some math used to make the graph labelled “Sufficiency” in the figure at the top of this post.

You’re right! And, as far as I know, this is the first place that someone has attempted to write an equation to calculate a metric of sufficiency resilience. The math is a bit more complicated than what we’ve looked at so far. But hopefully it makes sense conceptually.

The goal of the calculus is to allow us to minimize the difference between supply and demand recovery trajectories, while simultaneously maximizing the area under both curves. The equation below expresses sufficiency resilience, *S(t)*, as the difference between the areas under the supply and demand curves, denoted by the subscripts *s* and *d*, multiplied by the average of the areas under both curves.
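Transcribing that verbal description into symbols (my notation; the subscripts *s* and *d* denote supply and demand, and τ is the variable of integration), sufficiency resilience reads something like:

```latex
S(t) = \left[\int_{t_0}^{t} Q_s(\tau)\,d\tau - \int_{t_0}^{t} Q_d(\tau)\,d\tau\right]
\cdot \frac{1}{2}\left[\int_{t_0}^{t} Q_s(\tau)\,d\tau + \int_{t_0}^{t} Q_d(\tau)\,d\tau\right]
```

This is a reconstruction from the description, so the original equation may differ in detail—for example, by taking the absolute value of the difference or normalizing each area by elapsed time.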

Another way to say the above equation is that sufficiency resilience is the difference between the efficiency of supply and demand scaled by the average efficiency for supply and demand.
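As a numerical sketch of that statement (curves and parameters invented for illustration, with supply recovering faster than demand), one way to compute a sufficiency value from two recovery curves is:

```python
import numpy as np

def area(t, q):
    """Trapezoid-rule area under a sampled curve."""
    return float(np.sum(0.5 * (q[1:] + q[:-1]) * np.diff(t)))

t = np.linspace(0, 100, 1001)

# Invented recovery curves: supply recovers faster than demand here
Q_s = 1.0 - 0.6 * np.exp(-t / 15.0)   # supply
Q_d = 1.0 - 0.6 * np.exp(-t / 30.0)   # demand

# Normalized efficiencies: time-averaged areas under each curve
R_s = area(t, Q_s) / (t[-1] - t[0])
R_d = area(t, Q_d) / (t[-1] - t[0])

# Sufficiency: difference of efficiencies scaled by their average
S = (R_s - R_d) * 0.5 * (R_s + R_d)
print(round(R_s, 3), round(R_d, 3), round(S, 4))
```

Here supply efficiency exceeds demand efficiency, so the sufficiency value comes out positive; a faster-recovering demand side would flip its sign.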

Now that I’ve introduced the calculus of resilience, have a look at a few more recovery scenarios, shown in the figures below, and compare them to the one shown in the figure at the top of this post. Look in particular at how the calculus plays out with respect to time after a hazard event based on the interaction of supply and demand adaptation, speed, quantity, and efficiency.

Whereas the figure at the top of the post shows both supply and demand recovering relatively efficiently, the figure below shows both recovering relatively inefficiently.

But of course, it’s possible for the recovery of one to be efficient while the other is inefficient, as illustrated below.

Obviously, advancing our understanding of the calculus of community resilience gives us methods to model resilience before a hazard event, as well as to gain some quantitative insight after an event. Just as importantly, the calculus serves to demonstrate the theoretical point that community resilience is an interplay between relatively static indicators or factors, such as race and gender, and extremely dynamic ones. That dynamism means that the math for comprehensively describing resilience can’t be simple and must include time. The math is further complicated by the need to understand indicators of both supply and demand. Even so, the math is much less complex than the number of variables and interdependencies it must be used to represent.

Oh, and if you want to play around with the calculus of resilience, here’s the code I used to make all the above figures. Let me know if there are mistakes or if you have ideas for revisions.