As I mentioned briefly here, it doesn’t make a whole lot of sense to try to characterize resilience using methods that spit out a single number – at least not from a theoretical perspective, and not without greater methodological context for what that number represents. This is because of the complexity of community resilience. That complexity is only evident from a theoretical perspective: empirically or methodologically, one could simply look at a single variable, behavior, or event over time and draw some conclusion about resilience. Theory helps us know which of the many things to look for.
The complexity of resilience means it is severely hyper; resilience is hyper-dimensional. The community resilience model ResilUS (Miles and Chang, 2011; Frazier et al., 2013) has at least 45 variables – I might have lost count. And obviously this is just a model, and so by definition empirically incomplete (though I believe ResilUS incorporates more variables than any other model in the literature). So to understand resilience, rather than simply determine whether a community is or isn’t resilient, we need to treat this hyper-activity. And what does one use to do that?
Okay, actually there are many meds (methods!) for addressing hyper-activity.
Similarly, the analysts among us need to apply, experiment with, and develop methods for treating the hyper-activity of community resilience. Data (geo)visualization is one way to do this. I showed a somewhat brute-force example of this using sparklines in that other post. I think there are some great applications for sparklines, but really it’s not making resilience data less hyper; it’s giving you the full experience of the hyper-dimensionality. And sometimes hyperness makes you tired.
So how do you make community resilience less hyper? Machine learning offers a suite of methods for reducing the number of dimensions. Among these, principal component analysis (PCA) is a simple one. PCA essentially collapses highly correlated data columns into a smaller number of new columns (components), each accounting for some proportion of the variance in the full dataset.
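To make that concrete, here’s a minimal sketch of PCA on some made-up recovery-style data. The variables and numbers are entirely hypothetical (not the ResilUS output used for the plots), and I’m using plain NumPy via the SVD rather than a dedicated library, just to show what’s going on under the hood:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 20 neighborhoods x 5 indicators, where several
# columns are deliberately correlated (as recovery variables tend to be).
base = rng.normal(size=(20, 2))  # two underlying "factors"
data = np.column_stack([
    base[:, 0],
    base[:, 0] * 0.9 + rng.normal(scale=0.1, size=20),   # near-copy of column 1
    base[:, 1],
    base[:, 1] * -0.8 + rng.normal(scale=0.1, size=20),  # inverse of column 3
    base[:, 0] + base[:, 1],                             # mix of both factors
])

# PCA via SVD: center each column, then project onto the top components.
centered = data - data.mean(axis=0)
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
explained = s**2 / np.sum(s**2)  # proportion of variance per component
scores = centered @ Vt[:2].T     # each neighborhood reduced to 2 numbers

print(scores.shape)  # (20, 2): five correlated columns collapsed to two
```

Because the five columns were built from two underlying factors, the first two components soak up nearly all of the variance – that’s the "less hyper" part.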
Once you’ve applied PCA to your recovery data, you can visualize it much more efficiently, as illustrated by the plot above. It manages to squeeze 19 data variables (dimensions) into six graphical dimensions (x1, y1, x2, y2, color, size). This data, by the way, was generated using ResilUS (Miles and Chang, 2011). I didn’t take any care in putting the input data together, though, so the data may not make logical sense.
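If you’re curious how a single scatter plot can carry six graphical dimensions, here’s a rough sketch of the encoding idea using matplotlib and random stand-in data (the variable names are hypothetical; this isn’t the actual plotting code, which you can grab below):

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so this runs anywhere
import matplotlib.pyplot as plt
import numpy as np

rng = np.random.default_rng(0)
n = 20  # hypothetical neighborhoods

x1, y1 = rng.normal(size=(2, n))         # first pair of positional dimensions
x2, y2 = rng.normal(size=(2, n))         # second pair, drawn as arrow endpoints
pct_renters = rng.uniform(0, 1, size=n)  # fifth dimension: color
damage = rng.uniform(5, 30, size=n)      # sixth dimension: marker size

fig, ax = plt.subplots()
# Arrows from (x1, y1) to (x2, y2) encode the second pair of coordinates.
ax.quiver(x1, y1, x2 - x1, y2 - y1,
          angles="xy", scale_units="xy", scale=1, width=0.003)
sc = ax.scatter(x1, y1, c=pct_renters, s=damage**2, cmap="viridis")
fig.colorbar(sc, label="% renters (hypothetical)")
fig.savefig("resilience_pca.png")
```

The point is just that position, color, and size are independent channels, so one mark per neighborhood can carry several PCA scores at once.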
With this plot you can see interesting things, like the association between the percentage of households that rent and the number of households that left their neighborhood, or how single-family homes (SFR) are associated with low damage and owner-occupied homes – in neighborhoods 19, 3, and 4.
Another machine learning technique you can use – this one to highlight neighborhoods that share similar resilience traits – is k-means cluster analysis. The first step is running PCA (it makes the results more manageable). Then you can make a similar plot, below, but visualize the clusters (in this case by color).
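Here’s a bare-bones sketch of how that clustering step works, again on made-up 2-D PCA scores rather than the real ResilUS output. This is a hand-rolled Lloyd’s algorithm in NumPy (in practice you’d use a library implementation, but the logic is the same):

```python
import numpy as np

def kmeans(points, k, iters=50, seed=0):
    """Minimal Lloyd's algorithm: assign each point to its nearest
    centroid, recompute centroids as cluster means, repeat."""
    rng = np.random.default_rng(seed)
    centroids = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iters):
        dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = points[labels == j].mean(axis=0)
    return labels, centroids

# Hypothetical PCA scores: two well-separated groups of 10 neighborhoods.
rng = np.random.default_rng(1)
scores = np.vstack([
    rng.normal(loc=(0, 0), scale=0.2, size=(10, 2)),
    rng.normal(loc=(3, 3), scale=0.2, size=(10, 2)),
])

labels, centroids = kmeans(scores, k=2)
print(labels)  # neighborhoods in the same cluster share a label
```

In the real plot, those labels become the colors: neighborhoods with the same label get the same color, which is what lets you eyeball shared resilience traits.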
Okay, but how do you get a prescription for Adderall to curb the hyper-activity of community resilience? Well, you can see how I created the plots using this sample data. In short, I use Python (the best tool for the job, by far… and it’s free!) and some very cool libraries. You can grab my code and the sample data to play around with. You’ll need to get yourself set up with Python, including IPython. Just ask if you want some pointers to get going with that.