Have you ever had the power go out at your home or your place of work? I think everyone’s hands go up at this question. What’s the longest you’ve gone without power? You in the back: 13 days?! Wow.
Personally, I’ve never gone more than a day. I’ve met the people who restore the power in my region; I’m quite fond of them and extremely impressed with what they do and how they do it. What do you think of your power company’s response to the outages you’ve experienced? You in the back: what about you?
What happened to you, your family, or your place of work as a result of those power outages and the time it took to restore electric service? How fast should power have been restored to avoid these impacts and make you like your power company better?
These are all questions that I'm interested in systematically understanding. This interest fits well with my conception of community resilience as linkages between community well-being, community identity, infrastructure services, and infrastructure capitals. When the capital of power poles and substations is damaged, how is service restored, temporarily and permanently? How does this affect our sense of identity (e.g., our ability to do our normal stuff)? Ultimately, how does this all impact aspects of our well-being, particularly collectively as a community?
In August 2012, Hurricane Isaac landed in Louisiana. It never made it above a Category 1 and there wasn't a ton of direct wind or flood damage, except in St. John the Baptist Parish and on the east bank of Plaquemines Parish. As a result, the news media weren't very interested in the disaster, and many folks probably don't even remember it happened. For me, as the community disaster resilience nerd, I was quite intrigued when I saw the unique characteristics of the disaster. It was a strangely ideal case for studying the impacts of, and restoration from, a major power outage. Very little else happened, so the impacts of the power outage were easy to isolate. Luckily, the NSF was kind enough to fund a small project to satisfy my curiosity.
While Isaac was small, he was pretty darn scrappy. Isaac made landfall twice—took a lickin' and kept on tickin'. The first landfall was in Plaquemines Parish on the evening of August 28; the second was the following day, just west of Port Fourchon. Isaac lingered over the Greater New Orleans area with winds over 30 mph for up to 54 hours. This, in case you don't know, is unusual; it's twice as long as any other hurricane on record in the state.
Hurricane Isaac left nearly 900,000 (43%) Louisiana customers without electricity, with the peak outage occurring on August 30. This number is comparable to the power outages in Louisiana resulting from Hurricane Katrina in 2005 (800,000) and Hurricane Gustav in 2008 (1.1 million). (Isaac hanging out with the big girls and boys…) Multiple parishes had 90% of customers without power. In some locations, customers were without power for 10 days. Plaquemines, St. Bernard, Assumption, Jefferson, and Orleans parishes experienced the highest percentage of customers without power for the longest number of days, with Jefferson Parish having the most customers out overall. The table at the top of this post gives you a sense of the broad scale of this outage, as well as the range of restoration times. The table lists parishes in order of decreasing number of days that at least 3% of customers were without power. The red shading gives an idea of what most people experienced (as opposed to the select few that had super long outages): it shows the number of days at least 50% of customers were without power. (By the way, the table was put together by her.)
My NSF study focused on Orleans, Jefferson, St. John the Baptist, and Plaquemines parishes. Entergy Louisiana serves the latter three parishes and a portion of Orleans, while Entergy New Orleans serves the rest of Orleans Parish. (Confusing? Yes.) These parishes experienced relatively widespread outages with respect to number and percentage of customers without power: 161,802 (86%), 176,978 (85%), 19,443 (98%), and 11,870 (95%), respectively. The longest power outage occurred in Plaquemines Parish at the southeastern tip of Louisiana.
My awesome research assistants and I did quite a few things with the small pot of money. Lately I've been thinking about one set of data we collected through a phone/online survey of high-level managers at businesses that lost power in the four parishes. (Well, somehow we ended up with five businesses that answered questions even though they didn't lose power… it turns out that was kinda helpful!) We got answers to some or all of 30 questions from over 250 businesses. I can't recall what our response rate was—pretty average, but awfully expensive for 250ish responses.
Don't worry. I'm not going to write about the analysis of all 30 questions! I'll stick to just three, but they are a fascinating three questions:
- How many days was your place of business without power?
- What was the percentage of average monthly revenue lost as a result of the power outage?
- How would you rate the performance of Entergy in restoring power to your business? [Very Low (1), Low (2), Medium (3), High (4), Very High (5)]
The first question represents the speed with which electricity service was restored for each business. The second question gets at the impact on the financial well-being of each business. What is the third question measuring? Well, we should get some idea from the analysis below.
For all of these businesses, the only direct impact was from the loss of power. They didn’t experience any flood or wind damage. It’s like a laboratory experiment where the only impact from the hurricane is a loss of power. Then you can study all the ripple effects that spin off from just that one impact.
Pretty cool, I know.
So why those three questions? Basically, by looking at the answers to these three questions we can (hopefully) tell what annoys/impacts power customers (in this case, businesses) more—the speed with which the power company restores the power, or the actual impact to the customer—here, revenue lost by the business.
Here's a fun collage of subplots to give you a sense of the answers from the three questions. (I'll call them variables from now on.) The diagonal subplots are histograms showing the number of customers with similar answers to each question. The other subplots show the one-to-one relationship between each pair of variables. For example, what happens to revenue with an increasing number of days without power? You might notice that there are fewer than 250ish data points in the plots. It's true. I took out all respondents who didn't answer all three questions, which left about 100 responses. Lots of people didn't know or wouldn't look up their revenue loss. Alas.
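In case you want to replicate that filtering step, keeping only complete responses is a one-liner in pandas. A minimal sketch on made-up toy rows (the column names are mine, not from the survey instrument):

```python
import numpy as np
import pandas as pd

# Toy responses; NaN marks an unanswered question
raw = pd.DataFrame({
    "days_out": [2, 5, np.nan, 7, 1],
    "revenue_lost": [0.5, np.nan, 1.0, 9.0, 0.0],
    "rating": [4, 2, 3, 1, 5],
})

# Keep only respondents who answered all three questions (listwise deletion)
complete = raw.dropna(subset=["days_out", "revenue_lost", "rating"])
print(len(complete))  # 3 of the 5 toy rows survive
```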
You should notice two things pretty quickly.
First, the distributions for the three questions are quite different. The “Days Out” distribution is uniform (this was by design). The “Revenue Lost” distribution looks something like a gamma distribution (ignore this if it means nothing to you). The “Performance Rating” distribution is pretty damn normal (bell curve). So right away this tells us something.
Second, well, it's kinda tough to tell if there's a great correlation between any of the three variables. There certainly weren't businesses saying they lost “lots” (it's not a ton) of revenue if their power was only out a couple of days. But even after a week without power, lots of businesses didn't lose much revenue, though a few did. Similarly, no one rated Entergy (the power company) poorly if their power didn't go out or was back on in a day. But the point remains that there is a lot of noise and variability in the data.
I’m going to try to filter some of it out and see if we can see a stronger pattern.
We can look at the same information in a more compact way by calculating a (Spearman) coefficient to see how strongly correlated the variables are. 1 means they are perfectly related in a positive way (they both increase) and -1 means a perfect negative correlation (one goes up, the other down). 0, of course, means no correlation. The table below shows the correlation coefficients between the three variables.
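If you'd like to compute one of these yourself, pandas will produce the whole Spearman matrix at once. A minimal sketch with fabricated responses (not the actual survey data):

```python
import pandas as pd

# Fabricated illustrative responses -- NOT the real survey data
df = pd.DataFrame({
    "days_out":     [0, 1, 2, 4, 5, 7, 3, 6, 2, 7],
    "revenue_lost": [0.0, 0.0, 0.5, 1.0, 2.0, 8.0, 0.5, 3.0, 1.0, 10.0],
    "rating":       [5, 5, 4, 3, 3, 1, 4, 2, 4, 1],
})

# Spearman rank correlation between every pair of variables
corr = df.corr(method="spearman")
print(corr.round(2))
```

Spearman works on ranks rather than raw values, so the skewed “Revenue Lost” distribution doesn't distort it the way it would a plain Pearson correlation.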
It says about the same thing we saw in the scatter plots above: there are some correlations, but they aren't really strong (not that close to 1). It is interesting that the performance rating is more strongly correlated with the number of days out than with the amount of revenue lost. In other words, customers might get more upset simply because the power is off, even if the outage had no measurable impact on them.
And just in case you're in a hurry, that is the final conclusion of this post. However, the results up to this point may not be that convincing to you or others. It'd be nice to gain a bit more confidence in what the data might be saying. Plus, I have lots of cool graphs to show!
One thing I did was split the data set into two roughly equal samples based on the number of days without power. About 50% of survey respondents had power back in fewer than 4 days; the other 50% took 4 days or more. Take a look at the histograms below comparing the customers who had power back relatively quickly and those who had power out longer. The distributions between these two groups are pretty different. Most folks that had power back quickly didn't lose much if any revenue, and only four were mean enough to say that Entergy did a very poor or poor job at restoring power. Those that were without power longer weren't all equally screwed or ticked off, but clearly the probability was higher than for those customers who had power back quickly.
The below set of histograms was created similarly, except this time I split the data set in two based on revenue lost. About 50% lost less than 1% of average monthly revenue and about 50% lost more.
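Both splits are simple boolean filters. A sketch on toy data, using the same thresholds as above (4 days without power; 1% of average monthly revenue):

```python
import pandas as pd

# Toy data standing in for the ~100 complete survey responses
df = pd.DataFrame({
    "days_out":     [1, 2, 3, 4, 5, 6, 7, 2],
    "revenue_lost": [0.0, 0.2, 0.8, 1.5, 2.0, 4.0, 9.0, 0.5],
})

# Split on the thresholds used in the post
fast = df[df["days_out"] < 4]
slow = df[df["days_out"] >= 4]
small_loss = df[df["revenue_lost"] < 1.0]
big_loss = df[df["revenue_lost"] >= 1.0]
print(len(fast), len(slow), len(small_loss), len(big_loss))
```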
The histogram for “Days Out” is pretty compelling. Fewer and fewer businesses lost less than 1% as the number of days without power increases. At the same time, the number of businesses that lost more than 1% increases with more days out. It's also pretty obvious that those that lost a chunk of change were much more likely to rate Entergy poorly, but maybe not as much as you'd think.
The trouble with splitting the data set up like I did above is that we lose the interaction between all three variables at the same time. Clearly, both variables have an effect on the performance rating, but the splits don't show how the variables impact the performance rating in combination with each other.
In another post, I talked about how I’m a fan of principal component analysis (PCA) to help wrestle out insights from hyper-dimensional data. Well, in this case, I’m using PCA, together with cluster analysis, to wrestle out insights from noisy data. I’m not going to explain PCA here; you can read my other post and check out wikipedia if you’re really interested. The big thing for now is just that it simplifies data and, hopefully, allows you to see patterns you wouldn’t be able to see otherwise.
Really, PCA is just fun to do and helps make pretty graphics. Each circle above is a respondent to the survey (a data point). The darker the green, the more revenue was lost. The darkest green is equal to the maximum revenue lost of all survey respondents (10%). The larger the circle, the more days without power. The largest circle is equal to the longest time without power in the data set (7 days). The rating of Entergy's restoration performance given by each respondent is given as a number (1 to 5, low being bad) at the center of each corresponding circle.
Three variables; one graph!
So are there any patterns in the graph above? Yup! As you go right to left, the number of days out increases, which explains the most variance in the data. There are no 1s for small circles; there are no 5s for big circles. You can see diagonal bands of similar shades of green—relatively equal amounts of revenue loss. And while there are no 4s or 5s on dark circles (high revenue loss), you get the whole range for light circles. Lastly, as you go from the lighter bands of circles to the darker bands, there is a smaller range of days out (circle diameters), and that range tends towards the high side (longer outages).
What are those colorful rings around all of the circles, you ask? They identify clusters within the data, based on some interaction between the three variables ‘Days Out,’ ‘Revenue Lost,’ and ‘Performance Rating.’ I found these clusters with a k-means algorithm. I chose to find 10 clusters (you can choose any number you want, theoretically), which identify similar groupings of values between the three variables.
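For anyone who wants to try this at home, here's a hedged sketch of the PCA-plus-k-means pipeline on synthetic data (the real analysis used the ~100 complete survey responses; the variable construction below is entirely made up for illustration):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for the survey: days out drives revenue lost and rating
rng = np.random.default_rng(0)
days = rng.integers(0, 8, size=100)                                  # 0-7 days
revenue = np.clip(days * 1.2 + rng.gamma(2.0, 1.0, size=100) - 2, 0, 10)
rating = np.clip(5 - days // 2 + rng.integers(-1, 2, size=100), 1, 5)
X = np.column_stack([days, revenue, rating]).astype(float)

# Standardize so no variable dominates, then project to 2 components for plotting
Z = StandardScaler().fit_transform(X)
coords = PCA(n_components=2).fit_transform(Z)

# k-means on the standardized variables; 10 clusters, as in the post
labels = KMeans(n_clusters=10, n_init=10, random_state=0).fit_predict(Z)
print(coords.shape, len(set(labels)))
```

The `coords` array gives each respondent's position in the PCA scatter plot, and `labels` gives the colored rings.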
It’s these clusters that will help us remove the noise.
We can find the essence of each cluster by finding the mean value of each variable within each of the 10 clusters (and other statistics, if you want). The below table shows just that. You'll notice that I sorted the table so that the cluster with the lowest average performance rating is on top, with increasing ratings as you go down the table. Do a quick comparison of the three variables now. The patterns become much clearer. The performance rating almost perfectly increases as the number of days out goes down. The relationship with revenue lost isn't quite so perfect.
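The per-cluster means are a single groupby in pandas. A sketch with toy cluster labels (illustrative only, not the real cluster assignments):

```python
import pandas as pd

# Toy clustered responses; the "cluster" column would come from k-means
df = pd.DataFrame({
    "cluster":      [0, 0, 1, 1, 2, 2],
    "days_out":     [7, 6, 4, 3, 1, 0],
    "revenue_lost": [9.0, 7.0, 2.0, 1.0, 0.0, 0.2],
    "rating":       [1, 2, 3, 3, 5, 5],
})

# Mean of each variable within each cluster, worst average rating first
means = df.groupby("cluster").mean().sort_values("rating")
print(means)
```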
Of course, we don't have to rely on our eyes. We can calculate those correlation coefficients again. That's what the table below is.
Voila! There is a nearly perfect negative correlation between days out and performance rating; not so much with revenue lost. In other words, taken together, days out has the greater influence, but there is some interaction with revenue lost in predicting the performance rating.
Compare this correlation coefficient table with the one at the beginning of the post. It was tough to draw a hard conclusion about what the noisy data was saying. We thought we could understand the relationships, but the correlations just didn't seem strong enough to get excited about. But after all the fancy data cleansing we did, I think maybe we can be excited.
Overall, businesses surveyed were much more likely to give Entergy's work a poor rating simply because the power was out for a while. Whether a lot or a little revenue was lost doesn't influence their perception of Entergy's performance nearly as much. Further, there's even less of a correlation between the amount of revenue lost and the number of days without power than shown in the previous correlation table. An increasing number of days without power doesn't seem to have much correlation with the financial well-being of businesses—at least in this case.
So what gives? Well, two things off the top of my head. First, it's just statistics on data from a small sampling of businesses (albeit a fairly representative one), none of whom lost that much revenue. Second, I think our collective identity is so strongly tied to having electricity and the wonderful benefits that service gives us that we are likely to get upset simply because losing power gives us an identity crisis. I'm not saying this doesn't have impacts on our well-being, because satisfaction is part of well-being, and of course other negative impacts to well-being are possible.
My point is ultimately that customers’ opinion on whether a power company restored electricity fast enough is not really about physical or financial harm “caused” by the loss of power and the power company’s “poor” performance. It’s wrapped up in a complicated set of entitlements, expectations, and norms that we privileged folks have. (If you are interested in this issue, I’d suggest reading this book and that book.)
This is something to consider the next time people scream for some power company's CEO to be fired because power wasn't restored “fast enough.” Oh, and how fast is fast enough? It appears from this analysis that to get a positive performance rating, a CEO had better ensure that power is back on within two days.
Yikes. Glad I’m not responsible for getting the power back on.