
Tuesday, March 22, 2011

The Square of Risk

In researching a chapter on risk triage for my book Creative Project Management, I came across a concept known as the PIVOT score. The elements of PIVOT are:

  • Probability — the likelihood a particular risk event will happen
  • Impact — the consequence of the risk event if it happens
  • Vulnerability — the relationship of the threat to core mission, values, and business objectives
  • Outrage — the expectation (E) of how things should be minus the degree of satisfaction (S) with the way things are
  • Tolerance — the degree of enthusiasm or anger in response to the risk event impact if it happens


Probability (P), impact (I), vulnerability (V), expectation (E), and satisfaction (S) each get a rating between 0 and 3. The formula for outrage (O) is:

O = E – S

And the formula for tolerance (T) is:

T = (P x (I + V))^O


Outrage, as you can see, is hyperbolic: because it enters as an exponent, it has a disproportionate impact on the final PIVOT score. Let’s imagine the following:

An event is moderately unlikely (P = 1), has a very high impact (I = 3), the event relates to our core business objectives (V = 3), but it’s unlikely to get much publicity because people aren’t too surprised when it happens, so E – S is only 1. The PIVOT score is (1 x (3 + 3))^1, or 6.

Now imagine that the impact is actually low, but it’s the sort of thing that will be smeared all over the headlines and every commentator will talk about it (O = 3). The PIVOT score is (1 x (1 + 3))^3, or 64! Even though the actual impact in the first instance is three times that of the second case, the PIVOT score of the less serious impact is more than ten times as high as that of the more serious case.
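The arithmetic in the two scenarios above can be sketched in a few lines of Python. (The function name is mine, and the specific E and S values are illustrative choices that produce the outrage values of 1 and 3 used in the examples.)

```python
def pivot_score(p, i, v, e, s):
    """PIVOT score: T = (P * (I + V)) ** O, where outrage O = E - S.

    Each of P, I, V, E, S is rated on a 0-3 scale.
    """
    o = e - s  # outrage: expectation minus satisfaction
    return (p * (i + v)) ** o

# High-impact but unsurprising event: P=1, I=3, V=3, O = E - S = 1
print(pivot_score(1, 3, 3, e=2, s=1))  # 6

# Low-impact but headline-grabbing event: P=1, I=1, V=3, O = 3
print(pivot_score(1, 1, 3, e=3, s=0))  # 64
```

The exponent is what drives the distortion: with O = 3, even a small base of 4 balloons to 64, swamping the higher-impact case.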

The impact of outrage on risk decisions tends to be disproportionate, especially when the outrage itself is the result of misinformation. Low-impact risks take on catastrophic urgency, while objectively more serious risks barely ripple the waters.

The confirmed death toll in Japan as I write is approaching 10,000, with the likely death toll predicted to top 18,000. Serious by any measure, but not outrageous because — hey, it was a huge tsunami and earthquake. Do you really expect all the safety procedures to be sufficient? Low outrage means not only less obsessive coverage, but also less pressure to improve safety.

The latest IAEA report I can find (March 17) lists a total of 44 injuries and no deaths. The UK Telegraph reports five workers dead, but I can’t confirm that, or whether they are part of or in addition to the 44. The level of relative outrage — expectation minus satisfaction — is off the wall.

Using outrage as the square (or higher power) of risk dramatically distorts decision-making. Do 9,000+ real deaths truly mean less than some uncounted but low number of potential deaths? In risk management practice, it often does. 

Where the outrage is, so goes the money and the effort. This is not always in our best interest.

[Radiation dose chart from http://xkcd.com/radiation/.]

Tuesday, March 15, 2011

Fukushima Number One

As reported in my article "Homer Simpson: Man of the Atom" in Trap Door magazine, I once got to run a nuclear reactor — admittedly, a low-power one used only for training students. This hardly makes me an authority on nuclear power, but I do know something about risk management.

Like many of you, I'm following the evolving Fukushima Dai-ichi Nuclear Power Station story with great interest. I'm a pro-nuclear, safety-conscious environmentalist, if that makes any sense. I think a lot of anti-nuclear sentiment is rooted in emotion rather than analysis, and contains the same anti-science bias that I object to so strongly when practiced by the right wing.

That doesn't make the case for nuclear power a slam dunk by any means. The downsides are obvious and substantial, and the tendency to rely on nuclear power generation to supply plutonium for other purposes has led to what seem to me to be false choices. I'm following with interest the discussion of thorium reactors, and I think the investment we're making in fusion is ridiculously low. That doesn't mean I don't like wind and solar as well. But all forms of power impose risks and costs.

The question in risk management isn't whether a proposed solution has drawbacks (technically known as secondary risks). Most proposed solutions, regardless of the problem under discussion, tend to have secondary risks and consequences.

The three questions about secondary risk that matter are:

  1. How acceptable is the secondary risk? The impact and likelihood of secondary risks can vary greatly. Some secondary risks are no big deal. We accept them and move on. Others are far more serious. A secondary risk can indeed turn out to be much greater than the primary risk would have been.
  2. How manageable is the secondary risk? A secondary risk, like a primary one, may be quite terrible if you don't do anything about it. The key word, of course, is "if." What can be done to manage or reduce the secondary risk? 
  3. How does the secondary risk compare to other options? As I've argued elsewhere, the management difference between "bad" and "worse" is often more important than the difference between good and bad. If the secondary risk of this solution is high, and if you can't do anything meaningful to reduce it, you still have to compare it to your other options, whatever they are.

In the case of nuclear power, the unmitigated secondary risk is unacceptably high. But all that does is demonstrate that the risk needs to be mitigated — reduced to some acceptable level. Ideally, that level is zero, but that may not be possible, and it may not be cost-effective to reduce it beyond a certain point. The leftover risk, whatever it is, is known as residual risk. Residual risk is what we need to worry about. As with secondary risk, the three questions of acceptability, manageability, and comparison help us judge the importance of the residual risk.

We make one set of risk decisions at the outset of the project. We decide which projects we want to do; we decide what overall direction and strategy we will follow; and we decide what resources to supply. All of these decisions are informed by how people perceive the risk choices.

As the project evolves, the risk profile changes. Some things we worry about turn out to be non-issues, and other times we are blindsided with nasty surprises. Our initial risk decisions are seldom completely on target, so they must evolve over time.

When disaster strikes, suspicion automatically and naturally falls on the risk planning process. Were project owners and leaders prudent? Armed with the howitzer of 20-20 hindsight, the fact of what did happen carries a presumption of incompetent planning for those who failed to anticipate it. Sometimes it's a fair judgment. Other times not so much.

I'm still working out what I think about the Fukushima case, but some initial indications strike me as positive when it comes to evaluating the quality of the risk planning. The basic water-cooled design of Fukushima made a Chernobyl outcome impossible. The partial meltdown didn't rupture the containment vessel, and although the cleanup will be messy and expensive, it's not likely to spread outside the immediate area.

The effects of radiation may not be known for some time, but even those have to be put into perspective. Non-nuclear power plants cost lives too, even though you don't hear about those disasters as often. A quick Google search turned up the following:

  • September 2010: Burnsville, Minnesota, explosion, no deaths
  • February 2010: Connecticut, 5 dead
  • February 2009: Milwaukee, 6 burned
  • June 2009: Mississauga, Ontario

And, of course, several thousand people a year die mining coal.