The Risk of Statistical Risk

June 28, 2008 · Filed Under Conferences, Estimating, Risk, Thoughts

I was at the SCEA (Society of Cost Estimating and Analysis) conference this week. Some of the buzz was about risk, both in the talks given at the conference and in the ongoing risk arguments. For several years the risk gurus have been lining up to show how to do more robust risk analysis. While I would not say they are getting carried away, I do get concerned about the differences of opinion and the numerous options offered by smart people.

One of my heroes in risk, Dr. Steve Book of MCR, points out that risk analysis should include correlation.

One of my other risk heroes, Evin Stump (of Galorath), points out that defining correlation properly for a work breakdown structure of any size can involve hundreds of thousands of correlation entries. For example, a 500-element WBS has 124,750 pairwise correlations and a 1000-element WBS has 499,500. Dr. Book doesn’t dwell on that, but he does say “use 0.2,” which solves the hundreds-of-thousands-of-correlations issue. But according to Evin that doesn’t provide more accurate risk analysis. Evin points out that “if two or more risky items are not statistically independent, a Monte Carlo simulation that fails to account for their correlation will underestimate their combined risk.” He then asks, “what if you overestimate correlation?” Hmmm, could it be that 0.2 correlations overestimate the risk of some systems? Evin also points out how difficult it is to actually determine correlation. For example, what is the correlation between a light bulb and the switch that turns it on and off? Probably near zero, but most people would be tempted to assign a high correlation.
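To make the arithmetic concrete, here is a back-of-the-envelope sketch in Python. It is nothing from SEER or MCR, just illustrative numbers I made up: it counts the unique pairwise correlations an n-element WBS would need, then runs a toy Monte Carlo on a handful of lognormal cost elements to show that ignoring a modest positive correlation understates the spread of the total.

```python
import numpy as np

def correlation_pairs(n_elements: int) -> int:
    """Unique pairwise correlations needed for an n-element WBS: n(n-1)/2."""
    return n_elements * (n_elements - 1) // 2

print(correlation_pairs(500))    # 124,750
print(correlation_pairs(1000))   # 499,500

# Toy Monte Carlo: 30 cost elements with lognormal uncertainty, summed assuming
# rho = 0 (independence) versus rho = 0.2 (the suggested default).
rng = np.random.default_rng(1)
n_elem, n_trials = 30, 100_000

def total_cost_samples(rho: float) -> np.ndarray:
    cov = np.full((n_elem, n_elem), rho) + (1.0 - rho) * np.eye(n_elem)
    z = rng.multivariate_normal(np.zeros(n_elem), cov, size=n_trials)
    element_costs = np.exp(0.3 * z)          # median 1.0 per element, ~30% sigma
    return element_costs.sum(axis=1)

for rho in (0.0, 0.2):
    total = total_cost_samples(rho)
    print(f"rho={rho}: 80th-percentile total = {np.percentile(total, 80):.2f}")
```

With independence the 80th-percentile total sits much closer to the mean than it does with rho = 0.2, which is Evin’s point: leave real correlation out and the Monte Carlo looks more certain than it should.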

Chris Hutchings (of Galorath), another risk guru, in his talk “Risk Is Not a 4 Letter Word,” does a nice job of explaining risk and why the risk handling inside SEER’s internal Monte Carlo analysis is sufficient. And at least it is understandable. A risk guru (an Air Force risk PhD) reviewed the SEER concept when it was first developed. The conclusion: “You could spend a lot more time on risk, but the answer wouldn’t be any more accurate. What you did is more than adequate for an estimate.”

The most “interesting” talk on risk pointed out that if you run an 80% risk analysis you are really getting a 90% probability, and to get a true 80% you need to run the analysis at 70% probability. Try explaining that to management.

SEER does risk internally, either with a quick approximation or with a full Monte Carlo analysis. It also integrates with standalone risk tools, so people can use whatever risk method they would like. I find it somewhat amusing when some people criticize the SEER internal risk approximation. I once showed a risk guru how the SEER approximation was within a few percent of a full-blown Monte Carlo. The risk guru begrudgingly admitted this, but then said it just isn’t derived in the “right” way. Of course, computing it the “right” way would take many seconds of compute time each time an input was made, making SEER frustrating to use. As a software guy by background, I am always thrilled when we can provide useful results within key performance constraints.
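For what it’s worth, the “approximation versus full Monte Carlo” trade is easy to demonstrate in miniature. The sketch below is my own illustration, not SEER’s actual algorithm: it rolls up a hypothetical 40-element WBS of triangular distributions by summing means and variances and treating the total as roughly normal (the quick approximation), then compares that 80th percentile against a brute-force Monte Carlo.

```python
import numpy as np

rng = np.random.default_rng(7)
low, mode, high = 8.0, 10.0, 15.0      # hypothetical triangular spread per element
n_elem, n_trials = 40, 200_000

# Quick approximation: sum element means and variances, treat the total as
# roughly normal (central limit theorem), and read off the 80th percentile
# (z ~ 0.8416 for the 80th percentile of a standard normal).
mean_i = (low + mode + high) / 3
var_i = (low**2 + mode**2 + high**2 - low*mode - low*high - mode*high) / 18
approx_p80 = n_elem * mean_i + 0.8416 * np.sqrt(n_elem * var_i)

# Full Monte Carlo for comparison.
samples = rng.triangular(low, mode, high, size=(n_trials, n_elem))
mc_p80 = np.percentile(samples.sum(axis=1), 80)

print(f"approximation: {approx_p80:.1f}    Monte Carlo: {mc_p80:.1f}")
```

On independent elements the two numbers land within about a percent of each other, and the approximation is effectively instantaneous, which is the performance point: the extra seconds of simulation buy very little extra accuracy at estimation time.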

I think we would do better to provide more meaningful information to management and other stakeholders than to spend our time arguing about the “best” risk methods. The cause-and-effect analysis suggested by Evin Stump (see Evin’s comment on this blog entry) and the risk register tie-in suggested by Chris Hutchings are certainly great steps forward.


Comments

4 Responses to “The Risk of Statistical Risk”

  1. Evin on June 30th, 2008 5:35 pm

    More on the overrun crisis
    Evin 29 July 2008

    In my local paper this morning there is an article by Phillip Taubman of the New York Times News Service. It’s titled “Pentagon frets as engineers shun military.” The sub-title reads “A lack of skilled personnel fuels cost overruns and missteps in high tech projects.”

    The article notes that “even as spending on new military projects has reached the highest level since the Reagan years, the Pentagon has increasingly been losing the people most skilled at managing them.” The article blames this as a major reason for the current rash of overruns.

    In the Air Force, according to the article, the number of civilian and uniformed engineers on the core acquisition staff has fallen 35 to 40 percent over the last 14 years. This in the face of Pentagon plans to spend $900 billion on development and procurement in the next five years, including $335 billion on major new systems. Quite a brain drain.

    The article reports that Carl Levin, chair of the Senate Armed Services Committee, said that cost overruns on military projects have “reached crisis proportions.” He has called for creation of an internal Pentagon office to oversee costs (does that mean that nobody is overseeing them now?). The article further reports that a recent GAO study of 95 military projects worth $1.6 trillion found cost overruns of $295 billion, or 40 percent, and an average delay of 21 months. Getting much of the blame is deficient engineering management. (Forty percent? The average used to be around 30%.)

    According to the article, a retired guy named Dr. Paul G. Kaminski is leading a high level task force, visiting university campuses and military contractors, to push for better engineering management. His task force was organized by the National Research Council, an arm of the National Academy of Engineering. It is working with something called the Air Force Studies Board.

    A report from Kaminski’s group scolds the Air Force for “haphazardly handling, or simply ignoring, several basic systems engineering steps,” among them:

    • Considering alternative concepts before plunging ahead with a program
    • Setting clear performance goals for a new system
    • Analyzing interactions between technologies

    Here are some program-specific criticisms:

    • In a satellite system designed to detect foreign missile launches, the design calls for two sensors that cannot operate simultaneously on the same spacecraft without costly shielding to prevent electromagnetic interference from one disabling the other.
    • In Future Combat Systems, development began before performance requirements were resolved.
    • The Pentagon started building a complex network of communications satellites without a coherent plan for integration with an existing system or a consistent set of requirements to accommodate the needs of the four military services.

    May I offer some comments on all of this?

    Those sold on Monte Carlo-type techniques, even with correlation, cannot begin to deal with a general failure of this magnitude, even if anyone could estimate correlations properly, which they cannot.

    It’s time to quit playing Monte Carlo and correlation math games and get down to serious cause and effect analysis.

  2. Mohan Radhakrishnan on April 5th, 2012 10:14 pm

    What is the best way to read about Monte Carlo methods of risk management? Would you recommend a book or some material?

    Thanks.

  3. galorath on April 10th, 2012 12:33 pm

    Recommended book: Applied Risk Analysis: Moving Beyond Uncertainty in Business (Wiley Finance) by Johnathan Mun.

  4. Mohan Radhakrishnan on April 18th, 2012 12:25 am

    Amazing, considering that our PMs use traffic lights to quantify risk :-)
