The Risk of Statistical Risk

June 28, 2008 · Filed Under Conferences, Estimating, Risk, Thoughts

I was at the SCEA (Society of Cost Estimating and Analysis) conference this week.  Some of the buzz was about risk, both in talks given at the conference and in the ongoing risk arguments.  For several years the risk gurus have been lining up to show how to do more robust risk analysis.  While I would not say they are getting carried away, I do get concerned by the differences of opinion and the numerous options offered by smart people.

One of my heroes in risk, Dr. Steve Book of MCR, points out that risk analysis should include correlation.

One of my other risk heroes, Evin Stump (of Galorath), points out that defining correlation properly for a work breakdown structure of any size can involve hundreds of thousands of correlation entries: a WBS with n elements has n(n-1)/2 pairwise correlations, so a 500-element WBS has 124,750 correlations and a 1,000-element WBS has 499,500.  Dr. Book doesn't dwell on that, but he does say "use 0.2," which sidesteps the hundreds-of-thousands-of-correlations issue.  According to Evin, though, that doesn't provide more accurate risk analysis.  Evin points out that "if two or more risky items are not statistically independent, a Monte Carlo simulation that fails to account for their correlation will underestimate their combined risk."  He then asks, "what if you overestimate correlation?"  Hmm, could a blanket 0.2 correlation overestimate risk for some systems?  Evin also points out how difficult it is to actually determine correlation.  For example, what is the correlation between a light bulb and the light bulb's on/off switch?  Probably near zero, but most people would be tempted to assign a high correlation.
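Evin's underestimation point is easy to demonstrate yourself.  Here is a minimal Python sketch (the element count, median costs, and lognormal spread are illustrative assumptions of mine, not values from SEER, MCR, or anyone's talk) that counts the pairwise correlations and then compares the 80th percentile of a summed cost with and without a uniform 0.2 correlation:

```python
import numpy as np

rng = np.random.default_rng(42)

# A WBS with n elements has n*(n-1)/2 pairwise correlations.
for n_wbs in (500, 1000):
    print(n_wbs, "elements ->", n_wbs * (n_wbs - 1) // 2, "correlations")

# Illustration: 20 cost elements with lognormal uncertainty (assumed values).
n, trials, rho = 20, 100_000, 0.2

# Correlation matrix: 1.0 on the diagonal, rho everywhere else.
corr = np.full((n, n), rho)
np.fill_diagonal(corr, 1.0)
chol = np.linalg.cholesky(corr)

z_indep = rng.standard_normal((trials, n))
z_corr = z_indep @ chol.T          # same marginals, pairwise correlation rho

# Lognormal element costs: median 100 each, 0.3 log-sigma (assumptions).
total_indep = np.exp(np.log(100) + 0.3 * z_indep).sum(axis=1)
total_corr = np.exp(np.log(100) + 0.3 * z_corr).sum(axis=1)

for label, totals in (("independent", total_indep), ("rho = 0.2", total_corr)):
    print(f"{label:12s} P50 = {np.percentile(totals, 50):7.0f}   "
          f"P80 = {np.percentile(totals, 80):7.0f}")
```

The expected total is identical in both runs; only the spread changes, which is why the P50 barely moves while the P80 climbs once correlation is in play.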

Chris Hutchings (of Galorath), another risk guru, in his talk "Risk Is Not a 4-Letter Word," does a nice job of explaining risk and why the risk handling inside SEER's internal Monte Carlo analysis is sufficient.  And at least it is understandable.  A risk guru (an Air Force risk PhD) reviewed the SEER concept when it was first developed.  His conclusion: "you could spend a lot more time on risk, but the answer wouldn't be any more accurate.  What you did is more than adequate for an estimate."

The most "interesting" talk on risk claimed that if you run a risk analysis at the 80% level you are really getting 90% probability, and that to get a true 80% you need to run the risk at 70%.  Try explaining that to management.

SEER does risk internally, either with a quick approximation or with a full Monte Carlo analysis.  It also integrates with standalone risk tools, so people can apply whatever risk method they would like.  I find it somewhat amusing when some people criticize the SEER internal risk approximation.  I once showed a risk guru how the SEER approximation was within a few percent of a full-blown Monte Carlo.  The risk guru grudgingly admitted this but then said it just isn't derived in the right way.  Of course, computing it the "right way" would take many seconds of compute time every time an input was made, making SEER frustrating to use.  As a software guy by background, I am always thrilled when we can provide useful results within key performance constraints.
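SEER's internal approximation is proprietary and I won't reproduce it here, but the general trade-off is easy to illustrate.  The sketch below, using made-up triangular cost elements, compares one common fast technique, fitting a normal to the summed means and variances (method of moments), against a full Monte Carlo; for independent elements the two typically agree to within a fraction of a percent at the 80th percentile, at a tiny fraction of the compute cost:

```python
import numpy as np

rng = np.random.default_rng(7)

# Illustrative WBS: 30 elements, each a triangular(low, mode, high) cost.
# All numbers here are made up for the demo.
lows = rng.uniform(50, 80, 30)
modes = lows * rng.uniform(1.1, 1.4, 30)
highs = modes * rng.uniform(1.2, 1.8, 30)

# Closed-form mean and variance of a triangular(a, b, c) distribution.
means = (lows + modes + highs) / 3
variances = (lows**2 + modes**2 + highs**2
             - lows * modes - lows * highs - modes * highs) / 18

# Fast approximation: treat the total as normal (method of moments / CLT)
# and read the 80th percentile analytically.  z(0.80) ~= 0.8416.
mu, sigma = means.sum(), np.sqrt(variances.sum())
p80_approx = mu + 0.8416 * sigma

# Full Monte Carlo on the same elements for comparison.
totals = rng.triangular(lows, modes, highs, size=(100_000, 30)).sum(axis=1)
p80_mc = np.percentile(totals, 80)

print(f"P80 approximation = {p80_approx:,.0f}")
print(f"P80 Monte Carlo   = {p80_mc:,.0f}")
print(f"difference        = {100 * abs(p80_approx - p80_mc) / p80_mc:.2f}%")
```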

I think we would do better to provide more meaningful information to management and other stakeholders than to spend our time arguing about the "best" risk methods.  The cause-and-effect analysis suggested by Evin Stump (see Evin's comment on this blog entry) and the risk-register tie-in suggested by Chris Hutchings are certainly great steps forward.

