Homeland Security Watch

News and analysis of critical issues in homeland security

April 2, 2009

The Multiple Levels of Risk Management

Filed under: Risk Assessment — by Christopher Bellavita on April 2, 2009

Bob Ross is the Chief of the Risk Sciences Branch in the DHS Science and Technology Directorate.  He knows more about the theory and practice of risk management than anyone I’ve encountered in homeland security.  With his permission, I post his most recent contribution to a continuing exchange we have been having about risk for several years now.  (The usual caveat applies: Ross’s comments are his personal views and are not the official position of DHS or any other agency — at least not yet.)

I think we have to understand “Risk Management” at several different levels.  At the highest level, it is a philosophy of action – a high level strategy if you will – on how to deal with problems and potential problems.  At a much lower level, risk management is a relatively well-defined sequence of specific actions which collectively constitute a management and decision-making cycle.  At this lower level, certain defined analytic practices and related processes have been adopted in some sectors.  Unfortunately, these defined analytic practices have been moved, with insufficient adaptation to fit problems with fundamentally different characteristics, into Homeland Security (an issue/concept which remains inadequately defined).
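
To make that lower-level cycle a bit more concrete, here is a minimal, purely illustrative sketch in Python. It is not a DHS process or an official methodology; the risk names, the scores, and the simple likelihood-times-consequence scoring rule are all invented for illustration.

    from dataclasses import dataclass

    @dataclass
    class Risk:
        name: str
        likelihood: float    # rough chance of occurring this year, 0..1 (hypothetical)
        consequence: float   # rough severity score, arbitrary units (hypothetical)

    def assess(risks):
        """Rank risks by a simple likelihood-times-consequence score."""
        return sorted(risks, key=lambda r: r.likelihood * r.consequence, reverse=True)

    def one_cycle(risks, capacity=2):
        """One pass through the cycle: assess, treat what capacity allows, monitor the rest."""
        ranked = assess(risks)
        return ranked[:capacity], ranked[capacity:]

    portfolio = [
        Risk("vessel collision", 0.10, 8.0),
        Risk("waterfront facility fire", 0.05, 9.0),
        Risk("cargo theft", 0.30, 2.0),
    ]

    treated, monitored = one_cycle(portfolio)
    print("treat this cycle:", [r.name for r in treated])
    print("keep monitoring: ", [r.name for r in monitored])

The particular numbers do not matter; what matters is that the cycle repeats, with the same risks reassessed and managed again in the next period rather than "solved" and forgotten.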

You asked [here] “Is anyone in the enterprise explicitly managing risk in a way intended by the various risk management theories?”  The answer is “Yes” at the component level and “Not So Much” at the higher consolidated Departmental level.  But even at the component level, it has to be understood that the year-to-year flexibility in directing resources is very constrained.  Laws are on the books, programs are in place, etc.  Year-to-year changes are on the margins, not in the main.  Even on the margins, the flexibility allowed under the budget process and the various procedural requirements imposed by Congress and OMB is fairly limited.  But the fact that “risk” is not being explicitly and comprehensively calculated and recalculated every year does not mean that it is not being managed.

Take, for instance, my alma mater, the US Coast Guard, and more specifically the Coast Guard’s Marine Safety program.  The Marine Safety program has a long history.  Its history of managing risk probably predates the term “risk management” by many decades.  The program is well established and is actively and very effectively managing a wide variety of risks every day.  This assertion can be demonstrated: Despite the fact that the sea is a dangerous place, maritime accidents are actually quite rare.  The absence of daily ship collisions, fires and sinkings in large numbers does not mean that there is no risk in that sector.  To the contrary, what it means is that the existing risk management efforts in that sector are effective.

Some have looked at the absence of such accidents as evidence that whatever problems there had been in the maritime realm have been solved “once and for all,” and that risk management resources devoted to that sector could be redirected to other, more recently experienced (and therefore more visible) risks.  But redirecting attention from a big but managed risk to a small but unmanaged risk is far worse than being “penny wise and pound foolish.”  The real point here is that there is a vast difference between solving a problem “once and for all” and managing an enduring risk over time.  Solving a problem is a one-time shot.  While specific risk management actions may evolve over time, the need to manage an enduring risk lasts as long as the risk remains.  The problem-solving and risk-management mindsets are very different.

Based on my experience, there are three main issues underlying most of the complaints about “risk management” in the Department of Homeland Security and in the larger national homeland security enterprise.

The first of these is that there have been horribly unrealistic expectations about the kinds of information and answers that risk analysis will give us.  Too many commentators seem to be under the mistaken impression that, if we can just find the right risk assessment or magic formula, all questions will be answered and the correct path will be laid out clearly before us.  This belief is attractive both to political leaders who are looking for absolution in the event of an adverse outcome (“I did what the risk analysts, their models and their formulas told me to do.  If my decision was wrong, it’s their fault.”) and to hungry consultants seeking to sell their snake oil (“Hire me.  I have the risk assessment methodology that will answer all your questions.”).

Closely related to this first issue has been the failure of risk analysis practitioners to recognize that homeland security problems possess characteristics that are fundamentally different from the kinds of problems for which the existing body of risk analytic methodologies was developed.  Applied without necessary adaptation, these older analytic methodologies, quite simply, failed to deliver on their advocates’ promises.  Fortunately, this is increasingly being recognized in the academic and professional risk analytic communities, and new methods and new ways of thinking are being developed.  One of the biggest of these analytic challenges is to develop a credible way to compare risks across multiple threats/hazards, multiple agencies and departments and multiple levels of government, as well as across the public-private divide.  And, of course, we are trying to address this analytic problem while not yet having a clear understanding of what the term “homeland security” really means.
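
To illustrate why that comparison is hard, here is a hedged sketch using the commonly cited Risk = Threat x Vulnerability x Consequence framing. Every hazard, number and unit below is invented; the point is only that a single ranking depends entirely on how incommensurate consequences (lives, dollars, mission impact) get normalized.

    # Purely illustrative cross-hazard comparison. The framing is the commonly
    # cited Risk = Threat x Vulnerability x Consequence; all values are invented.
    hazards = {
        # name: (annual threat likelihood, vulnerability 0..1, consequence in $ billions)
        "major hurricane":        (0.20, 0.7, 50.0),
        "port cyber intrusion":   (0.40, 0.5, 5.0),
        "vessel-borne explosive": (0.01, 0.3, 20.0),
    }

    def risk_score(threat, vulnerability, consequence):
        return threat * vulnerability * consequence

    ranked = sorted(hazards.items(), key=lambda kv: risk_score(*kv[1]), reverse=True)

    for name, tvc in ranked:
        print(f"{name:24s} relative risk ~ {risk_score(*tvc):6.2f}")

Change the consequence units from dollars to lives, or weight low-probability, high-consequence events differently, and the ranking can flip, which is precisely why a credible cross-hazard, cross-agency comparison remains an open analytic challenge.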

The final main issue is, as discussed earlier, very unrealistic expectations about the impact that risk analyses will have on year-to-year allocation and reallocation of resources across various programs.  The very last thing we need is an annual risk-based budget exercise in which the entire budget is reviewed.  Jimmy Carter tried something like that with his so-called Zero Base Budgeting and it was a total failure.  Far too much effort was put into generating support for “decisions” where the decision-maker had neither the time to consider all the details nor the latitude to act on anything other than the margins.  We don’t need to spend millions in analysis to support a decision over hundreds of thousands of dollars.  We also don’t need to spend millions to generate decision-support information where there is no latitude given to the decision-maker.

–  Robert G. Ross

2 Comments »

Comment by William R. Cumming

April 2, 2009 @ 5:20 pm

I would argue that the Coasties failed to risk manage their largest risk, specifically the “Deepwater Program.”  Without its completion and successful implementation, the Coasties will be asked to do things like break ice in the Arctic with their bare hands.  Not very effective.

Pingback by Homeland Security Watch » Other views from PPD 8: planning scenarios, capabilities and outputs

April 13, 2011 @ 11:09 am

[…] I am reprinting (below) what they wrote.  The first was written by Bob Ross, the Chief of the Risk Sciences Branch in DHS.  Please note, his views are his own and are not meant to reflect the views of DHS or anyone else other than Bob.  Homeland Security Watch has published his work before (see this link). […]
