Homeland Security Watch

News and analysis of critical issues in homeland security

May 12, 2010

The Big Ask

Filed under: General Homeland Security, Intelligence and Info-Sharing, Technology for HLS — by Mark Chubb on May 12, 2010

Tomorrow afternoon, I am scheduled to participate in a panel discussion on crisis management and technology at Portland State University’s Mark O. Hatfield School of Government. The event, sponsored by the campus chapter of Pi Sigma Alpha, the political science honor society, asks what role technology can or should play in helping us respond to 21st-century crises.

The organizers tell me their focus remains squarely on crisis management, not technology. The question in their minds is not whether technology has a place in managing crises, but how we should define that place. How, they wonder, will we know whether or not technology is helping us? From a practitioner’s perspective, this struck me as a very good question, and one that does not get asked often enough.

From where I sit, crisis management succeeds or fails on how well leaders manage its four phases, which I define as:

  • Awareness
  • Ambiguity
  • Adaptation
  • Accountability

Awareness involves signal detection, which in turn depends upon the salience of signals to those responsible for detecting and responding to them. Technology can improve signal-to-noise ratios, but it may dull the sense of salience as people become overwhelmed by inputs, especially if those responsible for designing or operating the system lack contextual intelligence (see Nye 2008).
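
To make this trade-off concrete, here is a minimal sketch of my own (not from the panel or the post; the reports, scores, and threshold are invented for illustration) showing how an automated filter can raise the signal-to-noise ratio of what operators see while silently discarding a weak but salient signal:

    # Illustrative sketch: a naive alert filter that scores incoming
    # reports and discards anything below a fixed threshold. Raising the
    # threshold improves the signal-to-noise ratio of what operators see,
    # but a weak signal that someone with contextual intelligence would
    # recognize as salient is lost along with the noise.

    reports = [
        {"text": "routine traffic stop", "score": 0.2},
        {"text": "transformer fire downtown", "score": 0.9},
        {"text": "odd chemical smell near a school", "score": 0.3},  # weak but salient
        {"text": "noise complaint", "score": 0.1},
    ]

    THRESHOLD = 0.5  # hypothetical cutoff chosen by the system designer

    def filter_alerts(reports, threshold):
        """Keep only the reports scoring at or above the threshold."""
        return [r for r in reports if r["score"] >= threshold]

    for r in filter_alerts(reports, THRESHOLD):
        print(r["text"])
    # Only the transformer fire survives. The chemical smell, which a
    # contextually aware operator might flag as the real crisis signal,
    # is filtered out with the noise.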

Ambiguity, not uncertainty, is the dominant feature of complex systems and their relationships with their environments, and never more so than when these systems are in crisis. Successful decision-making in crisis situations depends not so much on the ability to gather information, or even to organize it, as on seeing the meaning or patterns hidden within it. Humans remain far better than cybersystems at reconciling the relevance of inconsistent, incomplete, competing, and even conflicting information. Ensuring that such systems support the strengths of the people responsible for making decisions, rather than using them to overcome weaknesses, seems to me an essential step in preventing these systems from compounding rather than correcting our problems.

Most crises are adaptive, not technical, challenges (Heifetz & Laurie 2001). Although many crises present us with problems that require technological assistance, their hallmark remains the need to see our relationship with the problem and its environment differently from the way we did before our situation became apparent. Dietrich Dörner (1996) demonstrated that most of our problems managing adaptive challenges arise not from their scope or scale so much as from our inability to see them as complex webs of interdependent variables that interact in subtle but important ways. His experiments demonstrate that we are particularly ill-equipped to manage situations in which these interactions produce exponential rather than quasi-steady changes in the situation. He further concludes that, when confronted with such problems, we have an altogether too predictable tendency to direct our attention in ways that are either too narrow and fixed or too broad and fleeting to do much good. Adaptive challenges, then, require us to keep the big picture in perspective and to engage others in its management. This is not something that cybersystems necessarily help us do better, as they engage people with a representation of the problem, not its essential elements.
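
Dörner’s point about exponential change can be shown in a few lines of arithmetic. The sketch below is mine, not a reproduction of his experiments, and the numbers are invented: a decision-maker who projects the most recent increment forward in a straight line badly underestimates a process that is actually doubling.

    # Illustrative sketch: linear extrapolation from recent observations
    # badly underestimates an exponentially growing process.

    observed = [1, 2, 4, 8, 16]  # hypothetical counts doubling each step

    # Naive linear forecast: project the latest increment forward.
    increment = observed[-1] - observed[-2]                    # 16 - 8 = 8
    linear_forecast = [observed[-1] + increment * k for k in range(1, 4)]

    # Actual exponential continuation (doubling each step).
    actual = [observed[-1] * 2 ** k for k in range(1, 4)]

    print("linear forecast:", linear_forecast)   # [24, 32, 40]
    print("actual growth:  ", actual)            # [32, 64, 128]
    # Three steps out, the actual value is already more than three times
    # the linear forecast, and the gap widens at every step.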

In the end, every crisis demands an accounting of what went wrong and, if we are truly honest and maybe a bit lucky, what went right as well. Such judgments are inherently subjective, just as their conclusions are (or should be) intensely personal. Getting people to accept responsibility, learn from their experiences, and take steps to strengthen the relationships they depend upon to resolve crises is an innately human process. Cybersystems may help us engage one another over great distances in real time and keep records of our interactions, but they do not necessarily clarify our intentions or make it any easier for us to acknowledge the hard lessons we must learn if we are to grow.

Despite my concerns, I remain optimistic that technology can help us improve the effectiveness, if not the efficiency, of crisis interventions. But only if we do not ask too much of it or too little of ourselves along the way.

References:

DÖRNER, D. (1996). The Logic of Failure. New York: Basic Books.

HEIFETZ, RA & LAURIE, DL (2001). The Work of Leadership. Harvard Business Review.

NYE, Jr., JS (2008). The Powers to Lead. New York: Oxford University Press.


2 Comments

Comment by William R. Cumming

May 12, 2010 @ 10:22 am

This is a helpful post by Mark, as always. I think modern crisis management certainly is linked to technology, and each crisis manager needs to know the status of the crisis decision-making support systems that will be available. For example, GIS and interactive, interoperable communications: with whom and for what.

Comment by Philip J. Palin

May 12, 2010 @ 5:55 pm

Simulation and modeling of crises is an aspect of technology that I have seen be very helpful. When a simulation provides a low-risk but very realistic engagement with complexity, it can allow participants to recognize the value of the adaptability Mark outlines. In some cases I have seen participants return again and again to the simulation in order to explore and fine-tune their adaptive skills.
