Homeland Security Watch

News and analysis of critical issues in homeland security

November 4, 2012

Supply and Demand in Disasters

Above: Truck rack for loading product to tanker truck

The fuel crisis in New York City, Westchester County, Long Island, northern New Jersey, and nearby areas is important.  Obviously it is important to the residents of those areas.  Less obviously, it is important to those of us who are involved in homeland security policy and strategy.

I have continued to aggregate fuel-related stories to the Friday post below.

In Sandy's wake, supply has not met demand.  Not unreasonably, policy makers and strategists have viewed this as a lack of supply, and significant steps have been taken to increase it.  Senator Schumer pushed the US Coast Guard to reopen the ports of New York and New Jersey to fuel deliveries.  Secretary Napolitano waived the Jones Act, allowing foreign-flagged ships to deliver fuel into the ports.  President Obama ordered the military to deliver fuel into the hardest-hit areas.

All of these steps have increased supply to the mid-Atlantic and served to suppress price increases.   Many far removed from the New York metro area are benefiting from gasoline price reductions related to these steps to increase supply.  It has been a vigorous response.

It is not, however, targeted at the present problem.  Supply itself was never the problem. There are two fundamental problems:

The fuel distribution terminals have been damaged and have not had electricity. South and east of Newark Airport, and just west and north of Staten Island, is a handful of places where pipelines and tankers deliver gasoline (Google Map).  All of these venues lost power.  None of them were on the utility's priority restoration lists.  The utility — and most policy-makers and strategists — did not know the role, or even the existence, of these places.   These are the sites where tanker trucks pull into truck racks and gasoline is pumped from storage tanks, blended, and loaded onto the trucks, which then proceed to gas stations.   There has been no electricity to operate the truck racks, and that is a fundamental problem.  There are other problems with debris removal, personnel, damage to the storage tanks, and communications as to which gas stations have power, but these have not been the most serious impediments.

Two-thirds (or more) of gas stations have not had electricity to run their pumps and otherwise transact business. Many gas stations  have plenty of gasoline, but do not have electricity to pump that gas.   Why, you might ask, do gas stations not have back-up generators to pump their gas?  This is required in Florida and, maybe (?), Louisiana.  It has been successfully resisted in most other jurisdictions partly because  it would further diminish the number of independent operators and enhance the market dominance of chains.   Most gas stations would lose money on gasoline sales alone and make their (very small) profits on selling salty and sugary snacks, soda pop, beer, and cigarettes.  The capital and personnel requirements for purchasing and safely maintaining a generator for conducting sustainable commerce — not just pumping gas — are significant especially for the smaller independent operator.

There is a range of policy and strategy options to address these fundamental problems.  The next two weeks are the right time for New Jersey, New York, Connecticut, and others to actively and inclusively consider these options.

It is also my impression — though I don't have sufficient evidence to prove it — that from Tuesday morning to Thursday afternoon/evening, these fundamentals were not being communicated to Governors Christie and Cuomo, Mayor Bloomberg, and other senior policy makers and strategists.  As a result, considerable energy, time, and effort were expended on measures that were peripheral to the actual problem and may have distracted from resolving the truck rack problem identified above.  This, too, is an issue worth considering while memories are fresh and more accurate after-action outcomes can be specified.

To be explicit:  There is absolutely no evidence of anyone being negligent or passive (quite the contrary).  There is evidence that a crisis, as usual, has exposed aspects of reality that now deserve sustained and thoughtful attention.

August 28, 2012

Managing the Insider Threat: a book review

Filed under: Infrastructure Protection,Private Sector — by Christopher Bellavita on August 28, 2012

Today’s post was written by Nadav Morag. Morag is a faculty member at the Naval Postgraduate School’s Center for Homeland Defense and Security.

Managing the Insider Threat: No Dark Corners — a book by Nick Catrantzos (who sometimes writes for Homeland Security Watch) — is a welcome contribution to the study of insider threats: the dangers posed by individuals who have legitimate entrée to trusted information and access to systems within institutions or infrastructures.

According to a study carried out by Cisco, 39 percent of IT professionals surveyed were more concerned about insider threats than about external hackers. Disgruntled employees, those recruited by outsiders, and those who purposefully infiltrate an organization pose a serious threat to companies, the economy, and national infrastructures.

Catrantzos's book fills an important niche, bringing together the various aspects of this phenomenon in a way that others have not previously done. Studies exist that focus on particular aspects of the phenomenon, such as the mindset and motivations of individuals who become insider threats, or technical solutions to enhance information security; but prior to the publication of Managing the Insider Threat, the field lacked a comprehensive volume that addressed all aspects of the issue.

Happily, Catrantzos has rectified this problem and his work looks not only at new research into the insider threat phenomenon but also at the key players that impact the degree to which this problem can be mitigated or, failing that, managed. In addition, Catrantzos looks at best practices in the area of background investigations, detecting deception and the legal tools and pitfalls involved in coping with insider threats. Finally, the book looks at categories of insider threats, from existential ones to those that can lead to individual workplace violence or individual acts of embezzlement. The book also includes, in the appendices, some very interesting findings from a Delphi survey of managers on the insider threat issue and their respective perceptions of it.

In addition to providing a very comprehensive and inclusive overview of the different facets of the problem, Managing the Insider Threat also offers very practical recommendations for mitigating the various facets of the insider threat phenomenon. From questions for online and classroom discussion (with an answer guide), to exercises for group projects, to checklists for managers trying to gauge and cope with threats, Catrantzos has created a volume that will be incredibly useful for students studying the problem and for managers and consultants requiring a strategy and specific policies to cope with this increasingly destructive phenomenon.

Managing the Insider Threat: No Dark Corners is a book that is just as academically relevant as it is practitioner-relevant. The book is superbly organized, clearly written and provides excellent analysis, while also being very readable.

August 16, 2012

Near-misses, mitigation, and resilience

Filed under: Catastrophes,Infrastructure Protection,Preparedness and Response,Risk Assessment — by Philip J. Palin on August 16, 2012

A giant tulip poplar fell in our yard.   Its girth was nearly twice my reach.  A storm uprooted and deposited it precisely parallel to our house, about eight feet from the west wall.  If it had fallen east, or at almost any other angle, it would have caused significant damage.

This happened two years ago. There are several smaller trees just as close to our house.  There is an even larger oak towering over the northwest corner. I have done nothing to mitigate the risk.

There is a program at Wharton that specializes in near-misses.  In 2008  the Wharton researchers added two new layers to the bottom of a pre-existing Safety Pyramid and renamed it the “Risk Pyramid.”  The two new layers are:

  1. Foreshadowing Events and Observations.
  2. Positive Illusions, Unsafe Conditions and Unobserved Problems – Unawareness, Ignorance, Complacency

(From Assessment of Catastrophic Risk and Potential Losses in Industry, by Kleindorfer, Oktem, Pariyani, and Seider, 2012)

I am not unaware or ignorant of the risk.  I have observed the risk.  I don’t hold positive illusions regarding the risk.   I have observed near-misses and I recognize them as foreshadowing events.  But I am complacent.

Why am I complacent?

According to Alan Berger et al., there are  "Five Neglects" common in risk management:

1. Probability neglect – people sometimes don’t consider the probability of the occurrence of an outcome, but focus on the consequences only.

2. Consequence neglect – just like probability neglect, sometimes individuals neglect the magnitude of outcomes.

3. Statistical neglect – instead of subjectively assessing small probabilities and continuously updating them, people choose to use rules-of-thumb (if any heuristics), which can introduce systematic biases in their decisions.

4. Solution neglect – choosing an optimal solution is not possible when one fails to consider all of the solutions.

5. External risk neglect – in making decisions, individuals or groups often consider the cost/benefits of decisions only for themselves, without including externalities, sometimes leading to significant negative outcomes for others.

Some of these factors influence my complacency — especially consequence neglect — but my inaction is mostly a matter of avoiding near-term costs.   It will certainly cost me money, time, and several beautiful trees (all current sources of enjoyment) in order to mitigate the uncertain, if very likely, future loss of (more) money, time and one or more fallen trees.
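To see how these neglects play out, it helps to put rough numbers on my tree problem.  The figures below are purely hypothetical, a minimal sketch of the expected-cost comparison that probability neglect and consequence neglect keep us from ever running:

```python
# Hypothetical numbers only -- an illustrative expected-cost comparison,
# not an actual estimate for any particular tree or house.

cost_to_remove_trees = 4_000      # near-term, certain cost (tree work, lost shade)
annual_prob_of_strike = 0.05      # assumed chance per year a large tree hits the house
damage_if_struck = 60_000         # assumed repair cost if it does
years_considered = 20             # planning horizon

expected_loss_if_we_wait = annual_prob_of_strike * damage_if_struck * years_considered

print(f"Certain cost of mitigation now: ${cost_to_remove_trees:,}")
print(f"Expected loss over {years_considered} years:   ${expected_loss_if_we_wait:,.0f}")

# Probability neglect: we fixate on damage_if_struck and ignore annual_prob_of_strike.
# Consequence neglect: we fixate on the small annual chance and ignore the repair bill.
# Either neglect makes the comparison above impossible to run at all.
```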

To overcome these neglects and short-term thinking, scholars at the Wharton School of Business have identified an eight step process:

Step 1 Identification and recognition of a near-miss

Step 2 Disclosure (reporting) of the identified information/incident

Step 3 Prioritization and classification of information for future actions

Step 4 Distribution of the information to proper channels

Step 5 Analyzing causes of the problem

Step 6 Identifying solutions (remedial actions)

Step 7 Dissemination of actions to the implementers and general information to a broader group for their knowledge

Step 8 Resolution of all open actions and review of system checks and balances

I have done everything except Steps 3, 7, and 8.  In other words, I have done everything except make an explicit decision regarding priority and implementation.  I am kicking the can.  I am procrastinating.  I am not actively choosing; I am passively choosing to accept the consequences of inaction.
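A minimal sketch, nothing more than the checklist above expressed in code, of how one might track which of the eight steps have actually been closed for a given near-miss.  The step names are paraphrased, and the status flags reflect my own self-assessment:

```python
# Hypothetical tracker for the eight near-miss steps described above.
# Step names are paraphrased; "done" flags mirror the self-assessment in the text.

NEAR_MISS_STEPS = {
    1: ("Identify and recognize the near-miss", True),
    2: ("Disclose / report the incident", True),
    3: ("Prioritize and classify for future action", False),
    4: ("Distribute the information to proper channels", True),
    5: ("Analyze causes of the problem", True),
    6: ("Identify solutions (remedial actions)", True),
    7: ("Disseminate actions to the implementers", False),
    8: ("Resolve open actions and review checks and balances", False),
}

open_steps = [f"Step {n}: {name}" for n, (name, done) in NEAR_MISS_STEPS.items() if not done]

if open_steps:
    print("Near-miss is NOT closed out. Open steps:")
    for step in open_steps:
        print("  -", step)
```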

This is not just a personal problem.  This is at the core of many organizational, even national problems; even in the best organizations, even in the best nations.

Embedded in the links above are entirely reasonable recommendations regarding management processes to overcome this recurring problem.   Mostly it comes down to variations on creative nagging.  We use data to nag, processes to nag, required reporting to nag. We schedule meetings mostly as an elaborate way to nag. Laws and regulations nag… and throw in some threats for good measure.  By writing this blog I’m nagging myself to take action.

As a colleague says, “Humans typically talk and talk and talk, and if they keep talking about something long enough they will actually do something about it.”

Resilience is enhanced by taking personal responsibility for recognizing and mitigating risks.   Resilience is reduced by inattention, denial, lack of communication, and inaction.   Ignoring near-misses increases the likelihood — and often the scope — of future loss.

What about other near-misses:  floods, wildfires, earthquakes, power outages, communications failures, supply chain complications, and more?  When are these stress events one-offs and when are they pieces of a pattern?   When does an infrequent risk deserve sustained attention and action?

How about this:  When a key asset (such as your home) is catastrophically vulnerable to a demonstrably recurring event (such as high winds)  and this vulnerability is amplified by a specific threat (such as a giant tree), action should be taken to reduce potential consequences (take down the tree).

Excuse me, I’ve got some calls to make.  How about you?

June 19, 2012

Consequence Management for Critical Infrastructure Using an Environmental Threat Model

Filed under: Infrastructure Protection — by Christopher Bellavita on June 19, 2012

Today’s guest writer is Steve Kral, the Homeland Security Government Relations Officer for the Washington Metropolitan Area Transit Authority (WMATA)

The usual caveats apply: Steve’s opinions are his own. Please do not assume they reflect the views of WMATA or any other agency.

————————

An enduring problem facing the Department of Homeland Security (DHS) is the lack of a universally accepted and transparent scientific model for determining priorities among critical infrastructure vulnerabilities. DHS may want to review history and examine the Environmental Protection Agency's (EPA) prioritization model used to rank the relative threat of actual and potential releases of hazardous substances from a site. EPA's evaluation criteria, based upon relative risk or danger to public health, welfare, or the environment, may afford insight into developing an acceptable critical infrastructure prioritization model. Such a model may also be beneficial in responding to Congress' concerns about homeland security expenditures.

Prior to the 1970s, the lack of environmental oversight and enforcement regulations led to the creation of thousands of hazardous waste sites throughout the United States. On December 11, 1980, the Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA), commonly known as Superfund, was enacted by Congress in response to the numerous threats posed by hazardous waste sites in the United States, typified by Love Canal in New York and the Valley of the Drums in Kentucky.

Section 105(8)(A) of CERCLA required EPA to establish criteria for determining priorities among releases or threatened releases (of hazardous substances) throughout the United States for the purpose of taking remedial action and, to the extent practical, taking into account the potential urgency of such action.

To meet this requirement and help set priorities, EPA adopted the Hazard Ranking System (HRS), a scoring system used to assess the relative threat associated with the actual and potential releases of hazardous substances from a site. The HRS was designed to be applied uniformly to each site, enabling sites to be evaluated relative to each other with respect to actual or potential hazards. As EPA explained when it adopted the system, “the HRS is a means for applying uniform technical judgment regarding the potential hazards presented by a facility relative to other facilities.”

Could DHS take advantage of the successes of the HRS and build an acceptable model for evaluating critical infrastructure? I think the answer is yes.

The likelihood of a hazardous substance being released, the quantity of the substance, and the population affected are all examples of known scientific factors the HRS scoring system uses to calculate the threat. Unlike hazardous substances, the threat of a terrorist attack is extremely dynamic and ever-changing. DHS may be focused on liquids on planes today, and tomorrow it could be a vehicle-borne improvised explosive device somewhere. Factors for calculating threat are often unknown and based on intelligence, not science. Where EPA assesses the "threat" associated with the hazardous substances at a site based on scientific factors, homeland security professionals may want to evaluate the consequences a critical infrastructure facility poses to an urban area or state if the facility were to be lost or compromised.
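As an illustration only, the sketch below shows what a uniform, consequence-oriented facility score could look like. The factors, weights, normalization thresholds, and example facilities are all invented; this is not EPA's actual HRS formula or any DHS methodology, just the HRS idea of scoring every site the same way so sites can be compared to one another.

```python
# Purely illustrative consequence score for a critical infrastructure facility.
# Factors, weights, and normalization constants are hypothetical.

def consequence_score(population_affected, economic_loss_usd, restoration_days,
                      dependent_lifeline_systems):
    """Return a 0-100 score; higher means losing the facility hurts more."""
    pop = min(population_affected / 1_000_000, 1.0)       # normalize to 1M people
    econ = min(economic_loss_usd / 10_000_000_000, 1.0)   # normalize to $10B
    time = min(restoration_days / 365, 1.0)               # normalize to one year
    deps = min(dependent_lifeline_systems / 10, 1.0)      # water, power, transit...
    return round(100 * (0.4 * pop + 0.3 * econ + 0.2 * time + 0.1 * deps), 1)

# Rank a handful of hypothetical facilities relative to each other.
facilities = {
    "Transit control center": consequence_score(3_500_000, 2_000_000_000, 90, 4),
    "Regional water treatment plant": consequence_score(1_200_000, 500_000_000, 180, 6),
    "Suburban substation": consequence_score(150_000, 40_000_000, 14, 2),
}
for name, score in sorted(facilities.items(), key=lambda kv: -kv[1]):
    print(f"{score:5.1f}  {name}")
```

The specific numbers matter far less than the property the HRS demonstrated: every site is scored by the same transparent rule, so the resulting ranking can be published, debated, and defended.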

The success of the HRS lies not only with the scientific analysis used to determine the known or potential threat a site poses to human health and the environment, but also with the transparency associated with the evaluation of a site. All sites evaluated using the HRS are listed within the Federal Register and open for public comment, allowing the general public access to all data used to score the site.

DHS may want to consider a similar transparent process with the evaluation of critical infrastructure facilities.

David J. Kaufman and Robert Bach discussed the concept of transparency in their paper, A Social Infrastructure for Hometown Security: Advancing the Homeland Security Paradigm. They reflect on how the United Kingdom conducts and shares a risk assessment annually, combining national, regional, and local results. It publishes a National Risk Register designed to encourage public debate on security and to help organizations, individuals, families, and communities that want to do so to prepare for emergencies.

A similar transparent process for assessing critical infrastructure facilities may allow DHS to gain the public’s confidence with the evaluation and prioritization of sites within the United States. The public would become more aware of the critical infrastructure within their communities and may be more willing to contact law enforcement if they see anything suspicious.

Some people might argue that DHS is currently performing such evaluations. Unfortunately, DHS asks individuals at the state level to prioritize their own critical infrastructure, using broad categories. The logic behind those categories has never been fully explained. DHS may want to establish oversight and enforcement regulations based on a consequence management formula focused on protecting the citizens of the United States rather than trying to calculate the risk of a terrorist attack occurring.

An article entitled Changing Homeland Security: In 2010, Was Homeland Security Useful? asserts, "If homeland security is to become a useful academic and professional discipline, it has to demonstrate how looking at enduring problems through a homeland security framework adds significant value not provided by other disciplines."

Developing a scientifically acceptable model for prioritizing critical infrastructure by evaluating the consequences associated with such sites may help homeland security become more useful as an academic and professional discipline. A sound model could also be used within the urban planning discipline in developing more resilient communities, or by the insurance industry in determining insurance rates for critical infrastructure facilities.

February 8, 2012

The fragility of the electrical grid

Filed under: General Homeland Security,Infrastructure Protection — by Arnold Bogis on February 8, 2012

This past weekend, the Boston Globe ran a great article by Neil Swidey that begins with a narrative about the late fall snowstorm that caused massive power outages in the Northeast and pivots into an investigative piece examining the fragility of the entire electrical grid.

First, however, comes the personalization:

While most of that northwest-of-Boston community – like much of the region – remained in the dark, the Sargent Memorial Library had been welcoming the biggest crowds of Strapko’s decade-long tenure. That’s mainly because restoring power to key town facilities like the fire and police departments had also turned it back on at the nearby library.

When they returned to the library to retrieve her car, it was about 10 o’clock. Strapko was astonished to see that there were still half a dozen cars, sport utility vehicles and Priuses alike, idling in the parking lot, the drivers’ faces lit by the bluish glow of their laptop and smartphone screens. She later learned that all week long, people had been lingering in the parking lot into the early hours of the morning, unwilling to disconnect from the library’s 24-hour Wi-Fi lifeline. A dozen years into the new century, this is how hopelessly reliant we’ve all become on power.

Swidey builds the case that major disruptions in electricity delivery have been few and far between in the recent American experience:

Yet here in this country, we’ve come to expect that whenever we flip the switch or plug in to the outlet, the juice will be there. The power grid has been so reliable over the years that most of us can count on one hand the number of times in our lives when we’ve been without electricity for any significant stretch.

However, the last large non-storm-related blackout, in 2003, can be seen as a harbinger of future fragility.

If our society is more reliant on power than at any time in history – without it, we’ve got no commerce, no communications, no clean water – and if power becomes less reliable in the future, the big question is: Will we be able to hack it?

He divides particular threats to the grid into three buckets:

Bucket No. 1 involves what the insurance-policy fine print calls “Acts of God.’’ Here we’re talking about all those “storms of the century’’ that seem to be arriving with unsettling frequency.

And as the Halloween storm showed, even people in neighborhoods with underground power lines won’t necessarily escape outages, because those lines are fed somewhere along the route by aboveground equipment. What’s more, Mother Nature can hit us with a lot more than just high winds and heavy snow. Consider the solar storm.

Let’s call Bucket No. 2 “Acts of Terrorists.’’ Among these, there’s the old-fashioned physical attack on the bulk power system, either at its source of generation or somewhere along its transmission route. There’s the newfangled cyber attack on the computers controlling our interconnected grid. And then there’s the otherworldly-sounding attack by an electromagnetic pulse, or EMP, weapon.

Yeah, he went there: EMP.  Luckily, he expends some ink painting some of the stalwarts of that threat genre as a little extreme and mentions the arguments that paint this as an unlikely event. Since a solar storm impacting the Earth is a "when, not if" event, taking the steps to harden the grid that many EMP enthusiasts suggest would seem prudent to me. Trying to build a fanciful missile defense system that would stop any conceivable attack… not so much. Getting back to man-made threats:

But the chairman of the task force, Granger Morgan, says that what continues to worry him the most is the havoc that bad guys could cause with relatively little technological savvy. “If I’m a terrorist, I can shut down the power system in a lot simpler ways than using a valuable nuclear device,’’ says Morgan, an engineering professor at Carnegie Mellon University and a noted authority on the grid.

Natural and intentional man-made threats to the electrical grid are fairly well known in homeland security circles, but the article brings up several structural facets of the system that at least I hadn’t considered before:

Finally, Bucket No. 3 is the “Ailing Grid’’ itself. In many places, the infrastructure is as old and stooped as a pensioner. As it is upgraded and its capacity is expanded, our rapacious need for more electrical power races to max it out once again.

As our electrical thirst grows, the choices we make today about how to quench it will have lasting consequences.  Decisions about fuel type and infrastructure capacity are not simply a matter of balancing environmental and national security concerns; they will also have unforeseen impacts.

A decade ago, 22 percent of New England’s electrical power came from oil-fired plants and 15 percent came from natural gas-fired ones. Today, about half of our electrical power comes from natural gas, while a fraction of 1 percent comes from oil. And our reliance on natural gas promises to grow even more significantly in coming years.

Second, the natural gas pipelines feeding this region were built to serve our heating – not our electrical – needs. Most of the year, there’s sufficient room in the pipeline to supply both. The danger zone, however, comes when the temperature plummets. During stretches of brutal cold, the pipeline capacity can be quickly used up by the natural gas needed to heat our homes and businesses. And unlike oil and coal, natural gas supplies cannot be easily stored in large quantities.

Yet, some important limitations tend to get lost when people rhapsodize about renewables. Although wind and solar power represent a wonderfully clean source of electricity, in energy parlance, they are not particularly “dispatchable.’’ If the weather doesn’t cooperate, you can’t meet increased demand by simply turning up the power spigot and having renewable energy flow out the tap.

As the price of natural gas continues to hold steady or decrease due to increasing supplies, new oil-fired plants and, to a lesser extent, coal-fired plants (coal releases more pollutants into the air than natural gas) are not likely to get built.  Older plants will be decommissioned, renewables may not be dependable, and the nuclear renaissance may run aground on the shore of economics (and a bit of safety post-Fukushima, but driven to a greater degree by the comparative cost of building a new natural gas plant versus a nuclear plant). Infrastructure concerns will vary with conditions across the regions of this nation, but so will the long-term impacts of deciding what sort of system to build.

With current political pressure to cut government spending, the issue of whether it will get built is just as important as what to build. If that isn’t daunting enough, the provision of electricity is a complicated public-private partnership. The issues raised by Phil’s recent posts on supply chain issues apply to the grid as well.

The answer?

Some will choose to get off the grid, or at least decrease their reliance:

FOR A MORE ENCOURAGING GLIMPSE into the future, head up to East Dummerston in southeastern Vermont. There, on 27 acres, Juliet Cuming and David Shaw live with their two children in a beautiful 2,400-square-foot house and run a photo-archive business in a building next door. Here’s a partial list of what you’ll find inside: flat-screen plasma TV, three laptop and two desktop computers, an Xbox, scanner, washer and dryer, dishwasher, toaster, and vacuum. Here’s what you won’t find: a bill from the electric company.

They have lived fully off the grid for 16 years now, producing all the energy they consume, relying largely on a wind turbine and a bunch of solar panels. They estimate that it cost them an extra $20,000 to have their home built so it could be a self-sufficient island of energy, and figure they have already recouped that investment.

The entire article is well worth reading:

http://www.boston.com/lifestyle/articles/2012/02/05/what_if_the_lights_go_out/?page=full

Postscript: For good resilience measure, the article includes the standard list of items one should have on hand for blackouts and other types of emergencies.  Strike while the iron is hot, or at least while the reader is concerned.

Post-postscript: For those interested in getting into the weeds of electricity policy, the “Harvard Electricity Policy Group” has been examining these issues since 1993 and allows public access to their extensive research library.

Supply chain testimony

Yesterday several DHS officials and others were on the Hill giving testimony related to the new National Strategy for Global Supply Chain Security.  Please see: http://homeland.house.gov/hearing/subcommittee-hearing-balancing-maritime-security-and-trade-facilitation-protecting-our-ports

Three quick impressions:

1. Constructive example of “stovepipes” being brought together around a supposedly stovepipe-busting strategy.

2. The tension between security and resilience is real, persistent, and difficult to effectively engage.   Security is tough enough.  Resilience requires even more creativity.

3. It is striking to have a hearing on this topic without hearing directly from the private sector as well.

This is an early step in rolling out the new strategy.  Much more to come.

February 3, 2012

Risk is often in the eye of the beholder

Filed under: Catastrophes,Infrastructure Protection,Port and Maritime Security,Strategy — by Philip J. Palin on February 3, 2012

Although we can say with near certainty that new outbreaks of disease and catastrophic natural disasters will occur during the next several years, we cannot predict their timing, locations, causes, or severity.  We assess the international community needs to improve surveillance, early warning, and response capabilities for these events, and, by doing so, will enhance its ability to respond to manmade disasters.

James R. Clapper
Director National Intelligence
Testimony, January 31, 2012

The intelligence chief’s comments regarding the Iranian threat were considerably more circumspect, “We assess Iran is keeping open the option to develop nuclear weapons, in part by developing various nuclear capabilities that better position it to produce such weapons, should it choose to do so.  We do not know, however, if Iran will eventually decide to build nuclear weapons.”

Yet Senators, the media, and perhaps General Clapper himself gave much more attention to the possible Iranian threat than the probable threat of natural catastrophe and pandemic.  The front page headline in the Washington Post was “U.S. spy agencies see new Iran risk.”

The same day the DNI was testifying on Capitol Hill, Mike Dunaway was making a presentation to a FEMA-hosted audience in Harrisburg, Pennsylvania.   In late 2008 and early 2009 a reasonable sample of  respondents answered a series of questions regarding their perceptions of relative threats to continuity of private sector operations, profitability or survival.

A couple of the survey findings stood out for me: Among 19 threats identified, the lowest perceived threat was “geologic disaster (earthquake, mudslide, volcanic action)”.  The survey was conducted prior to the earthquake-and-tsunami in Japan and none of the respondents were in California.   Perceptions will vary by time and place.

Also low on the list of threats was “interruption in supply or delivery chain.”   Several firms reeling from the loss of Japanese and Thai suppliers might answer differently.  But I don’t doubt the survey findings reflect general attitudes.  (Dr. Dunaway’s dissertation is chock-full of interesting findings.)

As addressed in two posts last Thursday and Friday, the President has signed out a National Strategy for Global Supply Chain Security.  I appreciate Alan Wolfe and Bill Cumming commenting here on the posts.  Most friends, colleagues, and perhaps an adversary or two, decided to communicate more privately.  Below is a sample of the comments received.

“Just words on paper, very unlikely to really influence supply chain policy.”

“Despite a bow to resilience, this is a security strategy.”

“Lots of cargo and logistics talk, not much recognition of how the supply chain is really something new and different.”

“Though better than the earlier draft, it still seems to be mostly focused on security and less on resilience. However, I know from direct experience it is not easy to write about resiliency, and perhaps being secure is one of the first parts of being resilient.”

“Stalking horse for new (costly) regulations.”

“While it is a national strategy, it feels quite federal/global to me. I’m not sure if many state and/or local folks could conceive how they could contribute to helping realize the goals outlined. It is my belief that a resilient supply chain, like many things, starts and ends in localities around the world.”

“C-suites will ignore and deploy their minions to be sure “efficiency” always trumps “resilience,” no matter how inefficient it may be to have a catastrophic collapse of supply chains.”

“The private sector is paramount. It seems to me that much, though certainly not all, of the role of government will be to encourage, support, oversee and in some instances force the private sector to do things. Left to themselves, I think other forces will drive the private sector to not do some of what has to be done to reduce risk and enhance resiliency.”

“To give this the status of a presidential strategy is sort of amazing. It’s made me stop to think. But I feel a bit like a Catholic must feel when it’s announced the Pope has convened a major meeting on an aspect of doctrine I had really never thought of before.”

“What am I supposed to do? I don’t know enough about supply chains to even start a conversation with private sector peers. Besides which private sector peers? These are not the security and EM guys I usually work with.”

“(The strategy is) better than I would have bet. But while behind closed doors the operators agree it is a real issue, how do you convince CEOs, CFOs, and Boards of Directors? Japan didn’t persuade. Thailand didn’t persuade. White House stationery is easy to ignore. The only thing these masters-of-the-universe understand is a swift kick in you know where… and by then it will be too late.”

Perceptions will vary by time and place.  But there is a strong tendency to give more attention to external threats than internal vulnerabilities.  There is more concern regarding possible evil intent elsewhere than accident, neglect, and denial close at hand. We see the splinter in the eye of the other much more quickly than we recognize the log in our own eye.

December 30, 2011

Fukushima: soteigai or zatzusei

On Monday the independent panel appointed to investigate the Fukushima nuclear accident released a 507-page interim report.  Most of the document focuses on specific operational decisions and tactical choices.

Several specific failures are highlighted: insufficient planning, poor regulation and oversight, inadequate training and exercising, a breakdown in communications within the government and between the government and the operator of the nuclear power plant.

The previous paragraph could be quickly edited to apply to nearly every serious industrial accident: Bhopal, TMI, Deepwater Horizon, various large-scale blackouts, and others.   The same failures are referenced in most after-action reports for events large and small.

Also typical has been most of the media coverage focusing on personal failures by political, regulatory and corporate leaders.

But toward the end of the report — and the 22 page English-language executive summary — are several atypical bits of analysis worth much more attention than given so far.

It is not easy to admit an absolute safety never exists and to learn to live with risks.  But it is necessary to make effort toward realizing a society where risk information is shared and people are allowed to make reasonable choices.

A quarter century ago I made some extra Yen editing Japanese-to-English translations.  This time I will mostly leave the first draft as it is. There is a kind of clarity in the slightly awkward but more literal rendering.

Even for an accident of low probabilities so long as extremely large scale damages are anticipated once it occurs… due consideration should be given to the risks involved and precautionary measures should be taken.

It was a major shortcoming for the safety of both nuclear power plants and surrounding communities that a nuclear accident had not been assumed to occur as a complex disaster.  Disaster prevention programs should be formulated by assuming complex disasters, which will be the major point in reviewing nuclear power plant safety for the future.

It cannot be denied that the viewpoint of looking at a whole picture of an accident was not adequately reflected in nuclear disaster prevention programs in the past.

The nuclear disaster prevention program had serious shortfalls. It cannot be excused that nuclear accidents could not be managed because of an extraordinary situation that… exceeded the assumption.

The Investigation Committee is convinced of the need of paradigm shift in the basic principles of disaster prevention programs for such a huge system, which may result in serious damage once it has an accident.

Whatever to plan, design and execute, nothing can be done without setting assumptions. At the same time, however, it must be recognized that things beyond assumptions may take place. The accidents this time present us crucial lessons on how we should be prepared for such incidents beyond assumptions.

Low probability, high consequence events deserve our sustained attention.

Reasonable assumptions will be exceeded.

The chairman of the investigation panel, Yotaro Hatamura, has been especially critical of the tendency to blame the crisis on soteigai. This is often translated as “unforeseeable events,” but is probably closer to “unimaginable events.”  (Echoes of a “failure of imagination” in the 9/11 Commission report.)

Hatamura is an engineer.  His best known work is probably Learning from Design Failures in which he examines more than 100 cases to “uncover the root cause, reveal the scenario that led to the unwanted event, describe what happened so readers can clearly repeat the steps in their mind, and propose ways to avoid those mistakes in the future.”   It is a very detailed, case-by-case, engineering oriented approach to disciplined thinking.  He is a solution-oriented guy.

But Hatamura  has also become an advocate for clearly distinguishing between complexity and non-complexity and what can — and, even more important, cannot — be done to manage complexity.  With a little effort we can foresee complex events.  We have a much more difficult time imagining how our strategy for the complex must differ from our strategy for the merely complicated or novel or known.

The Japanese for complexity (see above) includes kanji a classically minded literalist might read as “a surprising recurrence of miscellaneous elephants.”  If you can imagine how you would manage that, you are on your way to being able to manage the cascade of a complex event.

The final report is expected in June.

November 22, 2011

Vandalism is stupid and silly, like “connecting interfaces to your SCADA machinery to the Internet.”

Filed under: Cybersecurity,Infrastructure Protection — by Christopher Bellavita on November 22, 2011

Water System Hack – The System Is Broken

Hackers ‘hit’ US water treatment systems

Homeland Security investigates possible terrorism in Springfield

Water system may be cyber attack victim

Has Stuxnet come to our critical infrastructure shores?  Is it Duqu?  Could it be something even worse?

“DHS and the FBI are gathering facts surrounding the report of a water pump failure in Springfield, Illinois.  At this time there is no credible corroborated data that indicates a risk to critical infrastructure entities or a threat to public safety,” DHS spokesman Peter Boogaard explains.

“I dislike, immensely, how the DHS tend to downplay how absolutely FUCKED the state of national infrastructure is,” responds someone using the handle “pr0f” in a pastebin post that includes, according to pr0f, images of another water system that was hacked.

“I’m not going to expose the details of the box,” pr0f promises. “No damage was done to any of the machinery; I don’t really like mindless vandalism. It’s stupid and silly. On the other hand, so is connecting interfaces to your SCADA machinery to the Internet. I wouldn’t even call this a hack, either, just to say. This required almost no skill and could be reproduced by a two year old with a basic knowledge of Simatic.”

————————–

Nick Catrantzos, who has written for Homeland Security Watch in the past, is an adjunct professor of Homeland Security and Emergency Management.  More relevant to today’s post, Nick is the former security director for a regional water utility.  Here are his thoughts on the most recent cyber event.

Spotting the Incidental Cyber Saboteur

You need not be evil to be wrong, and the true Achilles’ Heel of recent news about cyber attacks on water infrastructure in the Chicago area (details at http://www.cnn.com/2011/11/18/us/cyber-attack-investigation/index.html?iref=allsearch) is not foreign hackers of SCADA, the supervisory control and data acquisition system that makes it possible to turn a valve by remote control. Hackers have been a known external threat since the personal computer became widespread. Thus, makers of computer- and network-dependent tools like SCADA systems have to offer some protections against hackers just to make their systems marketable.

Why, then, is no one consulting anyone other than the self-avowed cyber security experts who are now issuing dire warnings about offshore SCADA hackers who may or may not be Russians? (The may-not possibility arises when these experts point out that clever hackers have the ability to misrepresent the origin of their attacks.) The same hand-wringing experts – or their fellow travelers – belong to the camp that opens the door to this vulnerability in the first place. They are not evil, just wrong.

Remote Access as Double-Edged Sword

Consider: Even the technologically challenged security professional sees the vulnerability in enabling remote access to critical systems like water infrastructure. How do purveyors of such systems see remote access when marketing to fellow cyber aficionados? It is a selling feature, of course. Why, with remote access, the technician fielding a panic troubleshooting call at midnight can diagnose and solve the problem in pajamas instead of in the field. And the field, when it comes to water infrastructure, often turns out to mean distant sites, bad roads, poor lighting, and unattractive traveling conditions. Solving the problem from home is a win-win for all concerned, since it saves downtime, isn’t it? Not if this debate includes security professionals charged with looking at the bigger picture of enterprise-wide vulnerabilities.

What makes it possible for these infrastructure attacks to abuse SCADA? Remote web access adopted in the name of expediency. What is the Achilles’ Heel? Naïve or myopic cyber professionals whose overattention to expediency permits convenient remote access for their technical support colleagues, with insufficient attention to the exposure this condition creates.

Discovering What Some Won’t Admit

How to zero in on the problem? The way not to do it is to rely exclusively on the pronouncements of SCADA vendors and their like-minded counterparts in the organization who bought into web-based remote access in the first place. There is a good chance at least some of these people overlooked sharing details of remote access vulnerabilities when presenting the system to upper management and traditional security practitioners.

No, the short path to excellence in uncovering self-introduced remote access exposures is to check logs of trouble calls against field records of physical access to work sites. The more serious cyber professionals know to avoid web-based SCADA access from any home and, instead, limit access to SCADA terminals that reside behind the secured perimeter of the institution’s work facilities. Maybe a SCADA technician fielding a trouble call won’t have to drive three hours to diagnose the problem at a remote field site, but he may still have to drive 20 minutes to get to a locked and alarmed office that houses a protected SCADA terminal. At least this is the ideal and advertised state of affairs. But even 20 minutes may, in time, seem too much of an imposition, so the SCADA tech quietly arranges to beta test remote access from — you guessed it — the convenience of his or her own residence. Unofficially, without a lot of fanfare. So much so that even the boss may not realize this is happening, hence the futility of relying on the cyber function to verify its own status regarding this vulnerability. There is another way to check.

Uncovering the Rest of the Story

If expediency has come to trump security, an examination of audit trails will soon show that technician troubleshooting calls at midnight aren’t matching up to midnight access to facilities housing SCADA terminals. Maybe operators in the field are too immersed in the problem to ask or even care how a SCADA tech is responding to a trouble call. They just want help. Maybe the tech is shrewd enough to avoid volunteering details, reasoning that speed of problem resolution is more important than revealing that this is being done from home via means subject to compromise and exposure to hackers.

However, audit trails won’t lie. Whether it is via manual logs, automated access records, video surveillance archives, or a guard’s register used for having all employees sign in after normal business hours, the discrepancy will surface under scrutiny. The on-call tech who was supposed to go to an employer site to troubleshoot the problem on a protected SCADA terminal will have shown no record of having entered any employer business site at midnight. So how did he or she handle the problem? Remotely. From home. In pajamas. Expediently. And, in the process, exposing the system to exploitable vulnerability.
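In code, the cross-check described above is little more than a join between two timestamped logs. Here is a minimal sketch; the log formats, field names, and entries are hypothetical, and a real utility would pull these records from its trouble-ticket system and badge-access database:

```python
# Minimal sketch of the audit-trail cross-check described above:
# for each after-hours SCADA trouble call, look for a matching physical
# entry to a facility that houses a protected SCADA terminal.
# Log formats, names, and times below are invented for illustration.

from datetime import datetime, timedelta

trouble_calls = [  # (technician, time the trouble call was worked)
    ("j.smith", datetime(2011, 11, 8, 0, 12)),
    ("r.lopez", datetime(2011, 11, 9, 23, 55)),
]

facility_entries = [  # (employee, badge-in time at a site with a SCADA terminal)
    ("j.smith", datetime(2011, 11, 8, 0, 31)),
    # note: no badge-in anywhere near r.lopez's midnight call
]

WINDOW = timedelta(hours=1)  # how soon after the call we expect a badge-in

for tech, call_time in trouble_calls:
    on_site = any(
        emp == tech and call_time <= entry_time <= call_time + WINDOW
        for emp, entry_time in facility_entries
    )
    if not on_site:
        print(f"FLAG: {tech} worked a trouble call at {call_time:%Y-%m-%d %H:%M} "
              "with no matching facility access -- possible remote (home) SCADA access.")
```

The discrepancy the code flags is exactly the one Catrantzos describes: a midnight fix with no midnight badge-in.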

Caution on Experts Offering Homilies about Cyber Attack

The so-called expert who was quick to criticize government officials on this latest cyber attack claimed he was doing so out of concern that the Department of Homeland Security was deficient in sharing information with other water agencies that could be targeted. If he were truly as conversant with water security as he claimed, he would know that it is not DHS but EPA that exercises the role of lead federal agency for protection of the water infrastructure. He would also know that EPA supports Water ISAC, the Information Sharing and Analysis Center for the water sector, and that the Association of Metropolitan Water Agencies manages that function, which takes the lead in sharing this kind of threat information within the water community, while DHS and local fusion centers do their share of distributing such information as well.

Showing no sign of recognizing these particulars, how could this self-styled expert really know what information on this SCADA threat is or is not circulating within the affected community of interest? A skeptic might conclude that such considerations take a back seat, however, when dire warnings can generate free publicity.

IT vs. Ops

Some overzealous IT departments in utilities that use SCADA see SCADA as a means of supplying bandwidth on which to commingle business applications as well, thereby increasing the likely need for remote access by more employees and raising susceptibility to compromise at the same time.

If employees in Operations at water utilities don’t concern themselves much with security deficiencies in SCADA, it tends to be because they have their hands full avoiding one or two catastrophes a year when SCADA techs unthinkingly shut down the system for maintenance or cause some other disruption without telling Ops in advance. The techs forget that flow changes can result in catastrophic treatment or distribution problems that affect water quality. This often occurs after business hours or on weekends, when the techs operate on the assumption that it is the best time to tinker without users noticing or balking — true enough for the average business network, but not for a system that demands 24/7 attention to water treatment and distribution.

One sign that too many debacles have been surfacing serially is when Ops wrests the SCADA function away from IT. This does wonders for reducing those kinds of snafus.


August 24, 2011

Calling the Capitol

A seismograph near Middleton Place showed a sudden burst of activity just before 2 p.m.

More than a few people in the public safety and homeland security sectors are hoping yesterday afternoon’s shallow M5.8 earthquake shook some sense into politicians, bureaucrats, and Congressional staffers. The temblor, the largest recorded in the national capital region in more than a century, caused a large-scale disruption of cellular telephone service when it struck shortly before 2:00 PM EDT. Cellular operators attributed the failure to overloads rather than physical damage to system components. Landline services, including the copper-wire-based public switched telephone network, remained operational and under-utilized.

The growing dependence of Americans on cellular telephone services, especially the extent to which reliance on these devices has displaced older technologies, has raised concerns among regulators and the regulated alike. Phone companies are now having trouble keeping up with the increasing capabilities of the devices we crave. Despite our seemingly elastic appetite for each new generation of wireless technology, our willingness to pay for the infrastructure to support these nifty services has remained relatively constrained. Meanwhile, pressure on companies to improve profitability in an atmosphere of constrained revenues and stiff competition has limited infrastructure spending to such an extent that one wonders whether the price and performance curves will ever be reconciled, even if the economic recovery takes hold.

This harsh reality has fueled pressure from the public safety industry on regulators and legislators to designate and release a large chunk of radio-frequency spectrum known as D-Block for development of a national broadband public safety network. It didn’t take long for advocates of this move to capitalize on the quake to underscore their concerns about the status quo and renew calls for immediate action on the D-Block petition.

You might wonder why overloaded cellular networks are much of a concern to public safety agencies. After all, don’t they have their own radio frequencies already anyway? We’ve invested lots of federal, state, local and tribal government money in the decade since 9/11 improving interoperable communications capabilities. Hasn’t this paid off somehow?

Well, Virginia, thanks for asking. Yes, public safety does have a lot of spectrum and some pretty fancy equipment. This equipment and the slices of spectrum already allocated do a pretty good job of relaying voice communications and a small amount of data. But because of the limitations of these proprietary technologies and the institutional inertia of the agencies that own and operate them, police, fire-rescue, and EMS services rely pretty heavily on the same cellular services the rest of us do for high-speed, broadband data applications and services. And like the rest of us, they often use cellular telephones when they only need to relay a message to a single person. That means when we lose cellular service, they do too.

But wait a minute, don’t public safety officials have priority access to cellular telephone services? Clever girl, Virginia. Yes, they do. But that doesn’t help much when the number of priority calls alone is sufficient to swamp the system. Imagine, if you will, how many people in Washington, D.C. and along the eastern seaboard consider their need to communicate with someone right this second more important than anyone else’s. Besides, not every public safety agency has configured its equipment and paid the fees necessary to obtain this sort of priority access.
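A back-of-the-envelope Erlang B calculation shows why priority access buys little during a region-wide event. The cell-site capacity and traffic figures below are invented for illustration only; the point is that once priority-flagged traffic by itself exceeds a site’s capacity, priority callers simply block each other.

```python
# Illustrative only: Erlang B blocking probability for a single cell site.
# Channel counts and traffic loads are hypothetical.

def erlang_b(offered_load_erlangs, channels):
    """Probability a new call is blocked (Erlang B, standard recursive form)."""
    b = 1.0
    for k in range(1, channels + 1):
        b = (offered_load_erlangs * b) / (k + offered_load_erlangs * b)
    return b

channels = 60                 # voice channels on one hypothetical cell site
normal_load = 40              # erlangs of traffic on an ordinary afternoon
quake_priority_load = 120     # erlangs of priority-flagged calls alone after the quake

print(f"Normal day, all callers:         {erlang_b(normal_load, channels):.1%} blocked")
print(f"Post-quake, priority calls only: {erlang_b(quake_priority_load, channels):.1%} blocked")
```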

Cellular network operators say most services returned to normal within about 20 minutes of the earthquake. One suspects that the decision to release many (so-called) non-essential government workers early was predicated at least in part on a desire to alleviate further strain on the region’s already overburdened systems and services. At the same time, one has to wonder what this cost both in terms of lost productivity and public image.

By most accounts, the earthquake, despite its surprising intensity and duration, caused relatively little physical damage. But the fiscal damage of the decisions yet to come remains to be seen.

August 19, 2011

Urbanization and professionalization suppress resilience (!?)

A  firefighter, a  cop, and an emergency manager walk into a bar.  This is not a joke.  I was with the three of them.

One had red wine, another had a beer, the third ordered scotch.   I was drinking Dry Sack on the rocks with a twist.

Can you guess which one had which drink?  Can you guess which offered what to the conversation:

“The problem is everyone is in denial about the worst risks.”

“New Orleans after Katrina was simple compared to Sendai after the tsunami.  How about Memphis after New Madrid or LA after the big one?” You can know the real pros by whether or not they pronounce it Maaadrid, as in really crazy.

“How about DC, Pittsburgh, and Birmingham after New Madrid?  How about pipelines, rail bridges, interstates, and the Eastern Interconnect after New Madrid?”  Hows about every little town downstream from a dam?

“How about the whole economy for the next ten years after Long Beach is taken out? I don’t care if it’s tsunami, pandemic, or an IND.”

“How about the whole economy if some cyber-anarchists decide to really screw with credit cards and ATMs?”

“As long as they vaporize my mortgage too.”

The bar talk was not as grim as this suggests.  Extended conversations with this crew are like a public reading of Dante’s Inferno (no Paradiso) with a running commentary by the comedian Lewis Black.  You roar with laughter over a comment that ought not be documented here.   A slightly sick sense of humor is essential to survival in these professions.

“We’re the real problem,” one guy said wrapping his arms around the shoulders of those on either side.  “We’re too good.  Why worry when the A team’s got your back?”

“Just call 911 and the cavalry always comes.”

“Even under fire… hell, with radioactive brimstone falling from the sky.”

“Thing is, we’re really good at the everyday stuff and lots of the tough stuff.”

“Did you hear about the 911 call because the citizen thought her remote had been stolen?  Cops found it in a drawer. They responded!”

“That’s the problem, we are so #$!@ responsive we’ve trained the citizens to depend on us.  When the big #$!@ happens they just wait around.”

“Not everyone.”

“Practically EVERYONE!”

“There’s two big pile-ups:  real increasing dependence. Who grows their own food anymore?  Who even eats at home? And where does our food come from? Not anywhere close.  Second pile-up: The #$!@ complicated system works really, really well until it doesn’t work at all.  So there’s no obvious reason to pay much attention, until it’s too late.”

“So… what we’re really good at is hiding the problems?”

“Sure.  There’s a fire.  You put it out.  You get ’em temporary housing or they go to the in-laws.  I keep gawkers away.  Everything’s fine. No worries. But in Joplin or Tuscaloosa? Even those huge twisters were tiny compared to what we’ll get when the wrong fault shifts under 5 million or a wildfire overwhelms San Diego.  Hows about a CAT 5 and flood surge pounding Miami-Dade?”

“When they call 911 no one will answer, they won’t even get a #$!@ dial-tone!”

“It doesn’t take such a big hit.  Maybe catastrophe comes on little cat feet?  You read Ted Lewis’ new book?  The complex systems we depend on are so intricate  just one little complication and the consequences cascade.”

“Sort of like the 2003 blackout caused by tree branches in Ohio?”

“But the cause wasn’t tree branches, it’s the way WE build and manage systems. Tree branches are a preexisting condition.  Our choices create the vulnerabilities.”

“You know, when I was a little kid,” (the guy to his right mimicked the Staten Island accent) “we had a farm right down the road.  It’s a landfill now.  The big farms in Jersey, they’re all McMansions.  Mom and pop get their broccoli and peas from California just like all of us.”

“You know what, though? The beer’s a lot better than back then.  Hey waitress, another round here.”

August 10, 2010

End dependency on fossil fuels by driving on solar panels

Filed under: Infrastructure Protection — by Christopher Bellavita on August 10, 2010

In February, I wrote about a colleague’s idea in a post titled “How to create a resilient infrastructure in 20 years for 1 trillion dollars, create millions of jobs, transition to green transportation, and do all of this at no cost to government.” That post is here.

A friend (thanks, George) recently sent a video to me (below) that describes another creative infrastructure idea:

“cover all concrete and asphalt surfaces that are exposed to the sun with solar road panels. This will lead to the end of our dependency on fossil fuels of any kind.

“We’re aware that this won’t happen overnight. We’ll need to start off small: driveways, bike paths, patios, sidewalks, parking lots, playgrounds, etc. This is where we’ll learn our lessons and perfect our system. Once the lessons have been learned and the bugs have all been resolved, we’ll plan to move out onto public roads.”

(You can read more details about the Solar Roadways project at this link: http://solarroadways.com/vision.shtml)

I showed the video illustrating the solar roads project to some engineering friends.  Here’s part of the resulting conversation:

Dr. R — That’s totally cool. I’d need to be convinced that you could manufacture this stuff as cheaply as asphalt and more importantly, that the total cost of ownership is lower. But how cool would it be to have this running up to your house? You’d get rid of all the lines that are there now and run it all thru this.

Dr. T — This is orders of magnitude better than [the idea posted in February]!  But the bureaucracy and red tape cutting to do this is horrendous.

Dr. T — Question: If you charge power and telecom companies to use it, you could not only pay for it but make a return on investment.  But does it work? Driving a million semis over circuits every week is much different than a lab test.

Dr. R — Yea, durability is the key. I won’t be convinced until someone funds a real test case that we can carefully observe for a few years with heavy traffic. Lots of trucks!  Of course, you’ll have the occasional 15 year old hacker who finds a way to spell swear words in the LEDs but that would be cool too.

Dr. T — You can read your email while driving on it! Generally power engineers don’t believe in this idea because they understand the physics of long haul transmission and it isn’t friendly. But I think they [power engineers] have not considered an alternate architecture that incorporates storage. Flywheels, compressed air and batteries are not integrated into their models.

The glass highway project plus storage could change all that, but the grid would have to operate as a store-and-forward network rather than as a big electronic circuit. That is, we need about a decade of research that is orthogonal to current linear incremental thinking about the grid.
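To make Dr. T’s store-and-forward point a bit more concrete, here is a minimal sketch of a grid node that buffers local generation in storage and forwards only the surplus. Everything in it (the class, the capacities, the hourly profile) is a hypothetical illustration of the architecture being described, not anything taken from the Solar Roadways project or Dr. Lewis’ research.

```python
# Toy "store-and-forward" grid node: buffer local generation, forward only the surplus.
# All numbers and names are invented for illustration.

class StorageNode:
    """A grid node that buffers locally generated energy before forwarding it."""

    def __init__(self, capacity_kwh: float):
        self.capacity_kwh = capacity_kwh   # e.g. flywheel, compressed air, or battery
        self.stored_kwh = 0.0

    def absorb(self, generated_kwh: float) -> float:
        """Store what fits; return the surplus that must be forwarded immediately."""
        room = self.capacity_kwh - self.stored_kwh
        accepted = min(generated_kwh, room)
        self.stored_kwh += accepted
        return generated_kwh - accepted

    def dispatch(self, demand_kwh: float) -> float:
        """Release stored energy toward demand; return what was actually served."""
        served = min(demand_kwh, self.stored_kwh)
        self.stored_kwh -= served
        return served

# Hypothetical hourly profile: midday solar surplus, evening demand peak.
node = StorageNode(capacity_kwh=500.0)
solar = [0, 50, 200, 300, 250, 100, 0, 0]      # kWh generated each hour
demand = [40, 40, 60, 80, 120, 200, 220, 180]  # kWh demanded each hour

for hour, (gen, load) in enumerate(zip(solar, demand)):
    forwarded_now = node.absorb(gen)       # surplus sent onto the network
    served_locally = node.dispatch(load)   # demand met from local storage
    shortfall = load - served_locally      # must be imported from the wider grid
    print(f"hour {hour}: forwarded {forwarded_now:.0f} kWh, "
          f"shortfall {shortfall:.0f} kWh, stored {node.stored_kwh:.0f} kWh")
```

The point of the toy model is simply that with enough local storage, much of the energy never has to traverse long-haul transmission at all, which is the research question Dr. T says is orthogonal to current thinking about the grid.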

Here’s the 4:38 solar roadways video:

March 30, 2010

85% More From The Private Sector About Critical Infrastructure

Filed under: Infrastructure Protection — by Christopher Bellavita on March 30, 2010

I was reading a paper by my colleague Nick Catrantzos  yesterday when I came across this sentence:

“…infrastructure defense is assumed to fall primarily into the hands of the private sector, which operates 85% of critical infrastructure.”

I ranted a year ago about the 85% number in a post that appeared on this blog.

The Number simply won’t die. It lives beyond truth or lie. Its reality is independent of time and space.

So I wrote back to Nick summarizing what I believe is the problem with The Number.

Nick — who loves the English language as a gardener treasures orchids — once presented me with a knit picker.  So he is aware of my tendency to occasionally pole vault over mouse turds.

Nick also has spent time in the same Circus and has been known to pick a nit or two, so he responded with some evidence about the 85% number.  I pushed back.  He returned fire.  As did I.

Then he wrote something that shined a light on a bias I did not see I had.

A year ago, I wrote:

…the 85% figure has been used to justify a laissez-faire critical infrastructure strategy. Private sector “ownership and control” has been interpreted to mean government frequently has to ask politely before it tries to do anything to improve safety and security.

If the 85% figure is wrong — or at least unsupported by any empirical basis — maybe the policies derived from that belief are also wrong.

Basically, I thought the 85% number was used to justify the government not pushing the private sector hard enough when it comes to protecting critical infrastructure.

Nick — who is a security manager and former security consultant for public and private organizations — described how this “who owns what” issue looks from the private sector.

My dilemma, perhaps a distant cousin to your own, has been in encountering an obdurate, logic-proof insistence by cops, fire fighters, emergency managers, fusion center staff, and DHS minions to define my employer and all critical infrastructure stewards as private sector entities.

It does not matter how much we demonstrate that we are a public agency and a regional extension of government.  As far as these people are concerned, we are private, hence unworthy of sensitive information (even if we were the ones to originate it) and inherently suspect of being profit driven (no matter how many wasteful, feel-good programs we underwrite for some avowed public good).  Even being part of the same retirement system and driving vehicles with tax-exempt license plates — two surefire convincers everywhere else — have no impact in shaking the conviction that we are infrastructure stewards, hence private sector mercenaries.

My unproven suspicion is that much of what is at the bottom of this categorization is a sort of tribal urge to satisfy two unstated objectives:

1.  Limit the in-group to an established comfort zone and organizationally and traditionally familiar faces.

2.  Assure that the existing in-group gains and keeps primacy at the trough of grants and other funding destined for public sector actors who are new both to homeland security and critical infrastructure protection.

If there are points to this fugue that resonate with me as an infrastructure steward, they are these:

A.  Critical infrastructure is definitely in both public and private hands.  Given the types of infrastructure that exist, it is reasonable and credible to accept that they are mainly privately owned and operated.

B.  Whether that percentage figure of 85% is anything more than an approximation or an archly crafted statistic meant to advance an ulterior agenda is mildly interesting to an infrastructure steward. At the end of the day, the hand on the wrench or on the SCADA system comes from the same gene pool, skill set,  and population.

C.  Even a critical infrastructure operation that is entirely managed by a public agency is going to have some private sector involvement and exposure.  Construction comes to mind.  We are always building or modifying facilities and upgrading systems.   Contrary to popular belief, even the wealthiest of public agencies cannot hire everyone they meet.   Contractors and subcontractors are as ubiquitous as they are indispensable.

D.  The original point of emphasizing private ownership and operation, to the extent I absorbed one, seemed to be as a means of emphasizing that protecting critical infrastructure is a shared responsibility and one that would be imperiled by ignoring private sector stakeholders. That point still makes sense to me.

February 23, 2010

How to create a resilient infrastructure in 20 years for 1 trillion dollars, create millions of jobs, transition to green transportation, and do all of this at no cost to government.

Filed under: Budgets and Spending,Infrastructure Protection,Technology for HLS — by Christopher Bellavita on February 23, 2010

The title of this post is a bit big.  But nowhere near as huge as the idea behind it.

The basic concept is to build new underground electric power transmission lines, natural gas pipelines, and telecommunication, cable TV, and Internet communication lines on rights-of-way already established by America’s 40,000 mile Interstate Highway System. The Interstate Highway System reaches nearly every part of the nation, and states own the rights-of-way along these roads. It makes sense to leverage this asset.

The idea — called the National System of Resilient Infrastructure (or NSRI) — was developed by Ted G. Lewis, at the Naval Postgraduate School.  Here are the details of this $1,000,000,000,000 idea:

—————————————-

Proposed

Electric power, energy for transportation, and telecommunications capacity are three major economic drivers for the future economy of the USA.  But these sectors are in trouble, for a variety of reasons, including NIMBY (not in my back yard), lack of investment, and lack of vision.

To overcome these barriers, stimulate the economy, and develop a resilient infrastructure for the 21st century, the author proposes a “moon shot” scale effort to build a national system of resilient natural gas, electricity, and telecommunications infrastructure along the 40,000 miles of Interstate Highway.

This 20-year, $1 trillion project would be implemented by a public-private partnership structured much like a GSE (government-sponsored enterprise), and mainly funded by the private sector. Besides creating millions of jobs and enhancing our ability to transition to clean cars, trucks, and buses, the national system would be immediately self-sustaining through usage fees, and therefore profitable. It would not cost the government any money, and would have an immense impact on the economy.

Infrastructure Equals Prosperity

The Dwight D. Eisenhower National System of Interstate and Defense Highways, commonly called the Interstate Highway System (or simply the Interstate) is the largest highway system and largest public works project in the world. More importantly, it propelled the United States into a new era of prosperity. Today, virtually all goods and services are distributed via the Interstate, which is still expanding.

In the 1990s the 25-year-old Internet was commercialized, stimulating economic growth so much that it produced a bubble in 2000. Yet the federal government’s $200 million investment has already returned 100-fold, after less than 20 years of growth. The future of the global economy increasingly depends on the Internet.

It is clear that relatively modest investments in infrastructure reap exponentially large returns due to economic growth, job creation, and innovation. Since ancient Rome, no nation on earth has achieved or maintained greatness, security, and prosperity, without plentiful energy, robust communications, and transportation capacity.

The economy of the 21st century will run on electrical power and Internet packets. Without these, the USA will slip into fourth or fifth place among nations.

The Challenge

The United States faces an “infrastructure challenge” and an equally big opportunity, today. The challenge is to rejuvenate our failing basic infrastructures: water, power, telecommunications, and energy.

Progress in green energy generation is stalled because of inadequate transmission capacity. Telecommunications capacity must be greatly increased to accommodate global 3D virtual reality, multi-party conferencing, and high-performance research and development in medical, environmental, and technical industries. Think of the possibilities of telemedicine piped directly into your home, or corporate meetings conducted with 100,000 participants from around the globe.

Advances in material science, bioengineering, medicine, green energy, revolutionary telecommunications, and green transportation will present great opportunity over the next 20 years to those nations prepared to capitalize on them.

These are the economic drivers of the future, but they require advanced infrastructure.

We know how to turn sunlight into electrons, but lack the distribution channel to transport electrons produced in New Mexico to markets in New York. We know how to telecommute via our computers, but lack the bandwidth for two-way, 3D telecommunication between grandmother and granddaughter across the continent. We know how to automate transportation systems to reduce auto accidents and congestion, but our highways are “dumb.”  In the next 20 years, cars will run on electricity and natural gas, but we lack the infrastructure to refuel them while achieving energy independence.

Venture capital is pent up, waiting for government to stimulate a “green economy,” but we do not currently have the market distribution infrastructure to make it possible.

We need a National System of Resilient Infrastructure (NSRI) to take advantage of opportunities that will create jobs and keep America economically strong.

The Solution

The National System of Resilient Infrastructure plan (NSRI) is designed to address two roadblocks in the way of the next stage of economic growth: NIMBY, and the enormous cost of rebuilding the power and telecommunications infrastructure of the 21st century.

NIMBY (Not-In-My-Backyard) is currently blocking many projects because people do not want power lines in their backyards. In addition, infrastructure is enormously expensive and unattractive as an investment because it does not give companies a competitive advantage. For example, the current 1 trillion dollar electrical power grid is fragile due to a lack of transmission capacity. It is also based on 1940s technology. But who can afford to invest 1 trillion dollars to rebuild it?

NSRI proposes to avoid NIMBY by placing critical infrastructure underground: electric power transmission lines, natural gas pipelines, and telecommunication/CATV/Internet communication lines would be built underground on rights-of-way already established by the Interstate Highway System. States already own these rights-of-way, and the Interstate Highway System reaches nearly every part of the nation. It therefore makes sense to leverage this asset even further.

Energy, Power, and Communications infrastructure also requires storage nodes (for surge resilience), “service stations” (for distribution), and several network operation centers. The NSRI will be resilient because of its storage, security, and distributed architecture [decentralized assets].

Robust and redundant, able to transmit commodities such as Internet packets, electrons from solar farms, natural gas for future cars, trucks, and buses, and bountiful electrical power for future cyber businesses, the NSRI will be a quantum step forward for the nation and the economy.

NSRI is America’s 21st century “moon shot.”

How to Pay for It

The NSRI network would be constructed much like the Interstate Highway network, over a 20-30-year period at an estimated cost of $50 billion per year.

The author estimates it would cost $25 million/mile to build the necessary tunnels, pipes, wires, etc. The Interstate is 40,000 miles long, hence a total estimated cost of $1 trillion over 20 years.
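The arithmetic behind that estimate is easy to check. The sketch below uses only the per-mile cost and mileage stated in the proposal; the annual figure simply spreads the total evenly over 20 years.

```python
# Back-of-the-envelope check of the NSRI cost estimate stated above.
cost_per_mile = 25_000_000        # $25 million per mile (author's estimate)
interstate_miles = 40_000         # miles of Interstate Highway
build_years = 20

total_cost = cost_per_mile * interstate_miles
annual_cost = total_cost / build_years

print(f"Total cost:  ${total_cost:,.0f}")    # $1,000,000,000,000 (one trillion)
print(f"Annual cost: ${annual_cost:,.0f}")   # $50,000,000,000 per year
```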

This may seem high, but it represents 3.6% of the combined revenues of the natural gas, electrical power, telecommunications, gasoline, and broadcast industries (see Table I).

[Table I: Infrastructure sector revenues]

The Interstate Highway System is “pay-as-you-go”, with 90% of the funding coming from the Federal government, and the remaining 10% from the States. In its first year of construction, 1958, total costs were $37.6 billion. By 1991, the cost was $128 billion. But these billions contributed nothing to the national debt because they were paid for by a 40 cent per gallon tax on gasoline. Title II of the Highway Revenue Act of 1956 created the Highway Trust Fund to collect and dispense funding for the Interstate System.

Similarly, the NSRI would be financed through a Trust Fund established by Congress to create and operate NSRI. The NSRI financing plan needs to be worked out in detail, but two attractive options are: Option I: GSE (Government Sponsored Enterprise), and Option II: excise taxation, similar to the model used by the Highway Revenue Act of 1956.

Ultimately, the NSRI must be self-sustaining, through revenues generated by its use. A toll fee would be charged for use of the pipelines, communication lines, storage facilities, and service stations. These fees can be based on current regulated fees charged by telephone, utility, and pipeline companies – a familiar fee structure for these industries.

Option I: GSE: Ginnie Mae, Sallie Mae, Fannie Mae, and Freddie Mac are GSEs, i.e., government-backed enterprises listed on stock exchanges and therefore investor supported. The idea here is to raise the major portion of funding from investment banks, retirement funds, and personal investors through an IPO [initial public offering]. Like a GSE, the NSRI Trust Fund would be backed by the Federal government, and at some point reach a self-sustaining level through usage fees. This model, however, would probably require temporary taxation to raise the full $50 billion needed to initiate NSRI.

Option II: Excise Taxes: The Interstate Highway System was funded by a $0.40/gallon tax on gasoline (part Federal and part State). This tax can be rolled back as expenses are replaced with usage fees. Consider this: a 3.6% excise tax on revenues shown in Table I would raise $50 billion per year. Alternatively, an additional $0.40/gallon excise tax would raise $56 billion per year.
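The two excise-tax figures can be sanity-checked the same way. The revenue base and gasoline-consumption numbers below are back-calculated from the proposal's own percentages; they are illustrative assumptions, not figures taken from Table I.

```python
# Rough check of the Option II financing figures quoted above.
annual_target = 50e9              # $50 billion per year

# A 3.6% excise tax raising $50B/year implies this combined revenue base:
excise_rate = 0.036
implied_revenue_base = annual_target / excise_rate
print(f"Implied combined sector revenues: ${implied_revenue_base / 1e12:.2f} trillion/year")

# A $0.40/gallon tax raising $56B/year implies roughly this much gasoline sold:
per_gallon_tax = 0.40
raised = 56e9
implied_gallons = raised / per_gallon_tax
print(f"Implied gasoline consumption: {implied_gallons / 1e9:.0f} billion gallons/year")
```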

Both options are no-cost options for the Federal Government. Both options follow the Interstate Highway model whereby States own the infrastructure. Unlike the Interstate Highway model, however, the NSRI can easily achieve sustainability through an industry-accepted fee structure.

—————————————-

Dr. Lewis can be reached at tlewis[at]nps.edu

August 27, 2009

How To Improve Homeland Security: A Universal Risk Assessment for America’s Railroads

Filed under: Infrastructure Protection,Risk Assessment — by Christopher Bellavita on August 27, 2009

America’s trains carry more than 12 million passengers every weekday.  There have been no successful attacks on US rail systems in recent history.  Globally, however, railway systems remain an attractive target for terrorists.

Between 1998 and 2003, there were more than 180 attacks on trains and related rail targets around the world.  Terrorists have attacked railway systems most dramatically in Mumbai, Moscow, Madrid, and London, killing hundreds and injuring thousands.

What are America’s railroads doing to prevent a similar attack?

In January 2009, DHS reported “that more than 75% of the nation’s major rail and bus systems aren’t meeting [voluntary] Homeland Security guidelines” established in 2007.   The same report, according to a story written by Frank Thomas, found that “96% of airlines are complying with security requirements.” [my emphasis]

I don’t know enough about rail security to know what to make of the comparative findings. But I do know that guidelines are not the same as requirements. As a TSA leader phrased it, there is no penalty for failing to comply with guidelines.

Two years ago, The RAND Corporation released “Securing America’s Passenger-Rail System,”  offering a framework for railroad security planning.  As far as I know, it remains the most comprehensive treatment of the vulnerabilities and threats faced by American railroads.

To understand railroad system vulnerability, RAND “identified 11 potential target locations (e.g., system-operation and power infrastructure) within a notional rail system and eight potential attack modes (e.g., small explosives).  These targets and attack modes were combined to produce 88 different attack scenarios of concern.”
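The 88 scenarios are simply every pairing of a target location with an attack mode. A minimal illustration of that cross product (using placeholder labels rather than RAND's actual categories):

```python
# Illustration of how 11 target locations x 8 attack modes yields 88 scenarios.
# The labels are generic placeholders, not RAND's actual lists.
from itertools import product

targets = [f"target_location_{i}" for i in range(1, 12)]   # 11 potential target locations
attack_modes = [f"attack_mode_{j}" for j in range(1, 9)]   # 8 potential attack modes

scenarios = list(product(targets, attack_modes))
print(len(scenarios))   # 88 attack scenarios of concern
```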

Today’s guest blogger is a security executive with a major rail system.  Her idea about improving homeland security begins with a different kind of scenario.  She outlines a vulnerability created by the networked nature of America’s railroads, and suggests what can be done about it.

Here’s the scenario:

Assume that Rail Carrier A institutes specific security procedures based upon its own risk assessment. Rail Carrier B shares track with Carrier A but does not prioritize the trains entering A’s environment based upon A’s risk assessment.

Security measures on B’s trains are limited.  Because A and B trains operate simultaneously in the same environment it is possible that the security efforts of A are less effective because of B’s inadequate measures. Both Carriers are operating under individual risk assessments, but the inter-connectivity between the two carriers has not been adequately addressed.
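One crude way to formalize the scenario: on track shared by two carriers, the effective protection of a segment is bounded by the weaker carrier's measures. The scoring scale and values below are invented purely to illustrate that point; they are not drawn from any actual assessment.

```python
# Hypothetical shared-track model: a segment is only as secure as its weakest carrier.
carrier_security = {"A": 0.9, "B": 0.4}   # 0 = no measures, 1 = fully hardened

# Which carriers operate on each segment (all made up for illustration).
shared_segments = [("A", "B"), ("A", "A"), ("B", "B")]

for segment in shared_segments:
    effective = min(carrier_security[c] for c in segment)
    print(f"segment {segment}: effective security {effective:.1f}")
```

However the scores are derived, the min() captures the concern here: Carrier A's investment on the shared segment buys no more security than Carrier B provides.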

Now, what to do about this vulnerability:

1. What one sentence best describes your idea about how to improve homeland security?

The Department of Homeland Security should conduct a universal rail transportation vulnerability assessment to effectively address national risk.

2. Describe the idea in more depth.

The Department of Homeland Security (DHS) through the Transportation Security Administration (TSA) requires rail transportation entities, both passenger and freight, to conduct vulnerability/risk assessments.  The TSA does not identify one methodology for conducting these assessments.   In order to better assess the vulnerability of the nation’s rail and mass transit systems, the TSA, as directed by the DHS, should conduct a universal rail risk/vulnerability study with one defined methodology to accurately assess the entire inter-connected national rail system.

In many areas track is shared by freight, regional and other passenger rail systems.  Although each of these entities conducts risk and vulnerability studies, they are not shared among the carriers or effectively evaluated from an overall homeland security perspective.

A universal approach would better reveal high risk locations and could assist individual carriers in determining how to effectively deploy limited resources. The risk and vulnerabilities can then be prioritized on a broad scale and evaluated to maximize the effective use of federally and otherwise funded security projects.
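As a sketch of what that prioritization might look like in practice, a universal assessment could aggregate each carrier's risk estimate for every shared location and rank locations by combined exposure. The carriers, locations, and scores below are hypothetical.

```python
# Hypothetical aggregation: sum per-carrier risk estimates for each shared location,
# then rank locations so limited security funding goes where combined risk is highest.
risk_by_carrier = {
    "Carrier A": {"River Tunnel": 0.8, "Union Interlocking": 0.5, "Yard Lead": 0.2},
    "Carrier B": {"River Tunnel": 0.6, "Union Interlocking": 0.7},
}

combined = {}
for carrier, scores in risk_by_carrier.items():
    for location, score in scores.items():
        combined[location] = combined.get(location, 0.0) + score

for location, total in sorted(combined.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{location}: combined risk {total:.1f}")
```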

3. What problem does your idea address?

It is undeniable that rail systems, both freight and passenger, are key components of the United States’ critical infrastructure. It is also well known that the rail transportation sector is a preferred target for terrorists.  Independent risk assessments, which may not accurately reflect inter-connectivity, will not be effective in determining the actual vulnerability of our national rail system and, subsequently, in accurately deploying security resources.

4. If your idea were to become a reality who would benefit most and how?

The traveling public would be the primary beneficiary of a universal assessment.  A broad based evaluation of risk may increase security by placing the limited resources where they are most needed.

Individual rail companies have separate owners, budgets and priorities.  They add security measures and harden targets that are important to them as individual carriers.  This go-it-alone strategy may only result in pushing the terrorist to a less vulnerable target, instead of using a nationally defined risk to improve the security of the entire system.  Adding security improvements on a broader scale may deter a terrorist from attacking the transportation sector as a whole.

5. What are the initial steps needed to get the idea off the ground?

The DHS must take a more active role in the overall security of the rail system than it has to date and promulgate a federal regulation or directive.  Resources would be needed to define the risk methodology and to conduct this assessment in coordination with the rail carriers.

It is possible that there may be limited support for this new assessment from rail carriers because assessments have already been completed.  Consequently the value of what I am proposing may not be understood or accepted.  Funding to conduct the assessment is also a significant issue.

Individual next steps will include promoting the idea through the appropriate chain of command in the various Carrier groups, and obtaining permission to discuss the concept with an appropriate member of the TSA.

6. Describe the optimal outcome should your idea be selected and successfully implemented. How would you measure that outcome?

In the best case, all rail transportation would be universally assessed based upon the same methodology.  Security resources and funding awards would be deployed based upon these assessments.  Completing this universal assessment and resulting recommendations for a safer rail system could be a measure of success.

But the key to safer rail will not be a report, but the changes in rail security implemented because of the new assessment. The desired outcome will be to harden the entire rail system and make it a less attractive target for terrorists.

As in many cases, measuring the effectiveness of any security enhancement may not be possible.  But with a security approach derived from a universal rail sector risk assessment, we can achieve a new level of confidence in the security of America’s railroads.

May 29, 2009

Long-Awaited Cybersecurity Announcement and FEMA visit

Filed under: Cybersecurity,Infrastructure Protection,Preparedness and Response,State and Local HLS — by Jessica Herrera-Flanigan on May 29, 2009

At 10:55 this morning, President Obama will announce the long-awaited plans for dealing with cyber security in his White House.  A cyber czar, albeit at a level lower than desired (special assistant), will be supported by a new cyber directorate within the National Security Council.  That person will also report to the National Economic Council. Expect the announcement to be broad in scope and to discuss goals for dealing with the global threat of cyber security, as well as to address such issues as a public awareness campaign on the challenges of cyber security and the need for a strengthened technology workforce in the U.S.

The 60-day review (which ended approximately 30 days ago), led by Melissa Hathaway, is the fourth attempt in the last 12 or so years to address cyber security.  In late 1996, President Clinton created the President’s Commission on Critical Infrastructure Protection (PCCIP), which issued a report on its findings in 1997. That effort led to the 1998 Presidential Decision Directive 63 (PDD-63), the emergence of ISACs, and the creation of the National Infrastructure Protection Center (NIPC) at the FBI and the Critical Infrastructure Assurance Office (CIAO) at the Department of Commerce, among other organizations at various agencies.  Those two are worth noting as we continue, a decade later, to see a tension, as evidenced by the dual NEC and NSC reporting announcement expected today, between law enforcement/security and economic/commerce interests in cyber security.   Interestingly enough, the term “cyber czar” originated during that time – Dick Clarke in the White House.

In 2003, President Bush released the Clarke-led National Strategy to Secure Cyberspace, which provided recommendations for “government-industry” cooperation.   Soon thereafter Clarke left the government. The strategy laid a framework for how the federal government would try to address cyber issues and promoted public-private partnerships.  DHS’ leadership on the issue was established about this time with the merger of most of the major cyber functions (NIPC, CIAO, FedCIRC, etc.) into a new National Cyber Security Division. These efforts led to the creation of sector coordinating councils and the National Infrastructure Protection Plan (NIPP).   There was widespread criticism that the Director of the NCSD was buried too far down in DHS and that the nation needed a WH czar. Congress responded by creating an Assistant Secretary position at DHS.

Round three happened in 2008, when President Bush initiated the Comprehensive National Cyber Security Initiative.   The CNCI, officially established in January 2008 (though rumored as early as September 2007) by National Security Presidential Directive 54/Homeland Security Presidential Directive 23, was a multi-agency, multi-year plan laying out twelve steps to securing the federal government’s cyber networks.  DHS would have the lead (mostly) on civilian systems while DoD would take the lead on .mil systems.  The role of NSA and the DNI was questioned, though hard for most to pin down given the classified nature of the program. By this point, the White House had a Special Assistant to the President and Senior Director for Cybersecurity and Information Sharing Policy, Neill Sciarrone, and a multi-agency task force headed by Melissa Hathaway leading the CNCI efforts.  DHS, meanwhile, also created a Deputy Undersecretary for cyber at the National Protection and Programs Directorate – a role filled by Scott Charbo in the Bush Administration and by Phil Reitinger in the Obama Administration.   Silicon Valley guru Rod Beckstrom was brought in as the first Director of the National Cyber Security Center.  He left several months ago, claiming that the NSA and intelligence agencies were taking too much of a leading role in the cyber efforts.

That leads us to today’s announcement in a few hours.  Though the timeframe is condensed, there is much history behind the nation’s cyber security efforts. Today’s announcement will set a framework – even if broadly – for how we are going to tackle round four.  The real question is whether we can advance our efforts or whether we will be repeating this exercise in a few years.  Stay tuned for a more in-depth analysis of the cyber security announcement this afternoon.

Also worth noting – after the cyber announcement,  the President will attend a hurricane preparedness meeting at FEMA headquarters.  Hurricane season is only a weekend away so FEMA’s preparedness efforts and posture are critical.
