Homeland Security Watch

News and analysis of critical issues in homeland security

June 22, 2011

Are Clouds Getting in the Way?

Filed under: Budgets and Spending,Technology for HLS — by Mark Chubb on June 22, 2011

Judging solely from the tweets emanating from the Urban Area Security Conference this week, two topics were at the forefront of discussion in and between sessions: cuts in the number of metropolitan areas receiving funding (and indeed the nature and extent of homeland security funding cuts generally) and issues attending advanced technology. I find it hard to separate the two topics, especially in light of the fact that much of the discussion about technology at the conference seemed to focus on the central role vendors played in the conference proceedings.

In some circles (certainly not here) the term networking almost always involves sophisticated technology and considerable cost. You know one of these conversations is spinning out of control when terms like “cloud” no longer refer to the things that shield us from the sun and occasionally deposit rain on our heads.

Like real clouds, these terms and the discussions in which they get exchanged often obscure much more fundamental problems. My favorite example of this is the ongoing discussion about public safety communications interoperability, especially the push now on in Washington to get a sizable chunk of 700 MHz spectrum allocated to a nationwide public safety service, the centerpiece of which presumably will include secure broadband data.

Now it’s quite possible that I have already lost a few of you, because, as I said, these terms often have meanings far different from what you might expect. Let’s start with interoperability. I once thought this meant making it possible for police, fire, EMS, public works, and other agencies at all levels of government to exchange information about an incident to which they had all responded and to do so in whatever way was most appropriate. The key was sharing information.

An optimist would tell you I was at least partly right about that. But I am not that optimist, since I have yet to see any evidence that such a system exists in the wild.

Instead, interoperability has meant marrying up sometimes terribly outmoded or outdated technologies so that people from different agencies can get together and talk about an incident, provided they remember, when the time comes, to use the technology the way someone set it up. In most cases, the systems have become too complicated for the users to understand, and because they cost so much they rarely keep pace with the commercial off-the-shelf equipment people buy and use for their own personal communications.

How many of you have been to an incident where a frustrated officer has pulled out her iPhone and texted or called a colleague rather than using a radio? If you haven’t seen this, you have surely seen someone at an incident pull their smartphone out and snap a few pictures of whatever is happening.

These days you don’t have to look very hard or listen very closely to see and hear arguments about how D-Block spectrum will revolutionize public safety communications and make it easier than ever before to communicate in a crisis. While I have no doubt that devices and services designed for this new spectrum will have impressive features, I am much less certain they will improve communications.

My reason for skepticism comes back to the first problem receiving attention at the UASI conference: money. The people who have it and can afford to spend it will determine what the rest of us can buy later. Perhaps fortuitously, federal fingers are finding it harder to reach the wallet in Uncle Sam’s deep pockets just as this issue comes to a head.

Oddly enough, the dark clouds of fiscal austerity might be just what we need to whisk away the airy, bright and lofty clouds of “technological progress” impeding or at least obscuring our efforts to communicate. When money is scarce, people have to be a lot clearer about what they need now as opposed to what they want later. In addition, they have to be more open to alternatives and willing to adapt as opposed to simply adopting.

If you don’t believe me, consider this: The argument presented here emerges from my own first-hand experience and a quick reading of a handful of messages of fewer than 140 characters each, sent by friends using an essentially free technology accessible to anyone. That strikes me as pretty effective communication for a very limited investment of time, money and effort.

May 12, 2011

Advances in Public Messaging: Great Idea – Curious Reaction

Filed under: Catastrophes,Technology for HLS — by Arnold Bogis on May 12, 2011

A seemingly great technological step forward in disaster communication with the public is thrown under the political bus:

President Obama could soon have the ability to personally text message every single cell-phone-toting American — whether they like it or not — with “critical emergency alerts” under a new federal program that civil libertarians and political opponents say is a Big Brother-like intrusion posing a high risk of political abuse.

Federal officials in New York yesterday unveiled the three-tiered emergency alert system that would blast messages about Amber Alerts, impending weather disasters and terror threats to mobile devices.

Cell-phone users could opt out of most alerts if they want to, but not the texter-in-chief’s presidential pages.

“It’s like the state rep sending out mailings about how wonderful they are,” said Tad Kasperowicz of the Quincy Tea Party. “President Obama says, ‘Here come the high winds and the thunderstorms’ and it’s not really an emergency, but, hey, he gets his name out to every cell phone in the area. I can see that. Absolutely. There’s potential for abuse there.”

Sure, there is potential for abuse…if you believe the party currently holding office will hold it forever.  In other words, where is the political advantage for Obama in starting such a system if it can be politically exploited once a Republican eventually comes into control of it?

Perhaps this is just the natural, and extremely positive, evolution of a public advisory system that had been lagging behind technological developments. However, I may just be naive…


May 10, 2011

Controlling domestic UAVs wirelessly through a cellular network: major policy challenges

Filed under: Technology for HLS — by Christopher Bellavita on May 10, 2011

Five decades ago, C. P. Snow gave a lecture about a communication breakdown between two cultures — humanities and science.  Their inability to understand and value the way each viewed the world inhibited the search for solutions to the world’s problems.

Public policy in general — and homeland security policy in particular — may suffer from the same divide.  I know few homeland security policy makers who understand or appreciate the technical dimension of the enterprise; and even fewer homeland security scientists and technicians who value the daily dilemmas of policy makers.

I was brought rudely to this awareness some weeks ago when a journal I’m involved with published several technical articles related to homeland security.  I did not find the articles especially easy to read.  But as I struggled through them, I found them to be models of organized discourse and presentation.  Eventually, with some effort, reading them paid off for me.  I saw a side of homeland security I had been closed to.

Today’s post was written by a colleague who bridges both the scientific and the policy worlds of homeland security.  For organizational reasons, he prefers to remain anonymous.

————————————————————————————————-


Recently the Journal of Homeland Security Affairs published an article called “Policy, Practice, and the Search for Alpha” by Dr. Robert Josefek.  The article provides an overview of five papers that were judged best-in-track and best-in-conference at the 2010 Institute of Electrical and Electronics Engineers (IEEE) Homeland Security Technology (HST) Conference, the tenth annual meeting of this group.

In addition to his review of the scientific papers, Dr. Josefek points to the chasm that often exists between the detailed concerns of scientists and the broader focus of policy makers:

While both ends of the spectrum are important, my observation is that it is sometimes a challenge for these groups [scientists and policy makers] to understand and best benefit from each other. Yet innovations in science and technology can enable policy options that were not previously available and policy goals can drive scientists and technologists to find ways to reach heretofore-unobtainable objectives.

Among the papers reviewed, one in particular illustrates, to my mind, the intersection of policy and science in homeland security-related research.  Daniel and Wietfeld’s article, “Using Public Network Infrastructures for UAV Remote Sensing in Civilian Security Operations,” was selected best in the category of Attack and Disaster Preparation, Recovery and Response.

The paper proposes a method of employing multiple Unmanned Aerial Vehicles (UAVs), controlled wirelessly through a cellular network, to monitor atmospheric plumes from events such as large fires, industrial accidents, and CBRN terrorist attacks.

The authors’ central idea is to use cell towers because the public safety spectrum is extremely limited and unlicensed frequencies (the ISM bands) are often unreliable.  A 2008 GAO report (pdf file) corroborates their claim, citing wireless communications “security and protected spectrum” as one of the critical requirements for integrating unmanned aircraft into the National Airspace System (NAS).

—————————————–

My first impression of their idea was skepticism.  Public safety organizations have come to be wary of the reliability of cell phone systems during disasters, because cell towers can be disabled or call volume can simply overwhelm the ability to even access the system.

Although the paper excludes Full Motion Video (FMV) from consideration, FMV will be an important aspect of the system, and provisions will need to be made to accommodate the bandwidth it requires.  Coupled with this will be the need to develop a seamless handoff among different carriers as the UAV traverses a geographic region.

The Federal Aviation Administration rightly views safety as paramount, and protocols will need to be developed to allow manned and unmanned aircraft to safely coexist.  The public will certainly have privacy concerns over information that is collected and stored.

Finally, public safety remains suspicious of the cellular industry, due in part to problems that arose when Nextel built out a network in the 800 MHz band in the 1990s that ultimately caused significant interference with existing public safety communications.

—————————————–

Feeling there was more to the issue, I reached out to a friend to get another perspective.  He is a former Coast Guard pilot who now runs a company that operates a fleet of surrogate unmanned aerial vehicles. His company provides ground-based users with UAV capabilities without any of the restrictions associated with operating in the National Airspace System, and it has accumulated over 2,000 operational hours in the last two years flying nationally over both congested and rural areas.

Last year, during the Deepwater Horizon oil spill, his company’s systems operated 12 hours per day for more than 100 consecutive days and provided command centers with real-time full motion video over a 10,000-square-mile area covering the waters off Mississippi and Louisiana.

His perspective on Daniel and Wietfeld’s paper differed from mine.  He supports the authors’ position advocating greater leverage of existing cellular infrastructure, reinforced with mobile ad hoc networks (MANETs) and satellite links in order to maintain connectivity in dead spots.  He mentioned that as a Coast Guard officer he supported operations following Hurricane Katrina, and told me:

During Hurricane Katrina wireless air cards in helicopters provided limited airborne connectivity and the solution worked remarkably well at the low altitudes and airspeeds most helicopters were operating at during the response.  In the years since Katrina, we have learned a great deal about the strengths and limitations of using cellular networks to support operational missions and found that augmentation with MANETs and satellite communication is critical to ensure mission reliability.  We have found that use of the 2.4GHz ISM band in rural or open ocean areas works very well, but it is significantly degraded in urban environments or disaster base camp settings.  We have also found that using 5GHz in those same areas adequately addresses the interference issues experienced at 2.4GHz, which serves to reinforce Daniel and Wietfeld’s findings.

He and I did find common ground on one point: the demand for small unmanned aerial vehicles operating at low altitudes over our states and cities is going to multiply over the next few years.  Assigning each an IP address and managing them through existing and augmented wireless infrastructure is the only manageable path.
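To make that path concrete, here is a minimal sketch in Python of what managing a fleet of IP-addressed UAVs across layered links might look like: each vehicle is keyed by its IP address, and a ground controller fails over from the cellular link to a MANET or satellite link when heartbeats stop arriving. All names, addresses, and thresholds are hypothetical; this illustrates the architecture under discussion, not any actual system.

    import time

    # Hypothetical link priorities: cellular first, then the MANET and
    # satellite augmentation described above. The timeout is assumed.
    LINK_PRIORITY = ["cellular", "manet", "satellite"]
    HEARTBEAT_TIMEOUT = 5.0  # seconds of silence before failing over

    class UAV:
        """One vehicle, identified and managed by its IP address."""
        def __init__(self, ip_address):
            self.ip_address = ip_address
            self.active_link = LINK_PRIORITY[0]
            self.last_heartbeat = time.monotonic()

        def heartbeat(self):
            self.last_heartbeat = time.monotonic()

        def check_link(self):
            """Fail over to the next link if the current one has gone quiet."""
            if time.monotonic() - self.last_heartbeat > HEARTBEAT_TIMEOUT:
                idx = LINK_PRIORITY.index(self.active_link)
                if idx + 1 < len(LINK_PRIORITY):
                    self.active_link = LINK_PRIORITY[idx + 1]
                    self.last_heartbeat = time.monotonic()  # grace period
            return self.active_link

    # The fleet is then just a registry keyed by IP address.
    fleet = {ip: UAV(ip) for ip in ("10.0.0.11", "10.0.0.12")}
    fleet["10.0.0.11"].heartbeat()
    for ip, uav in fleet.items():
        print(ip, "->", uav.check_link())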

—————————————–

In order to incorporate UAVs into both civilian and emergency response applications, many groups are going to have to come together.  Among the most critical steps is the FAA’s introduction of NextGen air traffic technologies to accommodate the coexistence of manned and unmanned aircraft.  At the same time that legislation is progressing on FAA reauthorization, Congress is also moving forward to modernize public safety communications.

Spectrum policy will have to be revised to allow safe and reliable control of unmanned vehicles. Among the proposals is building out a nationwide cellular communications network for public safety in the 700 MHz spectrum recently vacated by TV stations.  I do not know whether linking the two would be technically feasible, but the public safety UAV industry might consider investigating use of the new network while it is still in the planning stages.  The alternative is to “beef up” the existing cellular networks to support daily unmanned vehicle operations and ensure operability during times of disaster.

At the very least, part of the Senate version of the FAA reauthorization (S 223) tasks the National Academy of Sciences to study the unmanned aircraft spectrum issue. (Not everyone is pleased with this initiative, however.  See this link.)

Daniel and Wietfeld have helped identify a primary technical issue that homeland security leaders will have to contend with as the UAV industry attempts to move forward.  Policy makers will need to address these issues in developing a workable solution for daily use as well as for times of emergency.  Daniel and Wietfeld’s paper will help policy makers understand the components of a technically sound plan.


April 11, 2011

Happy National Robotics Week

Filed under: Technology for HLS — by Jessica Herrera-Flanigan on April 11, 2011

This week marks the 2nd annual National Robotics Week. The celebration, which technically started on Saturday and runs through Sunday, is designed to:

  • Celebrate the US as a leader in robotics technology development
  • Educate the public about how robotics technology impacts society, both now and in the future
  • Advocate for increased funding for robotics technology research and development
  • Inspire students of all ages to pursue careers in robotics and other Science, Technology, Engineering, and Math-related fields

In honor of the week, I thought I would highlight some of the ways that robots are contributing to our homeland security mission. According to the report First Responder, Homeland Security, and Law Enforcement Robots Market Shares, Strategies, and Forecasts, Worldwide, 2010 to 2016, the market for first responder and law enforcement robotics is expected to reach $3.7 billion by 2016.  Among the areas where robots are and could be providing services to homeland security interests are:

  • Cyber-physical systems security
  • CBRNE/WMD detection
  • Explosive and bomb disarmament
  • UAVs and UGVs for surveillance and border security
  • Underground/tunnel operations
  • Robotic search and rescue
  • Underwater/Coast Guard functions
  • Perimeter security

Experience has shown that, to be successful in the homeland security space, cost-effective robotic systems must move and navigate in the physical world and interact with first responders, law enforcement, and citizens in an effective manner.

Happy National Robotics Week. If you are looking for activities in your area, they can be found here.


March 11, 2011

Googling Disaster – Google Crisis Response

Filed under: Catastrophes,Technology for HLS — by Arnold Bogis on March 11, 2011

Google operates a website, Google Crisis Response, that is designed to provide a wide range of news, information, and other helpful online tools in the wake of a disaster.  Among the services it provides:

  • Organizing emergency alerts, news updates and donation opportunities, and making this information visible through our web properties
  • Building engineering tools, such as Person Finder and Resource Finder, that enable better communication and collaboration among crisis responders and victims (see the sketch after this list)
  • Providing updated satellite imagery and maps of affected areas to illustrate infrastructure damage and help relief organizations navigate disaster zones
  • Supporting the rebuilding of network infrastructure where it has been damaged to enable access to the Internet
  • Donating to charitable organizations that are providing direct relief on-the-ground
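One detail worth calling out: Person Finder exchanges records in an open XML format, PFIF (People Finder Interchange Format), which is what allows different organizations’ missing-person databases to interoperate after a disaster. As a rough illustration, the Python sketch below parses a hand-written record shaped like a PFIF document; the namespace and field names follow the published PFIF schema, but treat the specifics here as assumptions rather than a definitive account of Google’s service.

    import xml.etree.ElementTree as ET

    # A hand-written record shaped like a PFIF document (illustrative only).
    PFIF_NS = "http://zesty.ca/pfif/1.4"
    sample = """
    <pfif:pfif xmlns:pfif="http://zesty.ca/pfif/1.4">
      <pfif:person>
        <pfif:person_record_id>example.org/person.1</pfif:person_record_id>
        <pfif:full_name>Jane Doe</pfif:full_name>
        <pfif:home_city>Sendai</pfif:home_city>
      </pfif:person>
    </pfif:pfif>
    """

    root = ET.fromstring(sample)
    for person in root.findall(f"{{{PFIF_NS}}}person"):
        print(person.findtext(f"{{{PFIF_NS}}}person_record_id"),
              person.findtext(f"{{{PFIF_NS}}}full_name"))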

This strikes me not only as a powerful example of the potential impact of such an aggregator on disaster response and recovery, but also as a potential model for future collaboration across all facets of homeland security.

The page dedicated to the Japanese earthquake and tsunami can be found here:

http://www.google.com/crisisresponse/japanquake2011.html

February 15, 2011

There’s an app for that.

Filed under: Technology for HLS — by Christopher Bellavita on February 15, 2011

One day last year, Richard Price and a few co-workers from his agency’s information technology (IT) group were eating lunch at a deli. He heard a siren and briefly wondered where the emergency was.

The siren got louder and closer. In a few minutes, a fire engine pulled up and parked in front of the deli. That’s when Price — who is the fire chief for California’s San Ramon Valley Fire Protection District — learned the San Ramon engine was responding to a cardiac arrest call next door to the deli.

Price was on duty, in uniform, with a defibrillator in his car. One of the people he was eating lunch with was a paramedic. The emergency was a few feet away, but no one knew until the engine showed up. (Price carries a pager, but he’s typically not notified of medical emergencies.)

Cardiac arrest means the heart stops beating. Once that happens to you, you have about 10 minutes to live. After that, there is very little chance you’ll survive. Each year, over 300,000 people in the United States die from sudden cardiac arrest. Many of those people die needlessly. But even with all the advances in medicine, national survival rates are still less than 8%.

CPR (cardiopulmonary resuscitation) buys time to allow paramedics to arrive and provide advanced care. Survival rates can exceed 80% when CPR is performed and an automated external defibrillator (AED — a small machine that shocks the heart back into normal rhythm) is used in the first few minutes after a cardiac arrest.

Price was very bothered that he had no idea someone just a few steps away from him needed help. He promised this would not happen to him again, or to anyone else in his community. He spent the rest of that afternoon with his IT staff brainstorming and drawing diagrams on deli napkins.

The result of that incident is an iPhone application — called Fire Department — that gives regular citizens the chance to provide life-saving assistance to victims of Sudden Cardiac Arrest. The application helps dispatch CPR trained citizens to cardiac emergencies occurring nearby.

Here’s how it works: Once you download the free iTunes app (available here), you can be notified if you are near someone having a cardiac emergency.  Notifications are made — at the same time paramedics are dispatched — to people who are CPR trained and who have indicated they are willing to assist during a sudden cardiac arrest emergency.

The notifications will only be made if the victim is in a public place and only to potential rescuers who are in the immediate vicinity of the emergency. The application also directs the citizen rescuers to the exact location of the closest public access AED.
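Stripped of the dispatch-system plumbing, the core of that logic is a pair of proximity queries. Here is a minimal sketch in Python, with made-up coordinates and a made-up notification radius; it is a guess at the shape of the computation, not the app’s actual implementation, which would also have to honor the public-place restriction and each responder’s opt-in.

    from math import radians, sin, cos, asin, sqrt

    def haversine_km(lat1, lon1, lat2, lon2):
        """Great-circle distance between two points, in kilometers."""
        dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
        a = (sin(dlat / 2) ** 2
             + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2)
        return 2 * 6371.0 * asin(sqrt(a))

    # Hypothetical registered responders and public AEDs: (id, lat, lon).
    responders = [("r1", 37.7793, -121.9780), ("r2", 37.8044, -122.2712)]
    aeds = [("library", 37.7790, -121.9775), ("gym", 37.7810, -121.9900)]

    def notify(lat, lon, radius_km=0.4):
        """Find CPR-trained responders near the incident and the closest AED."""
        nearby = [r for r in responders
                  if haversine_km(lat, lon, r[1], r[2]) <= radius_km]
        nearest_aed = min(aeds, key=lambda a: haversine_km(lat, lon, a[1], a[2]))
        return nearby, nearest_aed

    print(notify(37.7795, -121.9782))  # pages r1, points at the library AED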

Currently, the application works only within the San Ramon Valley fire district in California. But Chief Price eagerly wants to share the application “with other communities around the globe.”  The current version works on the iPhone; Price’s agency is developing versions for other smartphones.

You can see a short video explaining the app at the end of this post. You can also go to http://firedepartment.mobi for more information.

The first time I heard about the app, the public safety group I was with — while strongly supportive of the idea — had several questions about potential downsides and liabilities of the application. Price convinced the audience that his agency was entering this new dimension of citizen engagement with its organizational eyes open. They have weighed the potential benefits against the liabilities and are willing to accept the risks if it means saving more lives.

What is the connection between the Fire Department app and homeland security?

If homeland security has to do with “all hazards,” then surely there must be room within the enterprise for an idea that can help reduce some of the 300,000 deaths caused each year by sudden cardiac arrest.

Just as important, Fire Department is one more example of a surging technology that can sling angry birds into enclaves of thieving pigs, or overthrow a dictator, or save the life of a cardiac arrest victim who did not have to die.

I wonder what else the technology can do?

Here’s the video that shows what the San Ramon Fire Department did with it.

December 10, 2010

Mr. Rogers’ neighborhood and the place of homeland security

Filed under: Budgets and Spending,Congress and HLS,Technology for HLS — by Philip J. Palin on December 10, 2010

Representative Harold “Hal” Rogers (R-Kentucky).  Picture by the Associated Press

Earlier this week the House Republican Steering Committee and House Republican Conference tapped Harold “Hal” Rogers as the next Chairman of the House Appropriations Committee.  Selection of the senior member of Kentucky’s House delegation was greeted by protests from Left and Right.

Mr. Rogers previously served as both chairman and ranking-member of the Homeland Security subcommittee of the House Appropriations Committee.  He also served on the transportation and defense appropriations subcommittees. (See his official biography.)

Elected in 1980 to represent one of the nation’s most economically challenged congressional districts, Mr. Rogers has been effective at directing federal funds to a wide array of local wants and needs.  As such he has been assailed by the Lexington Herald-Leader (KY), the New York Times, and others as the “Prince of Pork.”  This accusation headlined most of the news coverage of his pending role as chairman of the Appropriations Committee.  Mr. Rogers has joined other GOP leaders in pledging no new earmarks.

Earmarks constitute less than 0.5 percent (half of one percent) of the federal budget, so I perceive outrage over them as one of those symptoms that complicate diagnosis and treatment of the underlying disease.  In this particular case the pork barrel critiques of Mr. Rogers also obscure his substantive legislative record and specific interest in homeland security.

Full disclosure: from 2005 through 2007 I was Chairman of the Board of a company with a facility in Mr. Rogers’ congressional district.  As such I often participated in local economic development activities and met with Mr. Rogers or his staff.  During several of these discussions, homeland security was a topic.  While we would not have turned down an earmark sponsored by Mr. Rogers, the company I served did not receive such support.

From this experience I came away with three strong impressions:

1. Mr. Rogers is an accessible and intelligent man.  He has a particular interest in homeland security and especially in how science and technology can be a force-multiplier.  In my first encounter with the Congressman he quizzed me on homeland security like the former prosecutor he is.  He knows the issues. He understands the complications. He is sophisticated in his strategic approach to homeland security challenges.  He listens.  This personal impression was confirmed by watching him question witnesses in subcommittee hearings. 

2. Mr. Rogers is consistently bipartisan in his approach.  The old saw says there are three parties on Capitol Hill: Republicans, Democrats, and Appropriators. While Mr. Rogers is certainly conservative in most ways, appropriators tend to be pragmatic and less partisan.  This approach served him well in the Minority, and it is likely to mark his return to the Majority and to leadership of the full Appropriations Committee.  Chairing Appropriations has been a long-time personal ambition.  On December 31 he will turn 73.  Mr. Rogers is not looking to squander this opportunity.  Leaving a meaningful legacy is one of the more constructive motivations.

3. Like all members of Congress and most busy professionals, Mr. Rogers is — at least in part — a creature of his staff and contacts.  Every staff member I met was smart, competent, and wildly over-worked.  Both on Capitol Hill and back in the District, what I observed was a tendency for the most narrowly self-interested people to be the most assertive and effective communicators, proposers, and planners.  On several occasions I saw senior public servants choke and defer when Mr. Rogers or his staff were entirely prepared to listen to alternatives.  In retrospect I was one of a whole host of folks who should have — could have — pushed harder on key issues of homeland security.  My hesitation — our hesitation, or cynicism, or laziness, or disdain — just offers opportunity to others who are more willing and ready to claim a Congressman’s attention.

Because homeland security — the mission, not the budget per se – is important to me, I will be glad to see Mr. Rogers become Chairman of the House Appropriations Committee.   He is more interested in and better able to meaningfully engage homeland security than any other serious candidate for the leadership role.  

As always in democracies — even those with republican constitutions — the quality of leadership will reflect and largely depend on the quality of those who choose to seriously engage the process.

November 2, 2010

The perfect citizen?

Filed under: Aviation Security,Technology for HLS — by Christopher Bellavita on November 2, 2010

Today’s guest blogger is S. Francis Thorn.  Thorn teaches homeland security at a university in the United States.  He has a military and intelligence background.   This is his first post for Homeland Security Watch.

—————————————————————————————————————-

First off, the “picture” is fake. It is digitally manufactured.

This “art” is taken from a Wired.com article regarding promotional marketing for a medical imaging company.  With technology being so pervasive, dare I say promiscuous, it may be more common to see medical imaging technology -or technology in general – being cross-pollinated with other disciplines for different uses. After all, if GPS is good for munitions finding their target, it is also good for helping people find the nearest hospital.

That said, the recent concerns surrounding TSA screening techniques are an indication that further discussion is necessary, especially as it relates to the pervasive use of technology and its impact on privacy. When a commercial airline pilot is willing to risk his job — during one of the worst economic periods in American history — over TSA screening techniques, this pilot may be saying ‘I’m no longer willing to ride in the back of the bus.’ And should we blame him? TSA itself has abused the technology.

Additionally, how much confidence does DHS/TSA leadership inspire when a ‘do as I say, not as I do’ security model is projected?

At a recent event at JFK International Airport, where Homeland Security Secretary Napolitano was showcasing the new Advanced Imaging Technology (AIT), she apparently did not participate in demonstrating the efficacy of the technology, but instead used “volunteers.”

But let’s not skirt the main issue – which is protecting the American flyer from terrorism. Let there be no doubt, the threat is real.

In the context of threats to U.S. Airlines, there may be some common denominators – like citizenship (…or the citizenship of packages). Poor Juan Williams…

Flight 175:

Marwan al-Shehhi (United Arab Emirates)

Fayez Ahmad (United Arab Emirates)

Mohald al-Shehri (Saudi Arabia)

Hamza al-Ghamdi (Saudi Arabia)

Ahmed al-Ghamdi (Saudi Arabia)

Flight 11:

Mohamed Atta (Egypt)

Walid al-Shehri (Saudi Arabia)

Wail al-Shehri (Saudi Arabia)

Abd al-Aziz al-Umari (Saudi Arabia)

Satam al-Suqami (Saudi Arabia)

Flight 77:

Hani Hanjur (Saudi Arabia)

Khalid al-Mihdhar (Saudi Arabia)

Majid Muqid (Saudi Arabia)

Nawaf al-Hamzi (Saudi Arabia)

Salem al-Hamzi (Saudi Arabia)

Flight 93:

Ziad Jarrahi (Lebanon)

Ahmad al-haznawi (Saudi Arabia)

Ahmad al-nami (Saudi Arabia)

Saeed Alghamdi (Saudi Arabia)

Flight 63:

Richard Reid (Great Britain)

Flight 253:

Umar Farouk Abdulmutallab (Nigeria)

True, Americans can be radicalized domestically and access various transportation systems — but they can also join the U.S. Army (here and here) or get invited to speak at Pentagon luncheons….

One challenge with aviation security — as last week’s air cargo incident illustrated — is that there is a significant international aspect. The U.S. has integrated itself into the international system (i.e. globalization) to such an extent that external security threats are having an impact on internal freedoms. In the context of aviation security and its effect on privacy, the conversation regarding America’s relationship with the international community has been anemic.

For example, if individuals are traveling from overseas to kill Americans, is it appropriate to revisit programs like our visa waiver program before placing tighter security restrictions on the internal movements of American citizens? Internationalism, in many ways, is antithetical to the American ethos.

For those curious about how America might interact with the global community, President George Washington’s Farewell Address is a necessary primer. As a suggestion, pay particular attention to Washington’s counsel “to steer clear of permanent alliances with any portion of the foreign world.” Essentially, with certain caveats, Washington’s prescription for preserving American freedom is for the United States to interact with the global community in the most detached manner possible.

In the context of America’s approach to aviation security, or national/homeland security writ large (i.e. international security partnerships, alliances, and collaboration), it seems a question we need to answer as a nation is whether Washington’s counsel remains relevant or has become obsolete.

For those who consider Washington’s counsel obsolete — strike a pose.

June 4, 2010

A Review: Skating on Stilts: Why We Aren’t Stopping Tomorrow’s Terrorism

In 2005, Stewart Baker joined the Department of Homeland Security as Assistant Secretary for Policy under Secretary Michael Chertoff. The position, which evolved from the Assistant Secretary for Border and Transportation Security Policy and Planning position, has the following responsibilities, according to the DHS website:

  • Leads coordination of Department-wide policies, programs, and planning, which will ensure consistency and integration of missions throughout the entire Department.
  • Provides a central office to develop and communicate policies across multiple components of the homeland security network and strengthens the Department’s ability to maintain policy and operational readiness needed to protect the homeland.
  • Provides the foundation and direction for Department-wide strategic planning and budget priorities.
  • Bridges multiple headquarters’ components and operating agencies to improve communication among departmental entities, eliminate duplication of effort, and translate policies into timely action.
  • Creates a single point of contact for internal and external stakeholders that will allow for streamlined policy management across the Department.

Baker would hold the position for the next four years, tackling a variety of issues from border and travel to cybersecurity and the Committee on Foreign Investment in the United States (CFIUS) to bioterrorism.  In his upcoming book, Skating on Stilts: Why We Aren’t Stopping Tomorrow’s Terrorism, Baker offers an intriguing view of our homeland security posture that ties back to the central theme that technology is both our savior and our enemy, as it empowers not only us but our foes.  Coming from Baker, who has been described by the Washington Post as “one of the most techno-literate lawyers around,” the analysis of homeland security technology through a policy/legal prism is refreshing.  This is not a Luddite’s view of why technology harms, but an expert’s finely woven story of “how the technologies we love eventually find new ways to kill us, and how to stop them from doing that.”

A subtheme throughout the book is that information sharing, or the lack thereof, has hindered our nation’s efforts to fight terrorism, especially when “privacy” has played a role.  In setting up a discussion of what led to his time at DHS, Baker recounts some of the failures leading up to 9/11, including the information sharing wall put up at the Department of Justice between intelligence and law enforcement elements of the agency, as well as challenges at the Foreign Intelligence Surveillance Court. His is the view of someone who has spent time in the intelligence world, as General Counsel of the National Security Agency and as General Counsel of the Robb-Silberman Commission investigating intelligence failures before the Iraq War. The account dives into the intricacies of Justice and its overseers, as well as how bureaucracy and personalities can so easily define our government’s most sensitive policies.

The book then looks at his days at DHS and attempts to strengthen border and travel programs and policies for acronym-named programs, including Passenger Name Records (PNR), the Visa Waiver Program (VWP), Electronic System of Travel Authorization (ESTA), Western Hemisphere Travel Initiative (WHTI), and Computer Assisted Passenger Pre-Screening System II (CAPPS II),  among others.  If you have ever doubted Washington’s love of acronyms and initialisms, this read will certainly change your mind.

In evaluating efforts in the aviation space, Baker is critical of a number of groups that he deems to have stood in the way of the Department’s mission during his tenure, including the private sector, European governing bodies, bureaucrats, Congress, and privacy/civil liberties groups, all of whom he argues are all about the status quo and not open to change.  Some of his criticisms are valid while others seem to simplify the views of the various actors.  For example, in dismissing some of the tourism industry’s concerns related to travel policies, he argues that the industry did not want innovation in government security on the border. Having been in the trenches at the U.S. House Homeland Security Committee during many of these debates, I would argue that the balancing of the numerous parties’ interests and concerns was not always that simple or easy to discern, especially when assessing the right security path forward.  Some programs mentioned in the book, such as WHTI, succeeded, in part, because they were implemented once necessary infrastructure had been deployed.

His strongest concerns are reserved for privacy and civil rights advocates and the government policies they either tout or hate.  There is a great deal of skepticism toward “hypothetical civil liberties” and “hypothetical privacy concerns” unaccompanied by evidence of demonstrated abuses by the government. He cites numerous incidents, some of which certainly demonstrate the tension inherent in privacy and security co-existing.  A few of the examples he uses have even been explored here at HLSWatch, including complaints about whole body imaging machines in airports.  See, e.g., The Right to Be Left Alone (October 27, 2009) and “Where are all the white guys?” (November 10, 2009). Reading the book, privacy and civil liberties supporters may find it hard to square Baker’s call for imagination in tackling homeland security policy and decision-making with the absence of a similar call for creative thinking about how those policies and decisions will affect privacy and civil liberties.

The book goes on to describe how the Department and Administration tackled (or failed to tackle) cybersecurity and biosecurity and the differences between the approaches. In both sections, privacy and information sharing are undercurrents, though we also see some interesting discussions of such topics as patent protections, self-regulation, and the evolution of security in each of these areas.  The discussions are intriguing and provide both a history and analysis of why we are where we are on those issues.   The cybersecurity and related CFIUS discussion brought back some memories to this self-proclaimed cybergeek, including some of my first interactions with Baker when he was in private practice and I was at the Justice Department.

One last observation: while the focus of the book is obviously on the time Baker served at the Department under Secretary Chertoff, it leaves much to the imagination about the work Secretary Ridge and his team undertook, from their early days in the White House after 9/11 until the changing of the guard to Secretary Chertoff, and how that may have contributed to some of Secretary Chertoff’s and Baker’s successes, challenges, and mindset.  In addition, despite the focus on privacy and civil liberties, there is little mention of the other DHS offices, including the Privacy, Civil Liberties, and General Counsel’s offices, which may have been engaged in many of the battles noted by Baker. The book is not lacking in detail or intrigue because of these exclusions, though I wonder how they affected the decisions of Baker and his policy team. Perhaps these items are the subject of another book for another time.

Stewart Baker provides insight into a D.C. perspective on homeland security and the struggle of a Department to tackle technology, privacy, and information sharing. The book provides some valuable lessons for those who are on the frontlines of homeland security policy as they attempt to tackle future threats. For an observer of homeland security development, Skating on Stilts: Why We Aren’t Stopping Tomorrow’s Terrorism is a must-read. The book will be released on June 15th and is available for pre-order on Amazon.com.  In the meantime, excerpts from the book and other missives from Baker can be found at a blog of the same name, http://www.skatingonstilts.com/.

May 12, 2010

The Big Ask

Filed under: General Homeland Security,Intelligence and Info-Sharing,Technology for HLS — by Mark Chubb on May 12, 2010

Tomorrow afternoon, I am scheduled to participate in a panel discussion on crisis management and technology at Portland State University’s Mark O. Hatfield School of Government. The event, sponsored by the campus chapter of Pi Sigma Alpha, the political science honor society, asks what role technology can or should play in helping us respond to 21st century crises.

The organizers tell me their focus remains squarely on crisis management not technology. The question in their minds is not whether technology has a place in managing crises, but how we should define that place. How, they wonder, will we know whether or not technology is helping us? From a practitioner’s perspective, this struck me as a very good question, and one that does not get asked often enough.

From where I sit, crisis management succeeds or fails on how well leaders manage its four phases, which I define as:

  • Awareness
  • Ambiguity
  • Adaptation
  • Accountability

Awareness involves signal detection, which in turn depends upon the salience of signals to those responsible for detecting and responding to them. Technology can improve signal-to-noise ratios, but it may dull the sense of salience as people become overwhelmed by inputs, especially if those responsible for designing or operating the system lack contextual intelligence (see Nye 2008).
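The trade-off is easy to see in miniature. As a hedged illustration (every number here is made up, and no real alerting system is this simple), the Python sketch below flags any reading more than a few standard deviations above its recent baseline: raise the threshold and the noise disappears but weak signals go unnoticed; lower it and everything looks like a signal.

    from statistics import mean, stdev

    def alerts(readings, window=5, threshold=3.0):
        """Flag readings more than `threshold` standard deviations above
        the mean of the previous `window` readings."""
        flagged = []
        for i in range(window, len(readings)):
            base = readings[i - window:i]
            mu, sigma = mean(base), stdev(base)
            if sigma and (readings[i] - mu) / sigma > threshold:
                flagged.append(i)
        return flagged

    noisy = [10, 11, 9, 10, 12, 13, 11, 45, 10, 9]
    print(alerts(noisy))                 # [7]: only the real spike
    print(alerts(noisy, threshold=1.0))  # [5, 7]: ordinary wobble alarms too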

Ambiguity, not uncertainty, is the dominant feature of complex systems and their relationships with their environments, never more so than when these systems are in crisis. Successful decision-making in crisis situations depends not so much on the ability to gather information, or even to organize it, as on seeing the meaning or patterns hidden within it. Humans remain far better than cybersystems at reconciling the relevance of inconsistent, incomplete, competing, and even conflicting information. Ensuring such systems support the strengths of the people responsible for making decisions, rather than using them to overcome weaknesses, seems to me an essential step in preventing these systems from compounding rather than correcting our problems.

Most crises are adaptive, not technical, challenges (Heifetz & Laurie 2001). Although many crises present us with problems that require technological assistance, their hallmark remains the need to see our relationship with the problem and its environment differently from the way we did before our situation became apparent. Dietrich Dörner (1996) demonstrated that most of our trouble managing adaptive challenges arises not from their scope or scale so much as from our inability to see them as complex webs of interdependent variables that interact in subtle but important ways. His experiments demonstrate that we are particularly ill-equipped to manage situations in which these interactions produce exponential rather than quasi-steady changes in the situation. He further concludes that, when confronted with such problems, we have an altogether too predictable tendency to direct our attention in ways that are either too narrow and fixed or too broad and fleeting to do much good. Adaptive challenges, then, require us to keep the big picture in perspective and to engage others in its management. This is not something that cybersystems necessarily help us do better, as they engage people with a representation of the problem, not its essential elements.

In the end, every crisis demands an accounting of what went wrong and, if we are truly honest and maybe a bit lucky, what went right as well. Such judgments are inherently subjective, just as their conclusions are (or should be) intensely personal. Getting people to accept responsibility, learn from their experiences, and take steps to strengthen the relationships they depend upon to resolve crises is an innately human process. Cybersystems may help us engage one another over great distances in real time and keep records of our interactions, but they do not necessarily clarify our intentions or make it any easier for us to acknowledge the hard lessons we must learn if we are to grow.

Despite my concerns, I remain optimistic that technology can help us improve the effectiveness if not the efficiency of crisis interventions. But only if we do not ask too much of it or too little of ourselves along the way.

References:

DÖRNER, D. (1996). The Logic of Failure. New York: Basic Books.

HEIFETZ, RA & LAURIE, DL (2001). The Work of Leadership. Harvard Business Review. Cambridge, Mass.

NYE, Jr., JS (2008). The Powers to Lead. New York: Oxford University Press.

March 30, 2010

The Open Question

The open source intelligence debate took on new meaning for me on Sunday night. Shortly after 8:00 PM, a loud explosion shook houses all across the east side of Portland, Oregon. What ensued provides new insights not only into how intelligence is generated, but also illustrates some of the new challenges we face in managing the collection and analysis process.

Within minutes, more than 50 calls reporting the explosion came into the local 911 center. Police and fire units responded to investigate, but found nothing to indicate an emergency. No burning or collapsed buildings, no casualties, no obvious signs of damage or disruption were evident anywhere.

Public safety officials’ prompt response to this incident, like their response to another big boom about two weeks earlier in the same area, provided little comfort, though, because no one could confirm what had caused the explosion. As you might expect, this opened the door to speculation as much as it opened the door to investigation.

Within minutes, subscribers to the microblogging service Twitter had invented and agreed to use the #pdxboom hashtag to track reports. Within half an hour, an ad hoc collaboration started on Google Maps was tracking and color-coding these reports in an effort to locate the source of the noise. And more than 20 wiseguys had even created and logged into an event marking the occasion on the social networking site Foursquare using their wireless mobile devices.

The theories spawned by these efforts ran the gamut from the serious (an earthquake boom) to the nonsensical (unicorns fighting or a house falling on a wicked witch). But the map generated by the more serious reports painted a much more compelling picture of the event. Efforts by local officials and media outlets to isolate the source by consulting the National Weather Service, the local Air National Guard fighter wing and NORAD, the U.S. Geological Survey and various utilities likewise proved fruitless.

Yet the public remained undeterred. Hundreds of people logged in over the next several hours to record their experience of the event. Before long some patterns became evident.

The next day, aided by daylight, armed with these online contributions, information from the initial 911 reports and information gathered following the previous incident, investigators located the site of the explosion along a riverbank near downtown. Fragments of a PVC pipe bomb were also recovered.
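Reducing hundreds of such pins to a candidate location is, at its crudest, a weighted average. The Python sketch below, using entirely made-up coordinates and loudness scores, estimates a source as the loudness-weighted centroid of the reports; the volunteers did something similar by eye, watching where the colored markers clustered.

    # Hypothetical geotagged reports: (lat, lon, reported loudness 0-10).
    reports = [
        (45.512, -122.658, 9),
        (45.509, -122.641, 8),
        (45.523, -122.654, 6),
        (45.515, -122.620, 4),
        (45.530, -122.700, 2),
    ]

    def estimate_source(reports):
        """Loudness-weighted centroid: louder reports pull the estimate harder."""
        total = sum(w for _, _, w in reports)
        lat = sum(lat * w for lat, _, w in reports) / total
        lon = sum(lon * w for _, lon, w in reports) / total
        return lat, lon

    print(estimate_source(reports))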

What did we learn from this incident? Well, for starters, people want to be of assistance, even in a town where the police are not currently held in very high esteem due to two recent officer-involved shootings. Second, people will seek out ways to make sense of confusing experiences, which more often than not includes sharing their personal observations and perspectives in a way that gives them meaning, whether or not they produce a plausible explanation. Finally, the speed with which this process of sharing information about our common experience advances will exceed anything we saw before the dawn of the Information Age.

When we speak of intelligence we often conflate its epistemic and ontological meanings. From an epistemic perspective, intelligence involves identifying what we know, filling in gaps and discovering missing elements that will help us build a coherent picture of the situation. Interpreting this picture involves another aspect of intelligence. Ontology addresses how we synthesize data by dictating the sorts of frames we apply to create a shared sense of understanding.

Neither of these approaches alone, however, answers for us the bigger and as yet unanswered and therefore open question: “What was the intention or purpose of the person who built and detonated this device?”

We often assume that analysis and synthesis will lead us to the answers we seek to teleological (thanks Phil) — as opposed to epistemic or ontological — questions. Knowing what’s on the minds of those who seek to disrupt our lives, not in some abstract ideological or theological sense, but in the very tangible sense that links their intentions and actions, might actually help us interdict such threats before they emerge. If someone figures out a way to answer this question through crowdsourcing, we could make real progress against the threats we face.

March 18, 2010

Could terrorists on the internet be the next dot com bubble?

Filed under: Radicalization,Technology for HLS,Terrorist Threats & Attacks — by Christopher Bellavita on March 18, 2010

Monday’s post about whether the internet is creating terrorists ended with the observation “Jihad Jane is likely not an anomaly but a troubling preview of the future of terrorism.”

The Los Angeles Times article by Bob Drogin and Tina Susman cited in the same post, conveys a similar concern, aided by multiple anecdotes.

I think both essays illustrate the emerging dominant view: The “Internet is making it easier to become a terrorist.”

————————

A few months ago, I attended a lecture about terrorism by David Tucker, a colleague at the Naval Postgraduate School.   In a passing comment, Tucker suggested there might be less to the perceived relationship between the internet and radicalization than meets the eye.

There was an immediate — and in some ways intellectually hostile — reaction by the audience of public safety leaders. They thought the role the internet plays in radicalization was so obvious that questioning it was akin to — well, challenging the Creation story in the Book of Genesis.

OK, that’s an exaggeration. But Tucker’s thought was not well received.

Tucker then did what he often does with such controversies. He looked for data.

In an article I will synthesize below, Tucker found “some evidence to suggest that the web sites do aid in radicalization.” But he cautions the data is limited and may be misleading. Importantly, without a critical analysis of claims and evidence purporting to demonstrate that the internet is creating terrorists, we may end up wasting resources on the wrong problem, and ignoring potentially more effective ways to mitigate the creation of additional terrorists.

Tucker concludes his article saying there is very little evidence to support the claim “the internet is transforming how terrorists interact …. Perhaps over time, the evidence will emerge. In the meantime, we are stuck with the difficult task of focusing ‘on the social and religious networks’ from which extremists emerge if we want ‘to interrupt or fragment face-to-face recruitment.’”

————————
Below is an extended excerpt (quasi-crypto-mashup may be a better term) of Tucker’s January 2010 Homeland Security Affairs article, “Jihad Dramatically Transformed? Sageman on Jihad and the Internet.”

For this post, I have not included the footnotes or page references from the original document. Nor have I followed the normal convention about the use of ellipses. But I have emphasized parts of the article that I think are especially relevant to the internet-terrorism theme.

The complete, properly referenced, emphasized and formatted article can be found at this link.
————————

“Jihad Dramatically Transformed? Sageman on Jihad and the Internet.”

In his book Leaderless Jihad, Marc Sageman claims…that Jihad in the modern world is changing from a centrally organized and structured activity into a more dispersed, decentralized movement in which small groups self-organize to carry out attacks….

[Not] enough attention had been paid to the claims that Sageman made about the role of the internet in the development of what he calls the leaderless Jihad movement….

Sageman claims it is the internet that “has dramatically transformed the structure and dynamic of the evolving threat of global Islamic terrorism by changing the nature of terrorists’ interactions… Starting around 2004, communication and inspiration shifted from face-to-face interactions…to interaction on the internet.”

Assessing Sageman’s claim is important because if he is right, it would suggest that we switch attention and resources to combating digital recruitment. If he is wrong, then this would be a waste of resources.

Sageman says the interactivity of the internet (particularly forums and chat rooms) is changing human relationships in a revolutionary way and hence, he implicitly assumes, must be changing the way those who become extremists interact online. In support of this claim, Sageman cites one article and six terrorism cases he says show the revolutionary impact of the internet and substantiate his claim that the internet “has dramatically transformed the structure and dynamic of the evolving threat of global Islamic terrorism.”

[Tucker then argues that one article and six cases are too small a sample to support large-scale generalizations. Small samples are a persistent research problem.]

Sound generalization is always a problem in terrorism studies because terrorism is such a rare event that we seldom have a large number of well-understood cases to base our claims on. Any scientific or even simply reasonable and candid analysis of terrorism should acknowledge this problem, however, and be modest in the claims it makes.

Sageman considers … the effect of the internet on human relations in general. He states that “people’s relationships are being completely transformed through computer-mediated communications.” Sageman offers no support for this claim…. He proceeds, however, to draw conclusions about terrorism from these undocumented claims, arguing that the trust and intensity of emotion necessary for the sacrifices that terrorism requires can be generated online. At this point he states that “online feelings are stronger in almost every measurement than offline feelings. This is a robust finding that has been duplicated many times.”

In support of this broad claim, Sageman cites one article: a review of research on the effects of the internet on social life.

[However], the article does not state that “online feelings are stronger in every measurement than offline feelings” or that this is a robust finding. It states rather that in two experiments “those who met first on the Internet liked each other more than those who met first face-to-face.” (It also reports that, depending on assumptions about the social context, interactions on the internet can be negative, displaying lack of trust, for example.) Overall, the article offers no support for the claim that the internet is transforming social life. …

Instead of supporting Sageman’s claims, the article suggests that Sageman is wrong in stressing the transformational character of the internet. It reports that people tend to take online relationships offline into the non-internet world, for example. This suggests that whatever the internet’s advantages, individuals still prefer face-to-face social life to online social life. Indeed, the article reports that “international bankers and college students alike considered off-line communication more beneficial to establishing close social (as opposed to work) relationships.”

Other research on the social effects of the internet published since the one article that Sageman refers to does not support Sageman’s claim that the internet is transforming people’s relationships. First, the internet does not appear to be displacing people’s social activity. People who use the internet are not less likely to have other forms of social contact. Internet use “appears to expand activity engagement rather than replace previous personal channel contacts [including face-to-face contact] or media use.”

This research suggests that if Islamic extremists are replacing face-to-face contact with internet mediated contact, as Sageman claims, then they are doing something that others who use the internet are not doing.

….If research on internet use does not support Sageman, neither does the other evidence he uses, the six cases he refers to in his book.

After presenting [evidence about the six cases] in narrative form, Sageman states “this clearly shows the change from offline to online interaction in the evolution of the threat.”

In fact, it does not.

In two of the six cases that Sageman mentions, he tells us only that the terrorists got support from the internet (an inspirational document in the case of the Madrid bombing and bomb-making instructions in the case of the Cairo bombing).

There is nothing new here. Terrorists did not begin using the internet for support in 2004. The 9/11 hijackers used it, as did others before them. More important, “support” is not “interaction,” and it is interaction among terrorists that Sageman says the internet has “dramatically transformed.”

Interaction did occur on the internet in the other four cases, but it also occurred face-to-face. How do we know which kind of interaction was more important? If terrorists are meeting as they have always done and then communicating online, which would be consistent with research on internet use, this does not suggest a dramatic change in terrorists’ interactions. It is important to note, then, that only in one case (the German bombing) does Sageman tell us the terrorists met first online.

The reason Sageman does not mention terrorists meeting first online in the other cases is that it did not happen. In all the other cases, it appears the terrorists met first face-to-face.  In fact, the evidence suggests terrorists tend to be friends, acquaintances or relatives, who then become radicalized and carry out an attack.

What about cases that have occurred since Sageman’s book appeared in 2008? There have been a number of cases over the past several years.  Full details on these cases are not available but we can look at what we know about a few of the more prominent ones. [And Tucker's article reviews those cases]

The information we have on these recent plots is sketchy and limited, but none of it suggests anything like what Sageman claims. Internet images sometimes appear to assist, if not initiate, the movement to extremism. Chat rooms play a role but are rarely the place terrorists first meet; face-to-face contact predominates. Mosques and other physical gathering places figure more prominently than the internet. In this limited sample, the internet appears to be a useful but by no means transforming or even dominant means of mobilizing recruits for extremism.

In showing the complex interaction of social relations, the internet, and recruiting, all of these cases bear a marked resemblance to the summary description one analyst of the Madrid bombing has offered of those who carried out that attack:

It was in Mosques, worship sites, countryside gatherings and private residences where most of the members of the Madrid bombing network adopted extremist views. A few adopted a violent conception of Islam while in prison. The internet was clearly relevant as a radicalization tool, especially among those who were radicalized after 2003, but it was more importantly a complement to face-to-face interactions.

Further evidence suggesting that Sageman’s claims are wrong comes from research done on the recruitment of foreign fighters from the Middle East and North Africa.

Analysis of data captured in Iraq shows that 97 percent of a group of 177 foreign fighters met their recruitment coordinator “through a social (84 percent), family (6 percent) or religious (6 percent) connection.” Only 3.4 percent of the 177 foreign fighters mentioned the internet.

Furthermore, when countries of origin for the foreign fighters were compared to the number of internet users in those countries, “more internet users correlated with lower numbers of fighters.”

Finally, analysis shows no correlation between the rate at which a country’s residents access extremist web sites and the number of foreign fighters that country produces. If the internet were an important tool of mobilization and recruitment, we would expect to see such a correlation.
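To see concretely what such a test involves, here is a minimal sketch in Python. The per-country numbers below are invented purely for illustration; they are not the captured data described above, and the function simply computes a standard Pearson correlation coefficient:

```python
# Illustrative only: hypothetical per-country figures, NOT the captured data.
# If the internet drove recruitment, extremist-site access and fighter
# counts should rise together (positive r).
site_access = [120, 45, 300, 80, 210, 15]  # hypothetical site visits (thousands)
fighters    = [8, 30, 5, 22, 9, 40]        # hypothetical fighter counts

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

print(f"r = {pearson(site_access, fighters):.2f}")
# A value near zero (or negative, as the research found) undercuts the
# claim that the internet is the mobilizing channel.
```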

What holds true for the Middle East and North Africa might not hold true for other places with greater general rates of internet access and less of a supporting social and cultural network for extremists to rely on. In these places, one might argue, the internet might be the only place where would-be radicals could find the contacts and encouragement they need to join the extremist movement. Yet what is true of the Middle East and North Africa appears to be true of North America, judging by the cases Sageman cites and the additional cases discussed above. “The internet plays a minor radicalization role…. Conversations, sermons, print and radio communication, family and social networks present foreign fighters with local justification for joining the jihad.” This finding accords with research that finds internet use tends to “activate the active”; that is, to promote engagement and activity among those already inclined that way and to focus attention on the local community.

One must conclude, therefore, both that Sageman offers no evidence to support his claim that the internet is transforming how terrorists interact and that there is little evidence elsewhere to support this claim. Perhaps over time the evidence will emerge. In the meantime, we are stuck with the difficult task of focusing “on the social and religious networks” from which extremists emerge if we want “to interrupt or fragment face-to-face recruitment.”

————————

Is the internet really creating terrorists?  In the beginning did God really create the heaven and the earth?

Tucker’s contrarian article reminds us — it reminds me — it is important to know what we believe.  It is equally important to examine why we believe it.

February 23, 2010

How to create a resilient infrastructure in 20 years for 1 trillion dollars, create millions of jobs, transition to green transportation, and do all of this at no cost to government.

Filed under: Budgets and Spending,Infrastructure Protection,Technology for HLS — by Christopher Bellavita on February 23, 2010

The title of this post is a bit big.  But nowhere near as huge as the idea behind it.

The basic concept is to build new underground electric power transmission lines, natural gas pipelines, and telecommunication, cable TV, and Internet communication lines on rights-of-way already established by America’s 40,000 mile Interstate Highway System. The Interstate Highway System reaches nearly every part of the nation, and states own the rights-of-way along these roads. It makes sense to leverage this asset.

The idea — called the National System of Resilient Infrastructure (or NSRI) — was developed by Ted G. Lewis, at the Naval Postgraduate School.  Here are the details of this $1,000,000,000,000 idea:

—————————————-

Proposed

Electric power, energy for transportation, and telecommunications capacity are three major economic drivers for the future economy of the USA. But these sectors are in trouble for a variety of reasons, including NIMBY (not in my back yard), lack of investment, and lack of vision.

To overcome these barriers, stimulate the economy, and develop a resilient infrastructure for the 21st century, the author proposes a “moon shot” scale effort to build a national system of resilient natural gas, electricity, and telecommunications infrastructure along the 40,000 miles of Interstate Highway.

This 20-year, $1 trillion project would be implemented by a public-private partnership structured much like a GSE (government-sponsored enterprise) and funded mainly by the private sector. Besides creating millions of jobs and enhancing our ability to transition to clean cars, trucks, and buses, the national system would be immediately self-sustaining through usage fees, and therefore profitable. It would cost the government nothing and would have an immense impact on the economy.

Infrastructure Equals Prosperity

The Dwight D. Eisenhower National System of Interstate and Defense Highways, commonly called the Interstate Highway System (or simply the Interstate) is the largest highway system and largest public works project in the world. More importantly, it propelled the United States into a new era of prosperity. Today, virtually all goods and services are distributed via the Interstate, which is still expanding.

In the 1990s the 25-year-old Internet was commercialized, stimulating economic growth so much that it produced a bubble in 2000. Yet the federal government’s $200 million investment has already returned 100-fold in less than 20 years. The future of the global economy increasingly depends on the Internet.

It is clear that relatively modest investments in infrastructure reap exponentially large returns through economic growth, job creation, and innovation. Since ancient Rome, no nation on earth has achieved or maintained greatness, security, and prosperity without plentiful energy, robust communications, and transportation capacity.

The economy of the 21st century will run on electrical power and Internet packets. Without these, the USA will slip into fourth or fifth place among nations.

The Challenge

The United States today faces an “infrastructure challenge” and an equally big opportunity. The challenge is to rejuvenate our failing basic infrastructures: water, power, telecommunications, and energy.

Progress in green energy generation is stalled because of inadequate transmission capacity. Telecommunications capacity must be greatly increased to accommodate global 3D virtual reality, multi-party conferencing, and high-performance research and development in medical, environmental, and technical industries. Think of the possibilities of telemedicine piped directly into your home, or corporate meetings conducted with 100,000 participants from around the globe.

Advances in material science, bioengineering, medicine, green energy, revolutionary telecommunications, and green transportation will present great opportunity over the next 20 years to those nations prepared to capitalize on them.

These are the economic drivers of the future, but they require advanced infrastructure.

We know how to turn sunlight into electrons, but lack the distribution channel to transport electrons produced in New Mexico to markets in New York. We know how to telecommute via our computers, but lack the bandwidth for two-way, 3D telecommunication between grandmother and granddaughter across the continent. We know how to automate transportation systems to reduce auto accidents and congestion, but our highways are “dumb.”  In the next 20 years, cars will run on electricity and natural gas, but we lack the infrastructure to refuel them while achieving energy independence.

Venture capital is pent up, waiting for government to stimulate a “green economy,” but we do not currently have the market distribution infrastructure to make it possible.

We need a National System of Resilient Infrastructure (NSRI) to take advantage of opportunities that will create jobs and keep America economically strong.

The Solution

The National System of Resilient Infrastructure (NSRI) plan is designed to address two roadblocks standing in the way of the next stage of economic growth: NIMBY and the enormous cost of rebuilding the nation’s power and telecommunications infrastructure for the 21st century.

NIMBY (Not-In-My-Backyard) sentiment is currently blocking many projects because people do not want power lines in their backyards. In addition, infrastructure is enormously expensive and unattractive as an investment because it does not give companies a competitive advantage. For example, the current $1 trillion electrical power grid is fragile due to a lack of transmission capacity. It is also based on 1940s technology. But who can afford to invest $1 trillion to rebuild it?

NSRI proposes to avoid NIMBY by placing critical infrastructure underground: electric power transmission lines, natural gas pipelines, and telecommunication/CATV/Internet communication lines would be buried along rights-of-way already established by the Interstate Highway System. States already own these rights-of-way, and the Interstate reaches nearly every part of the nation, so it makes sense to leverage this asset even further.

Energy, power, and communications infrastructure also requires storage nodes (for surge resilience), “service stations” (for distribution), and several network operation centers. The NSRI will be resilient because of its storage, its security, and its distributed architecture [decentralized assets].

Robust and redundant, able to transmit commodities such as Internet packets, electrons from solar farms, natural gas for future cars, trucks, and buses, and bountiful electrical power for future cyber businesses, the NSRI will be a quantum step forward for the nation and the economy.

NSRI is America’s 21st century “moon shot.”

How to Pay for It

The NSRI network would be constructed much like the Interstate Highway network, over a 20- to 30-year period at an estimated cost of $50 billion per year.

The author estimates it would cost $25 million/mile to build the necessary tunnels, pipes, wires, etc. The Interstate is 40,000 miles long, hence a total estimated cost of $1 trillion over 20 years.
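The arithmetic behind those figures is easy to verify; here is a minimal sketch in Python, using only the numbers quoted above:

```python
# NSRI cost estimate, using the author's figures.
cost_per_mile = 25_000_000   # $25 million per mile (author's estimate)
interstate_miles = 40_000    # approximate length of the Interstate system
build_years = 20             # proposed construction period

total_cost = cost_per_mile * interstate_miles   # $1.0 trillion
annual_cost = total_cost / build_years          # $50 billion per year

print(f"Total cost:  ${total_cost / 1e12:.1f} trillion")
print(f"Annual cost: ${annual_cost / 1e9:.0f} billion per year")
```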

This $50 billion annual cost may seem high, but it represents just 3.6% of the combined annual revenues of the natural gas, electrical power, telecommunications, gasoline, and broadcast industries (see Table I).

[Table I: Infrastructure sector revenues]

The Interstate Highway System is “pay-as-you-go,” with 90% of the funding coming from the Federal government and the remaining 10% from the States. The initial estimate of the system’s total cost, made in 1958, was $37.6 billion; by 1991 that estimate had grown to $128 billion. But these billions contributed nothing to the national debt, because they were paid for by dedicated Federal and State taxes on gasoline. Title II of the Highway Revenue Act of 1956 created the Highway Trust Fund to collect and dispense funding for the Interstate System.

Similarly, the NSRI would be financed through a Trust Fund established by Congress to create and operate NSRI. The NSRI financing plan needs to be worked out in detail, but two attractive options are: Option I: GSE (Government Sponsored Enterprise), and Option II: excise taxation, similar to the model used by the Highway Revenue Act of 1956.

Ultimately, the NSRI must be self-sustaining, through revenues generated by its use. A toll fee would be charged for use of the pipelines, communication lines, storage facilities, and service stations. These fees can be based on current regulated fees charged by telephone, utility, and pipeline companies – a familiar fee structure for these industries.

Option I: GSE: Ginnie Mae, Sallie Mae, Fannie Mae, and Freddie Mac are GSEs, i.e., government-backed enterprises listed on stock exchanges and therefore investor-supported. The idea here is to raise the major portion of funding from investment banks, retirement funds, and personal investors through an IPO [initial public offering]. Like a GSE, the NSRI Trust Fund would be backed by the Federal government and would at some point reach a self-sustaining level through usage fees. This model, however, would probably require temporary taxation to raise the full $50 billion needed to initiate NSRI.

Option II: Excise Taxes: The Interstate Highway System was funded by per-gallon excise taxes on gasoline (part Federal and part State). Consider this: a 3.6% excise tax on the revenues shown in Table I would raise $50 billion per year. Alternatively, an additional $0.40/gallon excise tax on gasoline would raise $56 billion per year. Either tax could be rolled back as construction expenses are replaced by usage fees.
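As a rough cross-check of the two revenue streams, another short sketch. Note that the combined-revenue and gasoline-consumption figures below are back-derived from the author’s own numbers rather than taken from independent data, so they show internal consistency only:

```python
target = 50e9  # $50 billion needed per year

# Excise tax on sector revenues: a 3.6% rate raising $50B/yr implies
# combined sector revenues of roughly $1.4 trillion per year.
implied_revenues = target / 0.036
print(f"Implied Table I revenues: ${implied_revenues / 1e12:.2f} trillion/yr")

# Gasoline tax: $0.40/gallon raising $56B/yr implies roughly
# 140 billion gallons of gasoline consumed per year.
implied_gallons = 56e9 / 0.40
print(f"Implied gasoline consumption: {implied_gallons / 1e9:.0f} billion gallons/yr")
```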

Both options cost the Federal Government nothing, and both follow the Interstate Highway model, whereby the States own the infrastructure. Unlike the Interstate Highway model, however, the NSRI can easily achieve self-sustainability through an industry-accepted fee structure.

—————————————-

Dr. Lewis can be reached at tlewis[at]nps.edu

February 4, 2010

“Check Six” — The Ethics of Anthrax Knowledge

Filed under: General Homeland Security,Technology for HLS,Terrorist Threats & Attacks — by Christopher Bellavita on February 4, 2010

Ethics question: Let’s say you are a military officer whose Oath of Office requires you to disobey an order that violates the Constitution of the United States.

You believe – for reasons that appear almost Talmudic to someone unversed in the details – that the order to be inoculated with an “unapproved anthrax vaccine” is illegal.

You go through the authorized procedures to make your case, including going to federal court.  You lose your case, but not — you believe — on the merits.

Almost one thousand service men and women refuse to receive the inoculation and are punished. You think the punishment was wrong because the order they are accused of disobeying was illegal.

You go on with your life and career. But you learn that many more doses of the unapproved vaccine are now included in the Strategic National Stockpile. What do you do? Do you let it go and just keep your fingers crossed, hoping the vaccine is never needed? Do you tread further down the quixotic path to battle even more windmills? Do you stop trying to have the records expunged of those service members you believe were illegally punished?

A few days ago, Edward Jay Epstein  published an article in the Wall Street Journal claiming the Anthrax Attack of 2001 is still an open case.  Reading the article reminded me of the officer’s dilemma.  And a confusing story — at least to me — gets even more confusing.

In a May 2009 article the officer wrote about the vaccine, he noted that the

Justice Department alleged the anthrax vaccine program’s “failing” status served as the stated motive in the 2001 anthrax letter attacks. By sending anthrax through the U.S. mail system, the perpetrator was attempting to create a situation where the government might recognize a renewed need for the vaccine.

What does all this mean?  As best as I can summarize:

  • The particular anthrax vaccine was never proven effective.
  • Allegedly, one of the U.S. scientists involved with developing the vaccine sent anthrax through the mails to create demand for more research.
  • It was that vaccine that service members were required to take.
  • It is that vaccine that has been included in the Strategic National Stockpile, to be used on civilians if there is a need.

What is the ethically correct action for the officer who believes this to be the case?

——————————————————————

The guest author for today’s post is Lieutenant Colonel Tom Rempfer.  LtCol Rempfer is a distinguished graduate of the U.S. Air Force Academy, and an Air Force Command pilot, experienced in F-16s, F-117s, A-10s, and MQ-1s. His prior service included membership on the U.S. Air Force Cyberspace Task Force, as well as flight safety and operational risk management duties. He recently graduated with a homeland security master’s degree from the Naval Postgraduate School.  His thesis — available at this link — is titled: ANTHRAX VACCINE AS A COMPONENT OF THE STRATEGIC NATIONAL STOCKPILE:  A DILEMMA FOR HOMELAND SECURITY.

If you are one of the people interested in the 2001 anthrax attacks, or anything related to it, LtCol Rempfer’s thesis is worth a read.

——————————————————————
“Check six.”  Fighter pilots use this term to warn fellow aviators to look to their six o’clock position and avoid impending threats.  This is the objective of Lieutenant Colonel Tom Rempfer’s Naval Postgraduate School Center for Homeland Defense and Security thesis.

The thesis details the history of the anthrax vaccine incorporated into the Strategic National Stockpile after the anthrax letter attacks of 2001. Prior to those attacks, the vaccine suffered from unapproved manufacturing changes, GAO-documented potency increases, controversies over Gulf War Illness, quality-control problems, threatened FDA notices of license revocation, Department of Defense (DoD) plans to replace the vaccine, and a recommendation by the George W. Bush administration in August 2001 to minimize its use.

According to the FBI, the anthrax letter attacks of fall 2001, carried out by a US Army scientist, successfully rekindled demand and overcame the vaccine’s “failing” status. Since those attacks, DoD leaders have leveraged the crimes to revive the anthrax vaccine program, and the Department of Homeland Security (DHS) endorsed over $1 billion in vaccine purchases for the Strategic National Stockpile, all while federal courts declared the program illegal due to the vaccine’s lack of proper licensing.

The author’s involvement with the DoD anthrax vaccine controversy spanned the ten years preceding his master’s degree. He challenged the program in accordance with his Oath of Office, which demanded that military orders adhere to the law.

LtCol Rempfer testified to Congress, and his legal efforts to seek accountability continue with a recently filed Writ of Certiorari to the Supreme Court and an ongoing case with the Board for Correction of Military Records. Well-documented allies in his pursuit of justice include Connecticut officials such as Attorney General Richard Blumenthal, Senator Christopher Dodd, and former Representative Christopher Shays, as well as a tenacious veterans’ advocate, Mr. H. Ross Perot.

Serving as a backdrop are the author’s ultimate goals: expunging the records of the almost one thousand service members wrongfully punished for refusing to comply with the illegal mandate, and securing proper care for those harmed by the vaccine.

The thesis describes unhealthy precedents whereby illegal policy, resuscitated through bioterrorism, could lead to dramatic expenditures and expanded use of the vaccine in the civilian population. The author recommends that the government re-survey the use of the vaccine for the American people and suggests the new administration “check six” by ensuring that Homeland Security Presidential Directive reviews reassess vaccine stockpiling in light of the proven efficacy of antibiotics, as recommended by the Centers for Disease Control (CDC).

The thesis encourages a Presidential Study and Policy Directive process to review systemic problems associated with the anthrax vaccine through a historical lens. In doing so, DHS Secretary Napolitano would protect the Obama Administration from being duped into adopting a historically plagued, unnecessary, and wasteful policy.

——————————————————————

Update: I received the following additional information from LtCol Rempfer a few hours after the original post:

One clarification: the current “lost” case is under appeal, so it’s not over ’till it’s over.

Plus, there’s another case at the corrections board which is an effective win. The court agreed with my position and ordered the DoD to address records corrections, but the DoD has simply done nothing to correct the wrongs. That case is now headed back to court too, to compel the DoD to do the right thing.

These two latest cases were preceded by a complete win in a separate case.  The government was found to have violated the law because the vaccine was never licensed by the FDA until 2005, six years after they ordered service members to take the vaccine and punished those who refused.

December 16, 2009

Integrity, Validity or Security: Pick Any Two

Someone once said of the choice among quality, price, and timely delivery, “Pick any two.”  In recent years, Americans have operated under the illusion that such tradeoffs do not apply to us, at least with respect to information.  The pace of technological progress has fueled this illusion.

As individuals’ access to information has improved through the seemingly relentless convergence of information technologies, people have actually started wondering when, not if, a singularity will emerge.   Until this happens, we have to cope with the tradeoffs and their effects on democracy and trust.

As this blog’s other distinguished contributors and discussants have demonstrated on many occasions, homeland security professionals wrestle continuously with information management and technology policy issues that call upon us to balance information integrity, validity, and security. These values inevitably express themselves as tensions, and tradeoffs follow as we seek to meet the expectations of politicians and citizens’ insatiable “need to know.”

In addition to the need to know, we must now confront the ability to know.  Information and knowledge are not the same thing. Turning information into knowledge is a complex, time-consuming, and often costly process.  People in general have a poor capacity for interpreting large amounts of complex information and thus acquiring appreciable knowledge of risks, especially those far removed from their everyday experience.

This became abundantly clear to me recently, as the community where I work responded to a positive test for E. coli contamination in our drinking water supply. Initial tests, like the one conducted here the day before Thanksgiving, had produced positive results on more than a dozen prior occasions without being confirmed by subsequent testing. This time was different, though.

By the time the positive results were confirmed and the potential extent of contamination became clear, officials had to work out who needed to know what, and then worry about the best way to communicate the information without provoking undue fear. After all, they reckoned, the boil-water notice issued in compliance with federal drinking-water regulations was not itself a risk-free proposition: in other communities, more people have suffered burns preparing water for consumption than have suffered illness from the contamination itself.

As word of the required actions and the city’s response was released to the news media and the public, feedback came in hot and fast. Why had this notice not been issued sooner? Why had officials relied so heavily on traditional media to get the word out? Why had city officials not contacted water customers directly?

Those in the community asking these questions assumed they were the first to do so. Moreover, they assumed that the answers were influenced primarily by money, technology, and administrative inertia, if not apathy or incompetence. While cost, technical capability, and bureaucratic issues all play a role in delaying or preventing action, they are not the primary cause of officials’ concerns. Those responsible for deciding when and how to act, including when and how to notify the public, tend to be consumed with concern for getting it right. Herein lies the problem: a “right” response is in the eye of the beholder, and the public has taken a particularly jaundiced view of official actions to manage risks, especially those that involve an intersection between complex technologies and human health.

As I was digesting the very real implications of the dilemma occurring in my own community, I became aware of a report released at the beginning of October by the Knight Commission on the Information Needs of Communities in a Democracy. The report, prepared by a commission of policy and technology experts co-chaired by former United States Solicitor General Theodore Olson and Google vice president Marissa Mayer, was presented upon its release to federal Chief Technology Officer Aneesh Chopra and Federal Communications Commission Chairman Julius Genachowski.

In short, the report warns of a growing information divide that threatens to undermine the foundations of American democracy. Addressing the divide, the report argues, will require coordinated effort on many fronts, and cannot be accomplished by either the government or the market acting alone.

Although improved access to technology, expanded transparency of government information, and increased commitment to engagement are all required, so too is increased literacy and numeracy – the capacity of people to appreciate information and turn it into useful knowledge.

So far, efforts to produce engagement through technology innovation, even in some of the most creative, educated, and engaged communities, have produced spotty results. Open-data and application-development contests intended to engage private-sector partners in leveraging insights from public data have produced applications that do little to advance the public good. In many cases, these applications simply make it easier for well-equipped citizens with smartphones to tell government officials they are doing a poor job responding to citizen concerns, while increasing the volume of complaints officials must handle before they can get on with remedying the underlying causes of what might otherwise be legitimate problems.

In other cases, applications that improve the efficiency of individual competition for consumption of public goods like parking spaces pass for innovation.  In still others, externalities clearly outweigh efficiencies by making undigested or unconfirmed information available in forms that further erode confidence in government.

In the early days of the republic, a learned man or woman of modest means could acquire a decent command of all available knowledge by applying him or herself with rigor and discipline.  Indeed, the signers of our own Declaration of Independence distinguished themselves as knowledgeable in a diverse array of subjects ranging from philosophy to law to agriculture to military strategy to engineering to commerce to religion.

Today, not one of us has any hope of achieving comparable mastery of extant knowledge.  The volume of information already in existence and the pace of new discoveries have simply become too vast, too specialized, too detailed, and too isolated from everyday experience for anyone to master regardless of mettle or means.  This does not seem to have lowered public expectations though.

In a world where people share information in real time with one another over distances of thousands of miles and have instant access to hundreds of television channels, dozens of radio stations, and zettabytes (one zettabyte equals one billion terabytes) of data, how do we overcome the illusion that information access equals knowledge? With all of this information floating around us all the time, how do we decide what to tell people, when to tell them, and what method to use?

In the online discussion that emerged following the recent water contamination scare here, one participant noted, “People do not trust institutions, they trust people.” For him, at least, it was important not so much that someone had the answers to his questions as that someone took responsibility for responding to his concerns. In the absence of an official somebody, it seems anybody will do. He, and many others, argued that the absence of official pronouncements only encouraged others to fill the void.

Not long ago, we relied upon media to do this for us.  That has changed, and media no longer have the capacity they once did to hold government accountable or to lower public expectations.  To the extent that media play an influential role in public debates these days, they are more likely to reinforce our biases than clarify positions or encourage dialogue.

It remains unclear whether social media or other technologies will bridge the gap between knowledge haves and have-nots. If time is running out on our information illusions, and if this growing divide threatens our nation’s capacity to maintain trust in government and its democratic legitimacy, what will we make of the choice among integrity, validity, and security in the future, and how will cost, quality, and timeliness influence our decisions?

November 19, 2009

Web 2.0 Technologies and Tools: A Very Brief Guide For Decision Makers

Filed under: General Homeland Security,Technology for HLS — by Christopher Bellavita on November 19, 2009

Today’s guest blogger is Glen Woodbury. The issue — pictured in the spectrum below —  is how homeland security decision makers can think about their options to handle web 2.0 (or 3.0, 4.0, etc.) technologies and tools.

(The material in this post was developed out of discussion at the OGMA Workshop held at the NPS Center for Homeland Defense and Security, 30 June – 1 July 2009, in Monterey, CA.)

[Spectrum graphic: Suppress – Defer – Adapt (Reactive) – Adopt (Proactive) – Influence – Design]

Suppress – An organization issues policies or directives that forbid the use of a particular technology. For example, an agency prohibits its employees from accessing Facebook. Or an intelligence fusion center forbids its analysts from accessing social networks due to civil liberties and privacy concerns.

Defer – (ignore, abstain, dismiss) An organization decides not to use or engage with technologies or tools even though their use is evident in its operating environment. For example, a public safety agency decides not to monitor or use Twitter, Facebook, blogs, or other information sources even though it knows these forums are providing information to the public it serves. Or the agency determines that engagement with a particular social networking information source would strip resources away from other requirements.

Adapt (Reactive) – An organization observes the use of technologies and tools in its environment and adjusts its policies and procedures in order to participate in the same technological environment. For example, a fire agency discovers that the public is relying or acting on information from Twitter sources, so it decides to enter Twitter forums and generate its own content.

Adopt (Proactive) – An organization decides, in advance of an event, to use technologies and tools that already exist and are being utilized in the public domain. For example, a police department decides, and plans for, the use of Facebook to provide information to the general public during a planned mass gathering event.

Influence – An organization deliberately influences how a particular technology or tool is being used, maintained or operated. For example, a public health agency asks a technology provider to delay scheduled maintenance on its system so that important information can be delivered to the public at a certain time. Or the same agency asks the technology company to change a characteristic of its technology to better serve the requirements of the agency.

Design – An organization determines requirements that might be served by new technologies and tools and seeks a design and production of a system to serve those needs. For example, an emergency management agency desires a new way to hold collaborative planning discussions in a virtual environment and engages with a technology provider to build the product.
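For readers who think in code, the ordering of the spectrum can be made explicit with a simple ordered type. This is a hypothetical sketch of the taxonomy above, not part of the workshop material; the class and value names are inventions for illustration:

```python
from enum import IntEnum

class EngagementPosture(IntEnum):
    """The Web 2.0 engagement spectrum, from least to most engaged."""
    SUPPRESS = 1   # forbid use of the technology
    DEFER = 2      # knowingly abstain despite its presence
    ADAPT = 3      # reactively adjust policy to participate
    ADOPT = 4      # proactively plan to use existing tools
    INFLUENCE = 5  # shape how a provider operates the tool
    DESIGN = 6     # specify requirements and commission new tools

# Because the postures are ordered, simple comparisons express policy
# questions, e.g., "is the agency at least participating in the medium?"
posture = EngagementPosture.ADAPT
print(posture >= EngagementPosture.ADAPT)  # True: the agency generates content
```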
