For weeks now, the Los Angeles Fire Department has been under intense scrutiny for errors in its reporting of response time data. Previously reported figures suggested the department was doing pretty well meeting its response-time targets despite budget cutbacks that affected service levels. However, it later came to light that the methodology behind the response-time data presented to elected officials was flawed: model predictions were presented as if they were actual response data, and there were errors in the way response times were measured and performance against targets was reported.
When it became clear that the fire department’s responses had been affected by changes in staffing and service levels, the media and elected officials began asking some difficult questions. Unfortunately, most of these questions were precisely the wrong ones.
It’s one thing to ask whether the department is meeting response time targets. It’s another thing entirely to ask whether these targets are meaningful indicators of service performance. Errors in reporting could affect the answer in either case, but the effects would be very different.
In Los Angeles, it’s now clear that the fire department does not meet its stated targets. It should be equally clear that these targets are arbitrary and all but meaningless in the vast majority of cases. (In other words, the Los Angeles Fire Department remains a world-class outfit despite the cuts.) Unfortunately, the latter fact has dawned on very few people, despite abundant evidence that the unwelcome answer to the first question owes in large measure to the community’s growing dependence on fire department responses to many low-priority and even non-emergency events where time matters very little, if at all.
The expectation that fire departments are there to deal with anything unwelcome or untoward that people encounter when no one else is there to help them has not come about by accident. Firefighters love to be loved (and needed) and have been all too willing to answer these calls without regard to the costs. The controversy in Los Angeles suggests these costs are not just fiscal. The opportunity cost of attending so many low-priority and non-emergency calls is clear: The system cannot meet the performance expected when genuine emergencies arise.
For firefighters the answer is simple: expand capacity. For administrators, that’s simply not an option in these austere times. Sadly, elected officials too rarely take responsibility for the fact that you cannot please all of the people all of the time.
Nearly everybody these days accepts the adage that when it comes to performance, speed, quality and cost matter, but you can only have two of these. This is especially true when it comes to emergency services. The problem is that which two we are willing to settle for varies a lot depending upon the circumstances.
When it comes to situations that clearly are not time critical, time should not matter. But it does when you confuse it with an indicator of quality. And almost every fire department does just that, because they have no idea how to measure quality but they can measure time.
Fire departments are inherently inefficient operations. They operate on two basic premises: 1) no one should ever have to wait for a response and 2) every response should be treated as an emergency until proven otherwise. These two assumptions combine with pernicious effect when it comes to the way we handle 911 calls. And let’s be clear about this: 911 is no longer shorthand for “emergency.” These days, about 40 percent of all calls coming into public safety answering points are misdials, and many more involve queries that have nothing whatsoever to do with police, fire or emergency medical services.
Rather than take a few seconds to find out what’s really going on, most agencies insist that dispatching decisions get made within 60 seconds of call receipt regardless of circumstances. This was once a relatively simple affair because it relied on the intuition and judgment of experienced call-takers and dispatchers, who made the call based on simple heuristics. When they were equipped with little more than a telephone and a radio console, the required action took little time or effort. Not so today. These days we have two, three or even more layers of technology between call-takers and dispatching decisions. Even after a dispatch is initiated, signals alerting stations and conveying information about the call must pass through still more layers of technology before responders get the message.
These interventions have made it possible to track the most minute details about each and every incident. But they have not made the process of delivering emergency service faster or more efficient. In fact, it’s just the opposite. In many instances we have become unwitting slaves to the planned obsolescence of the technologies themselves and helpless victims of the technological hurdles involved in marrying up diverse platforms supplied by competing vendors and procured by different agencies.
When fire departments talk about standards of cover – the five-dollar phrase for these response targets – they rarely acknowledge the fatal flaws in the logic (or lack thereof) they apply to deciding what matters. These standards, often derived from flawed analogies to fire growth curves and the onset of brain death following cardiac arrest, were easy to meet using legacy technologies that were far simpler and more efficient. But now we must contend with the expanded and often unrealistic performance expectations arising from our inability or unwillingness to make the simplest distinctions about the services we provide.
Adopting arbitrary standards of cover, like 60-second call processing times and five-minute travel times, may allow us to direct the blame at others when we miss the mark overall, but it does almost nothing to solve the problem when performance falls short.
When time matters, it matters a lot and cost is not much of an issue. The good news is that getting these decisions right involves little more than giving people permission and encouragement to treat very different situations differently. The quality of the outcome always depends on how well the people perform, and when the way they use the technology becomes an impediment to what they are trying to accomplish we have the tail wagging the dog.
The single biggest factor affecting our success may well be our willingness to recognize that what people experiencing or witnessing an event do before we arrive matters much more than how long it takes us to get there. Sure, response time and the quality of care help, but not if people wait too long to seek help or take no action to mitigate the consequences before we get there.
The best case in point may very well be right in my back yard: King County and Seattle, Washington have managed to achieve a 50 percent survival rate from witnessed cardiac arrest involving shockable arrhythmias (ventricular fibrillation and ventricular tachycardia). Sure, we were among the first communities to establish a fire-based advanced life support paramedic program. Yes, we send first-response EMTs on fire-based units to every call, and often as many as 10 responders to confirmed cardiac arrest calls. But the factor that has probably made the most difference has been the frequency and quality of bystander CPR.
Other programs send paramedics on the first due fire engines whenever possible. We do not. Some use dispatchers to give CPR instructions over the phone. We do too. But we do something even more important: We get out in the community and teach everyone willing to give us a few minutes of their time how to save a life.
Don’t get me wrong. People here still worry about response times. But they have a lot less reason to do so because we have nothing to hide: We rely on the public as much as they rely on us, and we’re proud of it.