Monday, June 28, 2010

A substitute for "causes" in accident investigations

A source of continuing confusion, debate and complaints among investigators is the determination of "cause" in its various forms as a central investigation output. A recent presentation to the International Society of Air Safety Investigators (ISASI Forum, April-June 2010, p. 5) raises the question "The Accident Cause Statement-Is It Beyond Its Time?" It is the latest in a long series of papers, mine among them, that challenge cause statements in investigation reports.

One shortcoming of previous challenges has been the lack of an alternative to replace cause statements. If not cause, then what? In a 2007 paper presented to the European Safety and Reliability Data Association, I offered the first glimmer of a substitute, commenting as an aside that an example of an alternative lessons-learning system design made cause determination moot. It wasn't until later work that I recognized the full significance of that aside.

That recognition came to fruition as I was preparing a presentation for the 16th HPRCT conference in Baltimore this month, dealing with the shortest data pathway from the generation of source data during an incident to demonstrated performance improvement in an activity.

In that presentation, the first step I described was how to transform source data generated during any experience into improved performance, using a new standardized structure for the building blocks that document the data. The second was an analysis structure for organizing, coupling and analyzing those building blocks to show dynamic interactions during the experience. The third was a structure for reporting lessons to be learned from those analyses as input-output behavior sets that could be overlaid onto existing operational behavior patterns. The presentation also covered the benefits that could be realized, and a plan for incrementally implementing the changes into existing operational improvement processes.
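To make the building-block idea concrete, here is a minimal sketch in Python of what such a standardized documentation structure might look like. The field names and example values are my own illustration, not the structure specified in the paper:

```python
from dataclasses import dataclass, field

@dataclass
class BuildingBlock:
    """One documented observation from an experience.

    Field names are illustrative assumptions; the paper defines
    its own standardized structure.
    """
    actor: str                # person, object, or energy that acted
    action: str               # what the actor did
    begin: str = ""           # when the action started, if known
    end: str = ""             # when the action ended, if known
    source: str = ""          # where the data came from (witness, recorder, ...)
    descriptors: dict = field(default_factory=dict)  # any qualifying detail

# Two hypothetical blocks from an incident, each tied to its source
# data so it can be verified independently.
b1 = BuildingBlock(actor="pump P-101", action="lost suction",
                   source="operator log")
b2 = BuildingBlock(actor="operator", action="opened bypass valve",
                   source="interview")
```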

The key change is a shift from an accident causation model to an input-output process model as the basis for investigating accident/incident phenomena. The process that produced the outcomes precipitating the investigation can be described verifiably and explained in terms of behavioral inputs and outputs among people, objects and energies. Those interactions are represented as behavior sets, as explained in the paper and presentation. Anyone insisting on selecting causes can still do so, applying whatever criteria they desire to the interacting behaviors described.
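Continuing the sketch above, a behavior set can be represented as a coupling of two behaviors, one supplying the input the other needed in order to occur. Again, the names and example are illustrative assumptions, not the paper's notation:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Behavior:
    actor: str   # person, object, or energy
    action: str  # what the actor did

@dataclass(frozen=True)
class BehaviorSet:
    """An input-output coupling: one behavior provided the input
    another behavior needed in order to occur."""
    input_behavior: Behavior
    output_behavior: Behavior

# Hypothetical example: a pump losing suction provided the input
# for the operator's response.
bs = BehaviorSet(
    input_behavior=Behavior("pump P-101", "lost suction"),
    output_behavior=Behavior("operator", "opened bypass valve"),
)
```

Representing the couplings this way keeps each interaction verifiable against its source data, rather than folding it into a judgment about cause.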

The paper and presentation are posted in the lessons learned research section of the www.iprr.org web site, with a list of the noteworthy ideas introduced, abstracts and documents. They can also be downloaded directly from the web site home page.

Saturday, February 20, 2010

Investigation process research quality

I was hesitant about posting this message, but having witnessed the problem on other occasions, decided it is important enough to do so.


My interest in accident investigation process research dates back to the early 1970s, when anomalies, differences, disputes and contradictions among investigators and investigations, observed during my participation in NTSB investigations in each mode, led me to try to harmonize the processes. That prompted me to document them, question and understand why each existed, and learn what might be done to overcome the differences. Since then I have seen and studied a growing number of research reports comparing various investigation methodologies, processes and practices, and have written extensively about the topic. What finally motivated this post was publication of a recent report comparing analysis results from an established investigation process with those from a systemic process under development, and the claimed benefits of the developing systemic process for accident investigation and analysis. I was knowledgeable about both processes. My review of the report raised concerns about the research bias it evidenced, and about the effects that widely read but biased comparative research results can have on future investigations and analyses.


I attribute the problem to bias rather than poor scholarship. Evidence of the authors' bias was readily discernible in their limited search for, and selection of, data sources about one of the two processes; the difference in the effort invested in applying the two processes to a selected incident; differences in the nature and scope of the samples used to illustrate the differences each process produced; the serious misrepresentation of one process's capabilities because some of its rules were disregarded during its application; unbalanced critical comments about the two processes; and the authors' close personal involvement with the process they concluded was better.


Because of the exhibited bias favoring one of the processes, and the resulting under-analysis and misrepresentation of the "also-ran" process, false impressions about its nature, application and results are reasonably predictable among uncritical or less knowledgeable readers. That can be expected to unfairly diminish the also-ran's perceived value and potential use in the intellectual, social and economic marketplace. And that, in turn, could discourage use of a process that, fairly compared, might prove superior to the favored one, to the detriment of safety.


Maximizing objectivity in comparative accident investigation and analysis research is a continuing challenge that, judging by this report and earlier documents, merits more attention than it has received. My criticisms above offer a starter list of criteria for improvement. Other criteria for detecting or avoiding bias would be welcome.

Wednesday, January 6, 2010

Safety Metrics

Recent developments promising to make possible the objective assessment of accident prevention performance have stimulated my adrenaline. The development emerged from dialogues with my co-author while we were preparing a paper and presentation for an ISASI Seminar in September 2009. The concept is based on the use of "behavior sets" generated during accidents, incidents or other experiences with undesired outcomes, and on the use of those behavior sets in future investigations, ongoing activities and new systems analyses.

No two accidents are identical. However, some interactions during an accident may be similar or identical to those experienced in previous accidents. The similarities can be found by analyzing the interactions as behavior sets, or interactions among the people, objects and energies involved. In future investigations, the presence of a previously experienced (and reported) behavior set indicates the accident is a "retrocursor" accident, one with precedents that should not have been repeated, i.e., that should have been prevented. Such a finding offers a retrospective assessment of the lessons learned practices designed to prevent accident recurrence: retrospective prevention performance measurement.
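A minimal sketch of that retrocursor check, assuming each behavior set has been reduced to a hashable pair of (actor, action) tuples. The matching here is exact, whereas real use would need criteria for judging similarity; all names and data are hypothetical:

```python
# Behavior sets from a new investigation, as ((actor, action), (actor, action))
new_incident_sets = {
    (("pump P-101", "lost suction"), ("operator", "opened bypass valve")),
    (("operator", "opened bypass valve"), ("tank T-3", "overflowed")),
}

# Library of behavior sets reported from previous accidents
reported_library = {
    (("pump P-101", "lost suction"), ("operator", "opened bypass valve")),
}

# Any overlap marks the new accident as a retrocursor: a repetition
# of previously reported, and therefore preventable, interactions.
repeated = new_incident_sets & reported_library
if repeated:
    print(f"Retrocursor accident: {len(repeated)} previously reported "
          f"behavior set(s) repeated")
```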

Behavior sets also offer opportunities for examining ongoing operations to find the same or similar patterns of behavioral interactions and, if they are found, to change those behaviors and reduce the risk of future accidents. That is prospective accident prevention action, measured by the discovery and modification of risk-raising behavior patterns. Behavior sets also supply unambiguous operations audit check-off items, and guidance for change analysts and for procedure developers and reviewers.
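The same matching idea supports the prospective use: scan an inventory of behavior sets observed in current operations against the reported library. Another hedged sketch, with hypothetical data:

```python
def audit_operations(observed_sets, reported_library):
    """Return risk-raising patterns: behavior sets observed in
    ongoing operations that match sets reported from past accidents."""
    return observed_sets & reported_library

# Hypothetical inventory gathered during an operations audit
observed_sets = {
    (("operator", "opened bypass valve"), ("tank T-3", "overflowed")),
    (("operator", "logged reading"), ("supervisor", "reviewed log")),
}
reported_library = {
    (("operator", "opened bypass valve"), ("tank T-3", "overflowed")),
}

for bs in audit_operations(observed_sets, reported_library):
    print("Risk-raising pattern to change:", bs)
```

The same check could be run against planned or potential interactions during design reviews, the third opportunity described next.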

A third prevention opportunity is to review planned operations during design stages to determine whether they might contain planned or potential interactions identified in previous accidents. The presence or absence of such interactions in future operations then offers a measure of the effectiveness of the predictive analyses in preventing future accidents. I see the availability of behavior sets having a significant impact on future risk assessments, because they offer an unambiguous, multi-dimensional definition of risk raisers at the lowest level of abstraction, as contrasted with the uni-dimensional nature of a factor, error, failure, hazard or cause of any kind.

Combined with the input-output data structure suggested in a 2007 paper, this approach seems to me well worth pursuing.

The ideas were first proposed in our ISASI paper and presentation, which can be found at http://www.iprr.org/research/llprojcontents.html, items 9-11.