Tuesday, January 22, 2013

New perspective for accident investigations

This describes how I became convinced that accident investigations should adapt to end users' needs rather than forcing users to adapt to investigators' practices, as is now the case.

Since 1971, I have devoted considerable time and energy to inquiring into problems with accident investigation concepts, principles and practices, their sources and possible resolution, with modest successes that have been published. Since late 2006, I have been examining why accident investigations have had limited success, as demonstrated by continuing streams of accidents. That involved an examination of the lessons learned practices flowing from accident investigation work products. Impressed by Werner and Perry's findings about why investigation lessons learned data were not used much by end users, I - fortuitously - thought it might be worthwhile to look at the end users of investigation outputs, what they did with those outputs and how well the outputs satisfied end users' needs. That led to a series of papers, presentations and tutorials dealing with the relationship between investigation practices and their role in lessons learning systems, and discussing relevant problems, challenges and opportunities. As the thinking in support of the papers evolved, new insights kept popping up. About a year ago, I pulled all the papers on this topic together and posted them on my pro bono web site at http://www.iprr.org/research/llprojcontents.html, along with a list of the new insights associated with each work.

While recently viewing the list of insights in connection with another paper I am working on about lessons learning system design strategies, one of those "aha moments" occurred. It dawned on me that all the substantive changes to investigation processes that were emerging flowed from a significant shift in my view about whom the research should serve. I had been looking through the wrong end of the microscope: the end that focused on investigators' concepts, principles and practices, rather than on satisfying end users' needs. I realized my previous research effort was aimed at changes that would help investigators improve their investigations within the existing causation models, with their causal determination and recommendation framework. Adding users' needs to my inquiries gave me a new appreciation for their need for actionable behavioral information which they could introduce into their activities with minimal effort and few intervening steps, so I tried to push investigation changes in that direction. My findings led me to challenge the causal determination/safety recommendation paradigm underlying present investigation practices, and to question how well those practices supported - or rather didn't support - a timely, effective and efficient lessons learning system producing outputs that all potential end users could and would implement.

Without realizing it, my findings had shifted my initial research perspective from serving the investigation community to serving the end user community, with some startling consequences. The most significant is a clear need to change the current accident investigation causation and recommendation framework to one serving end users' needs with timely, accessible and readily assimilable behavioral information.

If improved lessons learning from accidents is to be achieved, I am convinced that end users wanting to reduce risks using investigation outputs have to stop compromising their needs by tolerating what investigators now provide them, and demand timely, actionable behavioral information from investigations.


LB 11/13/10




Monday, June 28, 2010

A substitute for "causes" in accident investigations

A source of continuing confusion, debate and complaints among investigators is the determination of "cause," in its various forms, as a central investigation output. A recent presentation to the International Society of Air Safety Investigators (ISASI Forum, April-June 2010, p. 5) raises the question "The Accident Cause Statement-Is It Beyond Its Time?" It is the latest in a long series of papers that challenge cause statements in investigation reports, mine among them.

One of the shortcomings of previous challenges has been the lack of an alternative to replace the cause statements. If not cause, then what? In a 2007 paper presented to the European Safety and Reliability Data Association, I offered the first glimmer of a substitute, commenting as an aside that an example of an alternative lessons learning system design made cause determination moot. It wasn't until later work that I recognized the full significance of that aside.

That recognition came to fruition as I was preparing a presentation for the 16th HPRCT conference in Baltimore this month, dealing with the shortest data pathway from the generation of source data during an incident to the demonstrated performance improvement in an activity.

In that presentation, the first step I described was how to transform source data generated during any experience into improved performance, by using a new standardized structure for building blocks documenting that data. The second was an analysis structure for organizing, coupling and analyzing those building blocks to show dynamic interactions during the experience. The third was a structure for reporting lessons to be learned from those analyses as input-output behavior sets that could be overlaid onto existing operational behavior patterns. The presentation also included the benefits that could be realized, and a plan for incrementally implementing the changes into existing operational improvement processes.

The key change is to shift from an accident causation model to an input-output process model as the basis for investigating the accident/incident phenomena. The process which produced the outcomes precipitating the investigation can be described verifiably and explained in terms of behavioral inputs and outputs among people, objects and energies. Those interactions are represented as behavior sets, as explained in the paper and presentation. Anyone insisting on selection of causes can do so according to whatever criteria they desire from the interacting behaviors described.
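To make the input-output idea more concrete, here is a minimal sketch, in Python, of how one interaction might be recorded as a behavior set. The field names and the valve example are my illustrative assumptions, not structures taken from the paper or presentation.

from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class Behavior:
    actor: str    # a person, object or energy doing something
    action: str   # what that actor did

@dataclass(frozen=True)
class BehaviorSet:
    inputs: Tuple[Behavior, ...]  # behaviors that had to occur first
    output: Behavior              # the behavior they enabled or provoked

# Hypothetical example: a worker opens a drain valve on a pressurized line,
# and the process fluid escapes.
open_valve = Behavior("worker", "opened drain valve on pressurized line")
release = Behavior("process fluid", "escaped under pressure")
behavior_set = BehaviorSet(inputs=(open_valve,), output=release)

Coupling successive behavior sets, with the output of one serving as an input to the next, yields a verifiable description of the process that produced the outcome, without requiring anyone to select a "cause" unless they choose to.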

The paper and presentation are posted in the lessons learned research section of the www.iprr.org web site, with a list of noteworthy ideas introduced, abstracts and documents. They also can be downloaded directly from the web site home page.

Saturday, February 20, 2010

Investigation process research quality

I was hesitant about posting this message, but having witnessed the problem on other occasions, decided it is important enough to do so.


My interest in accident investigation process research dates back to the early 1970s, when anomalies, differences, disputes and contradictions among investigators and investigations, observed during my participation in NTSB investigations in each mode, led to my efforts to see if I could harmonize the processes. That prompted me to document them, question and try to understand why each existed, and learn what might be done to overcome the differences. I have seen and studied an increasing number of research reports comparing various investigation methodologies, processes and practices, and written extensively about the topic. What finally motivated this post was publication of a recent report comparing analysis results from an established investigation process with those from a systemic process under development, and the claimed benefits of the developing systemic process for accident investigation and analysis. I was knowledgeable about both processes. My review of the research report raised concerns about the research bias it evidenced, and the effects widely read biased comparative research results can have on future investigations and analyses.


I attribute the problem to bias rather than poor scholarship. Evidence of the authors' bias was readily discernible in their limited search for, and selection of, data sources about one of the two processes; the difference in the effort invested in applying the two processes to a selected incident; differences in the nature and scope of the samples used to illustrate the differences each process produced; the serious misrepresentation of one process's capabilities because some of its rules were disregarded during its application; unbalanced critical comments about the two processes; and the authors' close personal involvement with the process they concluded was better.


Because of the exhibited bias favoring one of the processes, and the resultant under-analysis and misrepresentation of the "also-ran" process, false impressions about the nature, application and results of that process are reasonably predictable among uncritical or less knowledgeable readers. That can be expected to unfairly diminish the also-ran's perceived value and potential use in the intellectual, social and economic marketplace. And that, in turn, could discourage use of a process that, fairly compared, might be superior to the favored process, to the detriment of safety.


Maximizing objectivity in comparative accident investigation and analysis research is a continuing challenge that apparently merits more attention than it has gotten, based on this reported research and previous documents. My criticisms offer a starter list for improvement. Other criteria for detecting or avoiding bias would be welcomed.

Wednesday, January 6, 2010

Safety Metrics

A recent development promising to make possible the objective assessment of accident prevention performance has stimulated my adrenaline. It emerged from dialogues with my co-author while we were preparing a paper and presentation for the ISASI Seminar in September 2009. The concept is based on the use of "behavior sets" generated during accidents, incidents or other experiences with undesired outcomes, and the use of those behavior sets in future investigations, ongoing activities and new systems analyses.

No two accidents are identical. However, some interactions during an accident may be similar or identical to those experienced in previous accidents. The similarities can be found by analyzing the interactions as behavior sets - interactions among the people, objects and energies involved. In future investigations, the presence of a previously experienced (and reported) behavior set indicates the accident is a "retrocursor" accident, or one with precedents that should not have been repeated - i.e., one that should have been prevented. Such presence offers a retrospective assessment of the lessons learned practices designed to prevent accident recurrence, or retrospective prevention performance measurement.

Behavior sets offer opportunities for examining ongoing operations to find the same or similar patterns of behavioral interactions in those operations and, if found, to change those behaviors and reduce the risk of future accidents. That's prospective accident prevention action, measured by the discovery and modification of risk-raising behavior patterns. They also offer unambiguous operations audit check-off items, and guidance for change analysts and for procedures developers and reviewers.

A third prevention opportunity is to review planned operations during design stages to determine whether they might contain planned or potential interactions identified in previous accidents. Their presence or absence in future operations then offers a measure of the effectiveness of the predictive analyses in preventing future accidents. I see the availability of behavior sets having a significant impact on future risk assessments, because they offer an unambiguous, multi-dimensional definition of risk raisers at the lowest level of abstraction, as contrasted with the uni-dimensional nature of a factor, error, failure, hazard or cause of any kind.
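As a rough illustration of how behavior sets might support these retrospective and prospective measurements, the Python sketch below matches behavior sets observed in a new investigation or operations review against a library of previously reported sets; any match flags a repeated, previously reported interaction. The triple representation, function name and example data are my assumptions for illustration only.

from typing import Iterable, List, Tuple

# Represent each behavior set minimally as an (actor, action, result) triple;
# the real structures would carry far more context.
BehaviorSet = Tuple[str, str, str]

def find_repeated_sets(observed: Iterable[BehaviorSet],
                       library: Iterable[BehaviorSet]) -> List[BehaviorSet]:
    """Return observed behavior sets that repeat previously reported ones."""
    known = set(library)
    return [b for b in observed if b in known]

library = [("operator", "bypassed interlock", "machine started unexpectedly")]
observed = [("operator", "bypassed interlock", "machine started unexpectedly"),
            ("maintainer", "replaced worn seal", "leak stopped")]

repeats = find_repeated_sets(observed, library)
print(repeats)  # a non-empty list marks a repeated, previously reported interaction

In an investigation, a non-empty result is a retrospective measure of prevention performance; applied to descriptions of ongoing or planned operations, the same comparison becomes a prospective screen for risk-raising behavior patterns.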

When combined with the input-output data structure suggested in a 2007 paper, the approach seems to me worth pursuing.

The ideas were first proposed in our ISASI paper and presentation that can be found at
http://www.iprr.org/research/llprojcontents.html, items 9-11.

Tuesday, July 21, 2009

Lessons learning system

"Lessons learned" as a concept has been around a long long time, and has been examined often in the past. It is one of the underlying reasons for doing accident investigation and incident investigations, and other investigations of all kinds. Yet when it comes to using the lessons, the inquiry rates are very modest, and reasons for not using them are numerous.

To find what knowledge has been gained about lessons learned in accident investigations, and who may have studied the topic, a Google search seemed like a good starting point. A search for "lessons learned" produced around 20 million hits. Using "lessons learned" with accident or investigation produced 2,420,000 hits. That's a lot of lessons learned. "Lessons learned process" is another term used frequently in connection with these activities; an advanced Google search for "lessons learned process" produced around 300,000 hits in many diverse fields. Narrowing that search by looking for accident- or investigation-related lessons learned processes, an advanced search produced 2,810 hits. To analyze those processes using a system analysis approach, a search for accident or investigation "lessons learned system" produced a slightly more manageable 645 hits, but that still included many hits not related to accident investigations. To narrow the search further, "lessons learning system" and accident or incident were entered, resulting in 5 hits with Google and 5 with Yahoo (mostly my works).

Taking another tack, the 2,810- and 645-hit lists were scanned to find references to organizations that have a lessons learned process. Many do - some of which derive lessons from investigations. When the 645 hits were scanned, the tenor of the references listed was observed to be focused on the lessons, rather than on the full breadth and depth of the learning process, from the time the data from which lessons are developed are generated until changes based on the lessons have produced expected results. A few exceptions were noted: when major accident processes are examined thoroughly, as in the Challenger space shuttle accident investigation or the Buncefield tank farm explosions, calls for improvement in lessons learned processes sometimes occur.

Possibly more significantly, goals, criteria, metrics, output specifications, quality assurance and other properties of lessons were noteworthy by their ambiguity or absence, with most sources focusing on step-by-step actions to process the lessons that were input to the system.

How then can we reasonably expect lessons learning systems to be optimized, or even improved? This is what we are exploring. Some progress has been made and reported.


Contributions of criteria or suggestions for lessons learning system improvement - or critiques of some of the ideas - are invited.

Friday, January 23, 2009

Accident statistics: valid?

Statistical analyses of accidents have troubled me for many years. One of my main objections is that statistical correlation does not equate to "cause." Very recently an even more significant insight occurred to me during a discussion with a colleague, Ira Rimson, about the system descriptions safety analysts receive as inputs. We were discussing the description requirements that would help safety analysts understand the dynamics of the systems they were being asked to analyze, and how those dynamics should be described. Our experience suggested that the descriptions presently offered were fragmented elements at best. Extending that notion to the descriptions of accidents led to a new concern involving the idea of "sampling" accident dynamics during accident investigations. What should be sampled, and how should the samples be documented?

Digitization of music provides an instructive analogy.

To digitally reproduce music, states of a musical work are sampled at various rates, typically ranging from 22k to 44k or more samples per second. The lower the sampling rate, the less faithfully the music is reproduced. Carried to its logical conclusion, a single sample of a song is useless if one wants to "hear" the data as music.

Think of the production of music or a song as a process requiring the dynamic interactions of the people and instruments, and the pitch, frequency and other constantly changing relationships necessary to produce the notes as the song progresses from beginning to end. To digitize the music, the state of each of these attributes must be sampled frequently as the song progresses. Digital video captures images by sampling action in frames per second. Same idea: the greater the number of samples, the greater the fidelity of the reproduction.
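A small worked example may help show how quickly fidelity drops with the sampling rate. The tone frequency, sampling rates and duration below are assumed numbers chosen for illustration.

import math

def sample_tone(freq_hz: float, rate_hz: float, duration_s: float):
    """Sample a pure tone of freq_hz at rate_hz samples per second."""
    n = int(rate_hz * duration_s)
    return [math.sin(2 * math.pi * freq_hz * i / rate_hz) for i in range(n)]

for rate in (44100, 8000, 600):  # samples per second
    samples = sample_tone(440.0, rate, 0.01)  # 10 ms of a 440 Hz tone
    print(f"{rate:>6} samples/s -> {len(samples)} samples in 10 ms")

At 44,100 samples per second the 10 ms excerpt is captured in 441 samples; at 600 samples per second only 6 remain, far too few to recover the 440 Hz tone. Sparse descriptions of a system's or an accident's dynamics lose information in the same way.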

Apply similar thinking to the descriptions of systems and their operation, and how such information is presented to system safety analysts who are expected to find hazards in a system and predict system aberrations. How well is it possible to meet those expectations if the descriptions of the processes they are to analyze contain insufficient samples of the "music" the system operation might produce?

Now, think also of accidents as processes, involving dynamic interactions of the people, objects and energies needed to produce the accident as it progresses from its beginning to its outcome over time. What data do we sample to predict and capture a description of that process? That question has been answered after the fact, in part, in some activities, such as aircraft system operations captured with digital flight data recorders and analog cockpit voice recorders. But before the accidents happen, how can you predict them with insufficiently sampled process descriptions?

The question seems worthy of more exploration.

Saturday, June 7, 2008

Lessons Learning Systems

Accident investigation practices that produce "lessons learned" are coming under increasing scrutiny. Issues surfacing as a result include how to streamline and maximize lessons learned development, documentation, dissemination, accessibility, internalization and feedback. Serious challenges to many aspects of current lessons learned systems are emerging, including the
  • cause-based framework of present lessons learned systems
  • orientation of accident investigations relative to lessons learned
  • focus of lessons learned efforts
  • derivation of lessons learned during accident investigations
  • maximization of the number of lessons learned
  • language and structure of lessons learned documentation
  • latency in lessons learned cycles
  • context available with lessons learned
  • harmonizing of lessons learned data derived from accident and other mishap investigations with lessons derived from other sources in an organization
  • data density of lessons learned outputs
  • breadth of the accessibility of lessons learned
  • internalization of lessons learned when accessed
  • monitoring of changes in activities attributable to lessons learned in mishaps
  • lessons learned life span
  • lessons learned obsolescence, and
  • strategies for improving lessons learning systems performance
See http://www.iprr.org/research/llset/SSTS08_tutorial.pdf for a presentation about this topic. And chip in your two cents' worth if you have some useful thoughts.