Saturday, February 20, 2010
I was hesitant about posting this message, but having witnessed the problem on other occasions, decided it is important enough to do so.
My interest in accident investigation process research dates back to the early 1970s, when anomalies, differences, disputes and contradictions among investigators and investigations, observed during my participation in NTSB investigations in each mode, led to my efforts to see if I could harmonize the processes. That prompted me to document them, question and understand why each existed, and try to learn what might be done to overcome the differences. I have seen and studied an increasing number of research reports comparing various investigation methodologies, processes and practices, and written extensively about the topic. What finally motivated this blog was publication of a recent report comparing analysis results using an established investigation process with a systemic process under development, and the claimed benefits of the developing systemic process for accident investigation and analysis. I was knowledgeable about both processes. My review of the research report generated concerns about the research bias it evidenced, and the effects widely read biased comparative research results can have on future investigations and analyses.
I attribute the problem to bias rather than poor scholarship. Evidence of the authors' bias was readily discernible in their limited search for, and selection of, data sources about one of the two processes; the difference in the effort invested in applying the two processes to a selected incident; differences in the nature and scope of the samples used to illustrate the differences produced by each process; the serious misrepresentation of the capabilities of one process because some of its rules were disregarded during its application; unbalanced critical comments about the two processes; and the close personal involvement of the authors with the process they concluded was better.
Because of the exhibited bias favoring one of the processes, and the resultant under-analysis and misrepresentation of the "also-ran" process, false impressions about the nature, application and results of that process are reasonably predictable among uncritical or less knowledgeable readers. That can be expected to unfairly diminish the perceived value and potential use of the also-ran in the intellectual, social and economic marketplace. And that, in turn, could discourage use of a process that, fairly compared, might be superior to the favored process, to the detriment of safety.
Maximizing objectivity in comparative accident investigation and analysis research is a continuing challenge that apparently merits more attention than it has gotten, based on this reported research and previous documents. My criticisms offer a starter list for improvement. Other criteria for detecting or avoiding bias would be welcomed.
Saturday, June 7, 2008
- cause-based framework of present lessons learned systems
- orientation of accident investigations relative to lessons learned
- focus of lessons learned efforts
- derivation of lessons learned during accident investigations
- maximization of the number of lessons learned
- language and structure of lessons learned documentation
- latency in lessons learned cycles
- context available with lessons learned
- harmonizing of lessons learned data derived from accident and other mishap investigations with lessons derived from other sources in an organization
- data density of lessons learned outputs
- breadth of the accessibility of lessons learned
- internalization of lessons learned when accessed
- monitoring of changes in activities attributable to lessons learned in mishaps
- lessons learned life span
- lessons learned obsolescence, and
- strategies for improving lessons learned systems performance