A Roadmap to a Just Culture: Enhancing the Safety Environment
1 The example is from the airborne environment, but it may well be the case for the ATC community.
Such punishment causes two problems: First, the confusing maintenance manual will still be in use in the system, potentially confusing other mechanics. Second, and far worse, is that such punishment, in effect, "shoots the messenger." By shooting a messenger, management or the government effectively guarantees that they will never again hear from any other messengers. This, in turn, guarantees that those problems in the "unreported occurrences" part of the pyramid will remain unreported – until, of course, they cause an accident or incident, whereupon the testimony at the accident hearing, once again, will be that, "We all knew about that problem."
One aviation regulator, the UK CAA, announced some years ago that, absent egregious behavior (e.g., intentional or criminal wrongdoing), it would not shoot the messenger, and encouraged UK airlines and other aviation industry employers to take the same approach. That is a major reason why the UK has some of the world's leading aviation safety information sharing programs, both government and private. The type of facilitating environment created by the UK is essential for the development of effective aviation safety information collection and sharing programs. Similarly, British Airways gave assurances that it would also not “shoot the messenger” in order to get information from pilots, mechanics, and others for BASIS (the British Airways Safety Information System). Many other airlines around the world are concluding that they must do the same in order to obtain the information they need to be proactive about safety.
Significant progress has also been made on this issue in the U.S. In October 2001, the FAA promulgated a regulation, modeled on the UK example, to the effect that information collected by airlines in FAA-approved flight data recorder information programs (commonly known as Flight Operational Quality Assurance, or FOQA[2], programs) will not be used against the airlines or their pilots for enforcement purposes (14 CFR § 13.401, Flight Operational Quality Assurance Program: Prohibition against use of data for enforcement purposes).
2 FOQA programs complement Aviation Safety Action Programs (ASAP), announced in January 2001 by the US President, in which airlines collect reports from pilots, mechanics, dispatchers, and others about potential safety concerns.
Concern that the information will be used to pursue criminal fines and/or incarceration. The threat of criminal proceedings tends to deter a reporter from submitting safety information that may be used against them.
A major obstacle to the collection and sharing of aviation safety information in some countries is the concern about criminal prosecution for regulatory infractions. Very few countries prohibit criminal prosecutions for aviation safety regulatory infractions. “Criminalization” of accidents has not yet become a major problem in the U.S., but the trend from some recent accidents suggests the need for the aviation community to pay close attention and be ready to respond.
Concern that the information will increase exposure to monetary liability in civil accident litigation. The threat of civil litigation tends to deter a reporter from submitting safety information that may be discoverable in litigation and possibly used against them in civil action.
One of the most significant problems in the U.S. is the concern that collected information may be used against the source in civil accident litigation. Significantly, the thinking on this issue has changed dramatically in recent years because the potential benefits of proactive information programs are increasing more rapidly than the risks of such programs. Until very recently, the concern was that collecting information could cause greater exposure to liability. The success stories from the first airlines to collect and use information, however, have caused an evolution toward a concern that not collecting information could result in increased exposure.
This evolution has occurred even though the confidentiality of information collection programs does not necessarily prevent discovery of the information in accident litigation. Two cases in the U.S. have addressed the confidentiality question in the context of aviation accidents, and they reached opposite results. In one case (arising from the air crash near Cali, Colombia), the judge recognized that the confidential information program would be undermined if the litigating parties were given access to the otherwise confidential information; thus, he decided, preliminarily, that it was more important for the airline to have a confidential information program than it was for the litigating parties to have access to it. In the other case (arising from the air crash at Charlotte), the judge reached the opposite result and allowed the litigating parties access to the information.
As this issue is decided in future cases, in aviation and other contexts, the courts will hopefully favor exempting such programs from the usual, and normally desirable, broad scope of litigation discovery. However, present case law is inconsistent, and future case law may not adequately protect the confidentiality of such programs. Thus, given the possibility of discovery in accident litigation, aviation community members will have to include, in their decision whether to establish proactive information programs, a weighing of potential program benefits against the risks of litigation discovery.
Concern that the information will be disclosed to the public, in the media or otherwise, and used unfairly, e.g., out of context, to the disadvantage of the provider of the information. Another problem in some countries is public access, including media access, to information that is held by government agencies. This problem does not affect the ability of the aviation community to create GAIN-type programs, but it could affect the extent to which government agencies in some countries will be granted access to any information from GAIN. Thus, in 1996 the FAA obtained legislation, Public Law 104-264, 49 U.S.C. § 40123, which requires it to protect voluntarily provided aviation safety information from public disclosure. This will not deprive the public of any information to which it would otherwise have access, because the agency would not otherwise receive the information; on the other hand, there is a significant public benefit in the FAA having the information, because the FAA can use it to help prevent accidents and incidents.
As we have seen above, companies and their employees play a role in filtering accidents and incidents according to what they define as severe enough to report. Some organizations use incident data as an opportunity to learn, by discovering precursors and acknowledging that under slightly different circumstances the event could have resulted in an accident. Definitions of incidents that foster learning should be open, unambiguous and sufficiently broad to allow reporters to decide whether or not to include the information. Even though reporters may not benefit directly from reporting an incident, doing so allows information about unknown potential hazards to be collected. Van der Schaaf (1991) argues that it is not good practice to use the same data both to learn from and to police; hence, incidents without injury may be a more suitable form of safety data to learn from than injury incidents, which are mandatory to report and may result in litigation. An organization's interpretation of incidents can influence its choice of information-gathering methods, which in turn affects the quantity and content of the information collected (Tamuz, 1994).
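To make van der Schaaf's point concrete, the sketch below (hypothetical Python, not from the source report or any real reporting system) keeps the full confidential report separate from a de-identified copy used purely for learning, so that the data used to learn are never the data used to police.

```python
# Hypothetical sketch: keep learning data separate from anything that could
# be used to discipline the reporter. Field names are invented for
# illustration; a real program would also scrub names from the narrative.
from dataclasses import dataclass
from typing import Optional


@dataclass
class IncidentReport:
    """Full report as submitted; held confidentially by the program."""
    reporter_id: str                      # identity; never circulated
    date: str
    narrative: str                        # open, free-text description
    injury: bool                          # injury-free events still count
    immediate_cause: Optional[str] = None
    root_cause: Optional[str] = None


@dataclass
class LearningRecord:
    """De-identified copy circulated for organizational learning."""
    narrative: str
    injury: bool
    immediate_cause: Optional[str]
    root_cause: Optional[str]


def deidentify(report: IncidentReport) -> LearningRecord:
    """Copy only the fields needed to learn from the event."""
    return LearningRecord(report.narrative, report.injury,
                          report.immediate_cause, report.root_cause)
```

The design choice this illustrates is that discipline-relevant fields never leave the confidential store, which is what gives reporters confidence that injury-free incidents are safe to report.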
Clarke (1998) found that train drivers’ propensity to under-report incidents depended on the type of incident; for example, a signal passed at danger (SPAD) was the most likely to be reported. Furthermore, high levels of reporting were indicative of the priority attached to that type of incident by the organization. She also found that train drivers reported incidents that posed an immediate hazard but showed less intention to report incidents involving trespassing (even though 41% of train accidents in the UK in 1994/95 were due to vandalism). One reason given for this under-reporting was that drivers did not want to get someone else into trouble. Train drivers’ perceptions of management’s negative attitudes to incident reporting were found to reduce drivers’ confidence in management and in the reporting system, and produced a reluctance to report even some serious incidents.
The design of the accident reporting form is another key factor in determining the percentage of accidents that will be recorded (Wright & Barnard, 1975). If it is too time-consuming or difficult to complete, the process may never begin, or the form may not be filled in completely or accurately (Pimble & O'Toole, 1982; Lawson, 1991). In two studies (Lucas, 1991; Pimble & O'Toole, 1982), the content of reporting forms was found to emphasize the consequences rather than the causes of accidents, so complete and accurate data were not collected. Pimble and O'Toole (1982) additionally found that insufficient time is allowed for the completion of reports, and hence insufficient care is taken to ensure that coding is accurate. The responsibility for accident investigation often rests with the supervisor, who is not always given the skills to do the job properly. In the past, investigators were not familiar with human factors terminology, did not know the difference between immediate and root causes, and did not know how to investigate the underlying factors; immediate causes therefore became the main culprit (Stanton, 1990). Within a UK construction firm, Pimble and O'Toole (1982) found that no standard form was in place; instead, the company designed its own forms or adapted existing ones. Furthermore, there is often no consensus on the purpose and direction of the form (Stanton, 1990). The ideal situation would be for the same report form to be used throughout industry, supplemented with a single classification system (Pimble & O'Toole, 1982).
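As a rough illustration of the single standard form and classification system that Pimble and O'Toole (1982) recommend, the hypothetical sketch below rejects reports that record only consequences and requires both an immediate and an underlying cause to be classified. The cause categories are invented for illustration, not taken from the cited studies.

```python
# Hypothetical sketch of a shared cause taxonomy plus a completeness check
# that refuses reports describing only the consequences of an event.
from enum import Enum
from typing import List, Optional


class CauseCategory(Enum):
    PROCEDURE = "inadequate or confusing procedure"
    TRAINING = "insufficient training"
    EQUIPMENT = "equipment design or failure"
    SUPERVISION = "supervision or planning"
    ENVIRONMENT = "working environment"


def validate_form(consequence: str,
                  immediate_cause: Optional[CauseCategory],
                  root_cause: Optional[CauseCategory]) -> List[str]:
    """Return what is still missing before the form can be accepted."""
    missing = []
    if not consequence.strip():
        missing.append("describe what happened")
    if immediate_cause is None:
        missing.append("classify the immediate cause")
    if root_cause is None:
        missing.append("classify the underlying (root) cause")
    return missing


# A report that stops at the consequence is flagged as incomplete:
print(validate_form("worker slipped on scaffold", None, None))
# -> ['classify the immediate cause', 'classify the underlying (root) cause']
```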
In the offshore oil industry, financial incentives have been provided to employees for having no Lost Time Incidents, with the intention of motivating the workforce to work more safely. In practice, however, such incentives encourage the concealment of accidents and incidents, so that bonuses are not lost and accident statistics are kept to a minimum. In a qualitative study of two UK offshore oil installations in the North Sea in 1990, in which 85 workers were interviewed about their opinions of safety on their installation, Collinson (1999) described the reasons for the under-reporting of accidents. Although this paper was only recently published, the data are from more than 10 years ago, and safety has improved significantly in the UK offshore oil industry since then. Moreover, this is a purely qualitative study, in which the examples are anecdotal and, in some cases, only a very small number of personnel held these opinions. Despite this, the study does highlight examples of sub-standard reporting procedures which existed in the UK offshore oil industry 10 years ago and which may still be present today.
Collinson (1999) stated that employees who reported incidents were sometimes indirectly disciplined by being “retrained” or by acquiring a blemished record, thereby encouraging the concealment of self-incriminating information. In addition, he found that contract workers were more likely to conceal accidents because, being employed on short-term contracts, they perceived that involvement in an accident might threaten their job security. In the study, contractors who were involved in an accident were sometimes put on light duties, rather than being sent back onshore, so that their employing company would not punish them or lose out financially. In addition, collective incentive schemes tied to safety-related pay were found to encourage accident concealment and reinforce the blame culture. Management monitored performance with production targets, appraisal systems, performance-related pay, league tables, customer feedback and outsourcing. These examples of accident concealment indicate that workers' belief in the blame culture had a greater impact on their behavior than the safety culture espoused by management.
Other constraints on reporting include: a reluctance to implicate oneself or a colleague when a subsequent investigation might threaten one's well-being; a justified fear of retribution from colleagues or from employers (persons in authority); and a sense of disloyalty to colleagues (where reports focus on colleagues rather than on management).
Under-reporting by organizations can occur because they are responsible both for collecting the incident data and for reducing incident frequencies over time. In addition, it is often the companies with higher reported incident rates that become the focus of regulatory investigation. Collinson (1999) also reported that offshore employees were encouraged not to report all incidents, so that company records were kept to a minimum. Many of the safety officers confirmed that they had been pressured into downgrading the classification of incidents, such as recording Lost Time Incidents as Restricted Workday Cases. The reason the safety officers gave for downgrading the classification of some accidents was that it meant they were asked fewer questions by onshore management. The onshore safety department was also seen as willing to downgrade classifications, as it was more concerned with achieving British safety awards than with ensuring safe work practices. In summary, Collinson (1999) argues that by generating a defensive counter-culture of accident and incident concealment, performance assessment was at odds with the safety culture, and that under-reporting was more likely when employees feared retribution or victimization.
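If classifications are being downgraded as Collinson (1999) describes, the distortion is at least partly detectable: a site that records Lost Time Incidents as Restricted Workday Cases will show an RWC-to-LTI ratio well above comparable sites. The sketch below is a hypothetical monitor with invented numbers, offered purely as an illustration of that idea, not as anything from the source.

```python
# Hypothetical monitoring sketch: systematic downgrading of Lost Time
# Incidents (LTI) to Restricted Workday Cases (RWC) pushes a site's
# RWC:LTI ratio well above that of comparable sites. All site names and
# counts below are invented for illustration.
from statistics import median
from typing import Dict, List, Tuple


def flag_possible_downgrading(counts: Dict[str, Tuple[int, int]],
                              factor: float = 3.0) -> List[str]:
    """counts maps site -> (rwc, lti); flag sites whose RWC:LTI ratio
    exceeds `factor` times the median ratio across all sites."""
    ratios = {site: rwc / max(lti, 1) for site, (rwc, lti) in counts.items()}
    med = median(ratios.values())
    return [site for site, r in ratios.items() if r > factor * med]


sites = {"Alpha": (12, 10), "Bravo": (14, 11), "Charlie": (38, 4)}
print(flag_possible_downgrading(sites))  # ['Charlie'] -- candidate for audit
```

Such a flag is only a prompt for questions; a high ratio can also reflect genuinely different work, so it would warrant an audit rather than a sanction.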
Different departments or work teams within an organization may be associated with distinct subcultures and different safety climates, which can influence reporting rates (Fleming et al, 1998; Mearns et al, 1998). In particular, work environments in which accident reporting is discouraged often involve “macho” role models, for example in the construction industry (Leather, 1988), the offshore oil industry (Flin & Slaven, 1996; Mearns et al, 1997) and the aviation industry (O'Leary, 1995).
Researchers have found some links between incident reporting and individual differences. For example, personality in the cockpit was found to influence pilots’ propensity to report incidents: those who scored highly on self-reliance scales tended to have higher levels of guilt, because they took responsibility for mishaps whether or not the mishaps were under their control, which may lessen their likelihood of reporting (O'Leary, 1995). Trommelen (1991, cited by Clarke, 1998) postulated that workers’ propensity to report accidents reflects workers’ theories of accident causation and prevention more than it reflects the actual frequency of incidents. Statements such as ‘accidents cannot be prevented’ (personal skepticism), ‘accidents won’t happen to me’ (personal immunity) and ‘incidents are just part of the job’ are labeled ‘unconstructive beliefs’ by Cox and Cox (1991).
In a questionnaire study of UK train drivers, Clarke (1998) found that very few drivers (3%) reported other drivers’ rule-breaking behaviors, and a third of drivers felt that rule breaking by another driver was not worth reporting. She also found that train drivers were less likely to report incidents if they considered that managers would not be concerned with such reports. High levels of non-reporting were most evident when workers felt that incidents were just ‘part of the day’s work’ (a fatalistic attitude) and that ‘nothing would get done’ (a perception that management is not committed to safety). These findings indicate that incidents go unreported because they are accepted as the norm, an acceptance further reinforced when drivers perceived that reporting an incident would not result in any action being taken, indicating a lack of commitment by management. However, the results also indicate that drivers would be more likely to report an incident if they thought something would be done to remedy the situation.
Reprinted by permission from the Global Aviation Information Network.