1 INTRODUCTION
1.1 Purpose of Human Reliability Analysis (HRA)
1.1.1 Industries that routinely use quantitative risk assessment (QRA) to assess the
frequency of system failures, whether as part of the design process or of ongoing
operations management, have recognized that producing valid results requires assessing
the contribution of the human element to system failure. The accepted way of
incorporating the human element into QRA and FSA studies is through the use of human
reliability analysis (HRA).
1.1.2 HRA was developed primarily for the nuclear industry. Using HRA in other
industries requires that the techniques be appropriately adapted. For example, because
the nuclear industry has many built-in automatic protection systems, consideration of
the human element can be legitimately delayed until after consideration of the overall
system performance. On board ships, the human has a greater degree of freedom to disrupt
system performance. Therefore, a high-level task analysis needs to be considered at the
outset of an FSA.
1.1.3 HRA is a process which comprises a set of activities and the potential use of a
number of techniques depending on the overall objective of the analysis. HRA may be
performed on a qualitative or quantitative basis depending on the level of FSA being
undertaken. If a full quantitative analysis is required then Human Error Probabilities
(HEPs) can be derived in order to fit into quantified system models such as fault and
event trees. However, in many instances a qualitative analysis may be sufficient. The
HRA process usually consists of the following stages:
.1 identification of key tasks;
.2 task analysis of key tasks;
.3 human error identification;
.4 human error analysis; and
.5 human reliability quantification.
1.1.4 Where a fully-quantified FSA approach is required, HRA can be used to develop a
set of HEPs for incorporation into probabilistic risk assessment. However, this aspect
of HRA can be over-emphasized. Experienced practitioners admit that greater benefit is
derived from the early, qualitative stages of task analysis and human error
identification. Effort expended in these areas pays dividends because an HRA exercise
(like an FSA study) is successful only if the correct areas of concern have been chosen
for investigation.
1.1.5 It is also necessary to bear in mind that the data available for the last stage of
HRA, human reliability quantification, are currently limited. Although several human
error databases have been built up, the data contained in them are only marginally
relevant to the maritime industry. In some cases where an FSA requires quantitative
results from the HRA, expert judgement may be the most appropriate method for deriving
suitable data. Where expert judgement is used, it is important that the judgement can be
properly justified as required by appendix 8 of the FSA Guidelines.
1.2 Scope of the HRA Guidance
1.2.1 Figure 4 of the FSA Guidelines shows how the HRA Guidance fits into the FSA
process.
1.2.2 The amount of detail provided in this guidance is at a level similar to that given
in the FSA Guidelines, i.e. it states what should be done and what considerations should
be taken into account. Details of some techniques used to carry out the process are
provided in the appendices of this guidance.
1.2.3 The sheer volume of information about this topic prohibits the provision of
in-depth information: there are numerous HRA techniques, and task analysis is a
framework encompassing dozens of techniques. Table 1 lists the main references which
could be pursued.
1.2.4 As with FSA, HRA can be applied to the design, construction, maintenance and
operations of a ship.
1.3 Application
It is intended that this guidance should be used wherever an FSA is conducted on a
system which involves human action or intervention which affects system performance.
2 BASIC TERMINOLOGY
Error producing condition: Factors that can have a negative effect on human
performance.
Human error: A departure from acceptable or desirable practice on the part of an
individual or a group of individuals that can result in unacceptable or undesirable
results.
Human error recovery: The potential for the error to be recovered, either by the
individual or by another person, before the undesired consequences are realized.
Human error consequence: The undesired consequences of human error.
Human error probability: Defined as follows:

HEP = (number of human errors that have occurred) / (number of opportunities for human error)
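The definition above amounts to a simple ratio; a minimal sketch (with hypothetical
counts) is:

```python
def human_error_probability(errors_observed: int, opportunities: int) -> float:
    """HEP = number of human errors observed / number of opportunities for error."""
    if opportunities <= 0:
        raise ValueError("opportunities must be positive")
    return errors_observed / opportunities

# Hypothetical example: 3 errors observed in 2,000 opportunities
print(human_error_probability(3, 2000))  # 0.0015
```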
Human reliability: The probability that a person: (1) correctly performs some
system-required activity in a required time period (if time is a limiting factor) and
(2) performs no extraneous activity that can degrade the system. Human unreliability
is the opposite of this definition.
Performance shaping factors: Factors that can have a positive or negative effect
on human performance.
Task analysis: A collection of techniques used to compare the demands of a system
with the capabilities of the operator, usually with a view to improving performance,
e.g. by reducing errors.
3 METHODOLOGY
HRA can be considered to fit into the overall FSA process in the following way:
.1 identification of key human tasks consistent with step 1;
.2 risk assessment, including a detailed task analysis, human error analysis and
human reliability quantification consistent with step 2; and
.3 risk control options consistent with step 3.
4 PROBLEM DEFINITION
Additional human element issues which may be considered in the problem definition
include:
.1 personal factors, e.g. stress, fatigue;
.2 organizational and leadership factors, e.g. manning level;
.3 task features, e.g. task complexity; and
.4 onboard working conditions, e.g. human-machine interface.
5 HRA STEP 1 IDENTIFICATION OF HAZARDS
5.1 Scope
5.1.1 The purpose of this step is to identify key potential human interactions which, if
not performed correctly, could lead to system failure. This is a broad scoping exercise
where the aim is to identify areas of concern (e.g. whole tasks or large sub-tasks)
requiring further investigation. The techniques used here are the same as those used in
step 2, but in step 2 they are used much more rigorously.
5.1.2 Human hazard identification is the process of systematically identifying the ways
in which human error can contribute to accidents during normal and emergency operations.
As detailed in paragraph 5.2.2 below, standard techniques such as Hazard and Operability
(HazOp) study and Failure Mode and Effects Analysis (FMEA) can be, and are, used for
this purpose. Additionally, it is strongly advised that a high-level functional task
analysis is carried out. This section discusses those techniques which were developed
solely to address human hazards.
5.2 Methods for hazard identification
5.2.1 In order to carry out a human hazard analysis, it is first necessary to model the
system in order to identify the normal and emergency operating tasks that are carried
out by the crew. This is achieved by the use of a high-level task analysis (as described
in table 2) which identifies the main human tasks in terms of operational goals.
Developing a task analysis can draw on a range of data collection techniques, e.g.
interviews, observation and the critical incident technique, many of which can be used
to directly identify key tasks. Additionally, there are many other sources of
information which may be consulted, including design information, past experience,
normal and emergency operating procedures, etc.
5.2.2 At this stage it is not necessary to generate a lot of detail. The aim is to
identify those key human interactions which require further attention. Therefore, once
the main tasks, sub-tasks and their associated goals have been listed, the potential
contributors to human error of each task need to be identified together with the
potential hazards arising. There are a number of techniques which may be utilized for
this purpose, including human error HazOp, hazard checklists, etc. Examples of
human-related hazards, identifying a number of different potential contributors to
sub-standard performance, are given in table 3.
5.2.3 For each task and sub-task identified, the associated hazards and their associated
scenarios should be ranked in order of their criticality in the same manner as discussed
in section 5.2.2 of the FSA Guidelines.
5.3 Results
The output from step 1 is a set of activities (tasks and sub-tasks) with a ranked list
of hazards associated with each activity. This list needs to be coupled with the other
lists generated by the FSA process, and should therefore be produced in a common format.
Only the top few hazards for critical tasks are subjected to risk assessment; less
critical tasks are not examined further.
6 HRA STEP 2 RISK ANALYSIS
6.1 Scope
The purpose of step 2 is to identify those areas where the human element poses a high
risk to system safety and to evaluate the factors influencing the level of risk.
6.2 Detailed task analysis
6.2.1 At this stage, the key tasks are subjected to a detailed task analysis. Where the
tasks involve more decision-making than action, it may be more appropriate to carry out
a cognitive task analysis. Table 2 outlines the extended task analysis which was
developed for analysing decision-making tasks.
6.2.2 The task analysis should be developed until all critical sub-tasks have been
identified. The level of detail required is that which is appropriate for the
criticality of the operation under investigation. A good general rule is that the amount
of detail required should be sufficient to give the same degree of understanding as that
provided by the rest of the FSA exercise.
6.3 Human error analysis
6.3.1 The purpose of human error analysis is to produce a list of potential human errors
that can lead to the undesired consequence that is of concern. To help with this
exercise, some examples of typical human errors are included in figure 1.
6.3.2 Once all potential errors have been identified, they are typically classified
along the following lines. This classification allows the identification of a critical
subset of human errors that must be addressed:
.1 the supposed cause of the human error;
.2 the potential for error recovery, either by the operator or by another person
(this includes consideration of whether a single human error can result in
undesired consequences); and
.3 the potential consequences of the error.
6.3.3 Often, a qualitative analysis will be sufficient. A simple qualitative
assessment can be made using a recovery/consequence matrix such as that illustrated in
figure 2. Where necessary, a more detailed matrix can be developed using a scale for the
likely consequences and levels of recovery.
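A recovery/consequence screening of the kind shown in figure 2 can be sketched as a
simple lookup; the function name and category strings below are illustrative only:

```python
# Screening decisions keyed by (consequence, recovery), following figure 2:
# high consequence with low recovery potential MUST be considered further.
DECISION = {
    ("high", "low"):  "MUST CONSIDER",
    ("high", "high"): "may need to consider",
    ("low",  "low"):  "may need to consider",
    ("low",  "high"): "no need to consider",
}

def screen_error(consequence: str, recovery: str) -> str:
    """Return the screening decision for a (consequence, recovery) pair."""
    return DECISION[(consequence.lower(), recovery.lower())]

print(screen_error("high", "low"))  # MUST CONSIDER
```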
6.4 Human error quantification
6.4.1 This activity is undertaken where a probability of human error (HEP) is required
for input into a quantitative FSA. Human error quantification can be conducted in a
number of ways.
6.4.2 In some cases, because of the difficulties of acquiring reliable human error data
for the maritime industry, expert judgement techniques may need to be used for deriving
a probability for human error. Expert judgment techniques can be grouped into four
categories:
.1 paired comparisons;
.2 ranking and rating procedures;
.3 direct numerical estimation; and
.4 indirect numerical estimation.
It is particularly important that experts are provided with a thorough task definition.
A poor definition invariably produces poor estimates.
6.4.3 Absolute Probability Judgement (APJ) is a good direct method. It can be used in
various forms, from the single expert assessor to large groups of individuals whose
estimates are mathematically aggregated (see table 4). Other techniques which focus on
judgements from multiple experts include: brainstorming; consensus decision-making;
Delphi; and the Nominal Group technique.
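Where estimates from several experts are aggregated mathematically, one common choice
is the geometric mean, since HEPs are judged on a ratio (logarithmic) scale. A minimal
sketch, with hypothetical expert estimates:

```python
import math

def aggregate_hep(estimates: list[float]) -> float:
    """Aggregate independent expert HEP estimates by geometric mean,
    a common choice because HEPs span orders of magnitude."""
    if not estimates or any(e <= 0 for e in estimates):
        raise ValueError("estimates must be positive probabilities")
    log_mean = sum(math.log(e) for e in estimates) / len(estimates)
    return math.exp(log_mean)

# Hypothetical estimates from four experts for the same task
print(aggregate_hep([1e-3, 3e-3, 1e-2, 5e-3]))
```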
6.4.4 Alternatives to expert opinion are historic data (where available) and generic
error probabilities. Two main methods for HRA which have databases of human error
probabilities (mainly for the nuclear industry) are the Technique for Human Error Rate
Prediction (THERP) and Human Error Assessment and Reduction Technique (HEART) (see table
4).
6.4.5 Technique for Human Error Rate Prediction (THERP)
THERP was developed by Swain and Guttmann (1983) of Sandia National Laboratories for the
US Nuclear Regulatory Commission, and has become the most widely used human error
quantitative prediction technique. THERP is both a human reliability technique and a
human error databank. It models human errors using probability trees and models of
dependence, but also considers performance shaping factors (PSFs) affecting action. It
is critically dependent on its database of human error probabilities. It is considered
to be particularly effective in quantifying errors in highly procedural activities.
6.4.6 Human Error Assessment and Reduction Technique (HEART)
HEART is a technique developed by Williams (1985) that considers particular ergonomics,
tasks and environmental factors that adversely affect performance. The extent to which
each factor independently affects performance is quantified and the human error
probability is calculated as a function of the product of those factors identified for a
particular task.
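The HEART calculation described above can be sketched as follows. The generic HEP, EPC
multipliers and assessed proportions below are hypothetical; the adjustment formula
((max effect - 1) x assessed proportion + 1) is the form commonly given for HEART:

```python
def heart_hep(generic_hep: float, epcs: list[tuple[float, float]]) -> float:
    """Modify a generic-task HEP by each error producing condition (EPC).

    Each EPC is (max_effect, assessed_proportion), where max_effect is the
    maximum multiplier for that condition and assessed_proportion (0..1) is
    the analyst's judgement of how strongly the condition applies.
    """
    hep = generic_hep
    for max_effect, proportion in epcs:
        hep *= (max_effect - 1.0) * proportion + 1.0
    return min(hep, 1.0)  # a probability cannot exceed 1

# Hypothetical example: generic task HEP of 0.003 with two EPCs,
# e.g. time shortage (max effect x11, applied at 0.4) and
# ambiguous feedback (max effect x4, applied at 0.25)
print(heart_hep(0.003, [(11, 0.4), (4, 0.25)]))  # 0.02625
```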
6.4.7 HEART provides specific information on remedial risk control options to combat
human error. It focuses on five particular causes and contributions to human error:
impaired system knowledge; response time shortage; poor or ambiguous system feedback;
significant judgement required of operator; and the level of alertness resulting from
duties, ill health or the environment.
6.4.8 When applying human error quantification techniques, it is important to consider
the following:
.1 Order-of-magnitude estimates of human error are sufficient for most
applications. The derivation of HEPs may be influenced by modelling and
quantitative uncertainties; a final sensitivity analysis should therefore be
presented to show the effect of these uncertainties on the estimated risks.
.2 Human error quantification can be very effective when used to produce a
comparative analysis rather than an exact quantification. Used in this way, it
can support the evaluation of various risk control options.
.3 The detail of quantitative analysis should be consistent with the level of
detail of the FSA model: the HRA should not be more detailed than the technical
elements of the FSA. The level of detail should be selected based upon the
contribution of the activity to the risk, system or operation being analysed.
.4 The human error quantification tool selected should fit the needs of the
analysis. A significant number of techniques are available; the selection of a
technique should be assessed for consistency, usability, validity of results,
usefulness, effective use of resources for the HRA and the maturity of the
technique.
6.5 Results
6.5.1 The output from this step comprises:
.1 an analysis of key tasks;
.2 an identification of human errors associated with these tasks; and
.3 an assessment of human error probabilities (optional).
6.5.2 These results should then be considered in conjunction with the high-risk areas
identified elsewhere in step 2.
7 HRA STEP 3 RISK CONTROL OPTIONS
7.1 Scope
The purpose of step 3 is to ensure that the human element is taken into account in the
evaluation of technical, human, work environment, personnel and management-related risk
control options.
7.2 Application
7.2.1 The control of risks associated with the human interaction with a system can be
approached in the same way as for the development of other risk control measures.
Measures can be specified in order to:
.1 reduce the frequency of failure;
.2 mitigate the effects of failure;
.3 alleviate the circumstances in which failures occur; and
.4 mitigate the consequences of accidents.
7.2.2 Proper application of HRA can reveal that technological innovations can also
create problems which may be overlooked by FSA evaluation of technical factors only. A
typical example of this is the creation of long periods of low workload when a high
degree of automation is used. This in turn can lead to an inability to respond correctly
when required or even to the introduction of "risk-taking behaviour" in order to make
the job more interesting.
7.2.3 When dealing with risk control concerning human activity, it is important to
realize that more than one level of risk control measure may be necessary. This is
because human involvement spans a wide range of activities from day-to-day operations
through to senior management levels. Secondly, it must also be stressed that a basic
focus on good system design utilizing ergonomics and human factor principles is needed
in order to achieve enhanced operational safety and performance levels.
7.2.4 In line with figure 3 of the FSA Guidelines, risk control measures for human
interactions can be categorized into four areas as follows: (1) technical/engineering
subsystem, (2) working environment, (3) personnel subsystem and (4)
organizational/management subsystem. A description of the issues that may be considered
within each of these areas is given in figure 3.
7.2.5 Once the risk control measures have been initially specified, it is important to
reassess human intervention in the system in order to assess whether any new hazards
have been introduced. For example, if a decision had been taken to automate a particular
task, then the new task would need to be re-evaluated.
7.3 Results
The output from this step comprises a range of risk control options categorized into
the four areas presented in figure 3, easing the integration of human-related risks
into step 3.
8 HRA STEP 4 COST-BENEFIT ASSESSMENT
No specific HRA guidance for this section is required.
9 HRA STEP 5 RECOMMENDATIONS FOR DECISION-MAKING
Judicious use of the results of the HRA study should contribute to a set of balanced
decisions and recommendations of the whole FSA study.
FIGURE 1
TYPICAL HUMAN ERRORS

Physical errors              Mental errors
Action omitted               Lack of knowledge of system/situation
Action too much/little       Lack of attention
Action in wrong direction    Failure to remember procedures
Action mistimed              Communication breakdowns
Action on wrong object       Miscalculation
FIGURE 2
RECOVERY/CONSEQUENCE MATRIX

                                   Recovery
                            High                    Low
Consequence   High    May need to consider    MUST CONSIDER
              Low     No need to consider     May need to consider
FIGURE 3
EXAMPLES OF RISK CONTROL OPTIONS
Technical/engineering subsystem
- ergonomic design of equipment and work spaces
- good layout of bridge, machinery spaces
- ergonomic design of the man-machine interface/human-computer interface
- specification of information requirements for the crew to perform their tasks
- clear labelling and instructions on the operation of ship systems and
control/communications equipment
Working environment
- ship stability, effect on crew of working under conditions of pitch/roll
- weather effects, including fog, particularly on watch-keeping or external tasks
- ship location, open sea, approach to port, etc.
- appropriate levels of lighting for operations and maintenance tasks and for day
and night time operations
- consideration of noise levels (particularly for effect on communications)
- consideration of the effects of temperature and humidity on task performance
- consideration of the effects of vibration on task performance
Personnel subsystem
- development of appropriate training for crew members
- crew levels and make up
- language and cultural issues
- workload assessment (both too much and too little workload can be problematic)
- motivational and leadership issues
Organizational/management subsystem
- development of organization policies on recruitment, selection, training, crew
levels and make up, competency assessment, etc.
- development of operational and emergency procedures (including provisions for tug
and salvage services)
- use of safety management systems
- provision of weather forecasting/routeing services
TABLE 1
REFERENCES
1 Advisory Committee on the Safety of Nuclear Installations (1991) Human Factors
Study Group Second Report: Human reliability assessment, a critical overview.
2 Annett, J. and Stanton, N.A. (1998) Special issue on task analysis. Ergonomics,
41(11).
3 Ball, P.W. (1991) The guide to reducing human error in process operations. Human
Factors in Reliability Group, SRDA R3, HMSO.
4 Gertman, D.I. and Blackman, H.S. (1994) Human Reliability and Safety Analysis Data
Handbook. Wiley & Sons: New York.
5 Hollnagel, E. (1998) Cognitive Reliability and Error Analysis Method. Elsevier
Applied Science: London.
6 Human Factors in Reliability Group (1995) Improving Compliance with Safety
Procedures Reducing Industrial Violations. HSE Books: London.
7 Humphreys, P. (ed.) (1995) Human Reliability Assessor's Guide: A report by the
Human Factors in Reliability Group: Cheshire.
8 Johnson, L. and Johnson, N.E. (1987) A Knowledge Elicitation Method for Expert Systems
Design. Systems Research and Info. Science, Vol.2, 153-166.
9 Kirwan, B. (1992) Human error identification in human reliability assessment. Part I:
Overview of approaches. Applied Ergonomics, 23(5), 299-318.
10 Kirwan, B. (1997) A validation of three Human Reliability Quantification techniques:
THERP, HEART and JHEDI: Part III - Results and validation exercise. Applied
Ergonomics, 28(1), 27-39.
11 Kirwan, B. (1994) A Guide to Practical Human Reliability Assessment. Taylor
& Francis: London.
12 Kirwan, B. and Ainsworth, L.K. (1992) A Guide to Task Analysis. London: Taylor
& Francis.
13 Kirwan, B., Kennedy, R., Taylor-Adams, S. and Lambert, B. (1997) A validation of
three Human Reliability Quantification techniques: THERP, HEART and JHEDI: Part II -
Practical aspects of the usage of the techniques. Applied Ergonomics, 28(1),
17-25.
14 Lees, F. (1996) Human factors and human element. Loss Prevention in the Process
Industries: Hazard Identification, Assessment and Control. Vol. 3. Butterworth
Heinemann.
15 Pidgeon, N., Turner, B. and Blockley, D. (1991) The use of Grounded Theory for
conceptual analysis in knowledge elicitation. International Journal of Man-Machine
Studies, Vol.35, 151-173.
16 Rasmussen, J., Pedersen, O.M., Carino, A., Griffon, M., Mancini, C., and Gagnolet, P.
(1981) Classification system for reporting events involving human malfunctions.
Report Riso-M-2240, DK-4000. Roskilde, Riso National Laboratories, Denmark.
17 Swain, A.D. (1989) Comparative Evaluation of Methods for Human Reliability
Analysis. Gesellschaft für Reaktorsicherheit (GRS) mbH.
18 Swain, A.D. and Guttmann, H.E. (1983) Handbook of Human Reliability Analysis with
Emphasis on Nuclear Power Plant Applications: Final Report. NUREG/CR 1278. U.S.
Nuclear Regulatory Commission.
19 Williams, J.C. (1986) HEART: A proposed method for assessing and reducing human
error. Proceedings, 9th Advances in Reliability Technology Symposium, University of
Bradford. NCRS, UKAEA, Culcheth, Cheshire.
TABLE 2
SUMMARY OF TASK ANALYSIS TYPES
1 High-level task analysis
1.1 High-level task analysis here refers to the type of task analysis which allows an
analyst to gain a broad but shallow overview of the main functions which need to be
performed to accomplish a particular task.
1.2 High-level task analysis is undertaken in the following way:
.1 describe all operations within the system in terms of the tasks required to
achieve a specific operational goal; and
.2 consider goals associated with normal operations, emergency procedures,
maintenance and recovery measures.
1.3 The analysis is recorded either in a hierarchical format or in tabular form.
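As an illustration, a high-level task analysis recorded hierarchically might be
represented as a nested structure; the tasks below are hypothetical:

```python
# Illustrative only: a fragment of a high-level task analysis recorded
# hierarchically, using hypothetical tasks for a port-departure operation.
hta = {
    "goal": "0. Depart port safely",
    "subtasks": [
        {"goal": "1. Prepare bridge for departure",
         "subtasks": [
             {"goal": "1.1 Test steering gear"},
             {"goal": "1.2 Check navigation equipment"},
         ]},
        {"goal": "2. Navigate out of harbour",
         "subtasks": [
             {"goal": "2.1 Monitor traffic"},
             {"goal": "2.2 Communicate with pilot/VTS"},
         ]},
    ],
}

def list_goals(node: dict, depth: int = 0) -> list[str]:
    """Flatten the hierarchy into an indented list of goals."""
    lines = ["  " * depth + node["goal"]]
    for child in node.get("subtasks", []):
        lines.extend(list_goals(child, depth + 1))
    return lines

print("\n".join(list_goals(hta)))
```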
2 Detailed task analysis
2.1 Detailed task analysis is undertaken to identify:
.1 the overall task (or job) that is done;
.2 sub-tasks;
.3 all of the people who contribute to the task and their interactions;
.4 how the work is done, i.e. the working practices in normal and emergency
situations;
.5 any controls, displays, tools, etc. which are used; and
.6 factors which influence performance.
2.2 There are many task analysis techniques - Kirwan and Ainsworth (1992) list more than
twenty. They note that the most widely used, hierarchical task analysis (HTA), can be
used as a framework for applying other techniques:
.1 data collection techniques, e.g. activity sampling, critical incident,
questionnaires;
.2 task description techniques, e.g. charting and network techniques, tabular task
analysis;
.3 task simulation methods, e.g. computer modelling and simulation;
.4 task behaviour assessment methods, e.g. management and oversight risk trees;
and
.5 task requirement evaluation methods, e.g. ergonomics checklists.
3 Extended task analysis (XTA)
3.1 Traditional task analysis was designed for investigating manual tasks, and is not so
useful for analysing intellectual tasks, e.g. navigation decisions. Extended task
analysis or other cognitive task analyses (see Annett and Stanton, 1998) can be used
where the focus is less on what actions are performed and more on understanding the
rationale for the decisions that are taken.
3.2 XTA is used to map out the logical bases of the decision-making process which
underpin the task under examination. The activities which comprise XTA techniques are
described in Johnson and Johnson (1987). In summary, they are:
.1 Interview. The interviewer asks about the conditions which enable or disable
certain actions to be performed, and how a change in the conditions affects those
choices. The interviewer examines the individual's intentions to make sure that
all relevant aspects of the situation have been taken into account. This enables
the analyst to build up a good understanding of what the individual is doing and
why, and how it would change under varying conditions.
.2 Qualitative analysis of data. The interview is tape-recorded, transcribed and
subsequently analysed. Methods for analysing qualitative data are well established
in social science and have more recently been utilized in safety engineering. The
technique (called Grounded Theory) is described in detail by Pidgeon et al. (1991).
.3 Representation of the analysis in an appropriate format. The representation
scheme used in XTA is called systemic grammar networks, a form of associative
network (see Johnson and Johnson, 1987).
.4 Validation activities, e.g. observation, hypothesis.
TABLE 3
EXAMPLES OF HUMAN-RELATED HAZARDS
1 Human error occurs on board ships when a crew member's ability falls below what is
needed to successfully complete a task. Whilst this may be due to a lack of ability,
more commonly it is because the existing ability is hampered by adverse conditions.
Below are some examples (not exhaustive) of personal factors and unfavourable conditions
which constitute hazards to optimum performance. A comprehensive examination of all
human-related hazards should be performed. During the "design stage" it is typical to
focus mainly on task features and on board working conditions as potential human-related
hazards.
2 Personal factors
.1 Reduced ability, e.g. reduced vision or hearing;
.2 Lack of motivation, e.g. because of a lack of incentives to perform well;
.3 Lack of ability, e.g. lack of seamanship, unfamiliarity with vessel, lack of
fluency in the language used on board;
.4 Fatigue, e.g. because of lack of sleep or rest, irregular meals; and
.5 Stress.
3 Organizational and leadership factors
.1 Inadequate vessel management, e.g. inadequate supervision of work, lack of
coordination of work, lack of leadership;
.2 Inadequate shipowner management, e.g. inadequate routines and procedures, lack
of resources for maintenance, lack of resources for safe operation, inadequate
follow-up of vessel organization;
.3 Inadequate manning, e.g. too few crew, untrained crew; and
.4 Inadequate routines, e.g. for navigation, engine-room operations, cargo
handling, maintenance, emergency preparedness.
4 Task features
.1 Task complexity and task load, i.e. too high to be done comfortably or too low,
causing boredom;
.2 Unfamiliarity of the task;
.3 Ambiguity of the task goal; and
.4 Different tasks competing for attention.
5 Onboard working conditions
.1 Physical stress from, e.g. noise, vibration, sea motion, climate, temperature,
toxic substances, extreme environmental loads, night-watch;
.2 Ergonomic conditions, e.g. inadequate tools, inadequate illumination,
inadequate or ambiguous information, badly-designed human-machine interface;
.3 Social climate, e.g. inadequate communication, lack of cooperation; and
.4 Environmental conditions, e.g. restricted visibility, high traffic density,
restricted fairway.
TABLE 4
SUMMARY OF HUMAN ERROR ANALYSIS TECHNIQUES
The two main HRA quantitative techniques (HEART and THERP) are outlined below. CORE-DATA
provides data on generic probabilities. As the data from all of these sources are based
on non-marine industries, they need to be used with caution. A good alternative is to
use expert judgement and one technique for doing this is Absolute Probability Judgement.
1 Absolute Probability Judgement (APJ)
1.1 APJ refers to a group of techniques that utilize expert judgement to develop human
error probabilities (HEPs) detailed in Kirwan (1994) and Lees (1996). These techniques
are used when no relevant data exist for the situation in question, making some form of
direct numerical estimation the only way of developing values for HEPs.
1.2 There are a variety of techniques available. This gives the analyst some flexibility
in accommodating different types of analysis. Most of the techniques avoid potentially
detrimental group influences such as group bias. Typically the techniques used are: the
Delphi technique, the Nominal Group Technique and Paired Comparisons. The number and
type of experts that are required to participate in the process are similar to that
required for Hazard Identification techniques such as HazOp.
1.3 Paired Comparisons is a significant expert judgement technique. Using this
technique, an individual makes a series of judgements about pairs of tasks. The results
for each individual are analysed and the relative values for HEPs for the tasks derived.
Use of the technique rests upon the ability to include at least two tasks with known
HEPs. CORE-DATA and data from other industries may be useful.
1.4 The popularity of these techniques has reduced in recent times, probably due to the
requirement to get the relevant groups of experts together. However, these techniques
may be very appropriate for the maritime industry.
2 Technique for Human Error Rate Prediction (THERP)
2.1 THERP is one of the best known and most often utilized human reliability analysis
techniques. At first sight the technique can be rather daunting due to the volume of
information provided. This is because it is a comprehensive methodology covering task
analysis, human error identification, human error modelling and human error
quantification. However, it is best known for its human error quantification aspects,
which includes a series of human error probability (HEP) data tables and data
quantifying the effects of various performance shaping factors (PSFs). The data
presented is generally of a detailed nature and so not readily transferable to the
marine environment.
2.2 THERP contains a dependence model which is used to model the dependence relationship
between errors. For example, the model could be used to assess the dependence between
the helmsman making an error and the bridge officer noticing it. Operational experience
does show that there are dependence effects between people and between tasks. Whilst
this is the only human error model of its type, it has not been comprehensively
validated.
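The dependence model can be illustrated with a short sketch. The conditional-probability
equations below are those commonly cited from the THERP handbook for the five dependence
levels (zero to complete); the nominal HEP in the example is hypothetical:

```python
def conditional_hep(nominal_hep: float, dependence: str) -> float:
    """THERP dependence model: conditional probability that a second task fails
    given that the preceding task failed, for dependence levels ZD (zero),
    LD (low), MD (moderate), HD (high) and CD (complete)."""
    n = nominal_hep
    formulas = {
        "ZD": n,
        "LD": (1 + 19 * n) / 20,
        "MD": (1 + 6 * n) / 7,
        "HD": (1 + n) / 2,
        "CD": 1.0,
    }
    return formulas[dependence]

# Hypothetical example: a checking task with nominal HEP 0.01 (e.g. the bridge
# officer failing to notice a helmsman error), evaluated under high dependence:
print(conditional_hep(0.01, "HD"))  # 0.505
```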
2.3 A full THERP analysis can be resource-intensive due to the level of detail required
to utilize the technique properly. However, the use of this technique forces the analyst
to gain a detailed appreciation of the system and of the human error potential. THERP
models humans as any other subsystem in the FSA modelling process. The steps are as
follows:
.1 identify all the systems in the operation that are influenced and affected by
human operations;
.2 compile a list of, and analyse, all human operations that affect the operations
of the system by performing a detailed task analysis;
.3 determine the probabilities of human errors through error frequency data and
expert judgement and experience; and
.4 determine the effects of human errors by integrating the human error into the
PRA modelling procedure.
2.4 THERP includes a set of performance shaping factors (PSFs) that influence the human
errors at the operator level. These performance factors include experience, situational
stress factors, work environment, individual motivation, and the human-machine
interface. The PSFs are used as a basis for estimating nominal values and value ranges
for human error.
2.5 There are advantages to using THERP. First, it is a good tool for relative risk
comparisons. It can be used to measure the role of human error in an FSA and to evaluate
risk control options not necessarily in terms of a probability or frequency, but in
terms of risk magnitude. Also, THERP can be used with the standard event-tree/fault-tree
modelling approaches that are sometimes preferred by FSA practitioners. THERP is a
transparent technique that provides a systematic, well-documented approach to evaluating
the role of human errors in a technical system. The THERP database can be used through
systematic analysis or, where available, external human error data can be inserted.
3 Human Error Assessment and Reduction Technique (HEART)
3.1 HEART is best known as a relatively simple way of arriving at human error
probabilities (HEPs). The basis of the technique is a database of nine generic task
descriptions and an associated human error probability. The analyst matches the generic
task description to the task being assessed and then modifies the generic human error
probability according to the presence and strength of the identified error producing
conditions (EPCs). EPCs are conditions that increase the probability of error, similar
in concept to the PSFs used in THERP. A list of EPCs is supplied as part of the
technique, but it is up to the analyst to decide on the strength of effect for the task
in question.
3.2 Whilst the generic data is mainly derived from the nuclear industry, HEART does
appear amenable to application within other industries. It may be possible to tailor the
technique to the marine environment by including new EPCs such as weather. However, it
needs careful application to avoid ending up with very conservative estimates of
HEPs.
4 CORE-DATA
4.1 CORE-DATA is a database of human error probabilities. Access to the database is
available through the University of Birmingham in the United Kingdom. The database has
been developed as a result of sponsorship by the UK Health and Safety Executive with
support from the nuclear, rail, chemical, aviation and offshore industries and contains
up to 300 records as of January 1999.
4.2 Each record is a comprehensive presentation of information including, e.g. a task
summary, industry origin, country of origin, type of data collection used, a database
quality rating, description of the operation, performance shaping factors, sample size
and HEP.
4.3 As with all data from other industries, care needs to be taken when transferring the
data to the maritime industry. Some of the offshore data may be the most useful.