
Topic 13: Safety, Reliability and Integrity Management Processes and the Human Factor Effect

michael saiki's picture

Throughout history, people have experienced many severe and fatal incidents that could have been averted, or at least greatly mitigated, if certain actions had been taken or if sound decision-making processes had been in place.

If we look at some of the worst disasters in history, we can conclude that human interaction with those systems contributed far more to the fatalities than the system failures themselves. In other words, these disasters could have been averted or minimized if the negligence or inaction of stakeholders at different levels in the chain had been properly controlled.

So here is the puzzle: most organisations design integrity management processes around assets (e.g. structural, pipeline and subsea facilities integrity).

Is it possible to estimate the influence of human activity on the integrity, safety and reliability of these systems? Can we develop a human activity integrity management process, with a feedback loop and a running activity impact assessment, to track risky or hazardous human actions or inactions? For example, in the Deepwater Horizon incident, if there had been an integrated feedback system stipulating that the next phase of drilling could not continue until the well integrity assessment and the cementing tests had been properly completed, the impact might have been reduced.
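The gating idea described here can be sketched as a simple phase-gate check: work cannot advance until every integrity assessment for the current phase is signed off. Everything below (class name, phase and check names) is hypothetical, purely to illustrate the feedback mechanism:

```python
# Hypothetical phase-gate sketch: the next phase cannot begin until every
# integrity check for the current phase has been signed off.
class PhaseGate:
    def __init__(self, phases):
        # phases: dict mapping phase name -> list of required checks
        self.phases = phases
        self.completed = set()  # (phase, check) pairs that are signed off

    def sign_off(self, phase, check):
        if check not in self.phases[phase]:
            raise ValueError(f"{check!r} is not a required check for {phase!r}")
        self.completed.add((phase, check))

    def can_advance(self, phase):
        # True only when every required check for this phase is signed off
        return all((phase, c) in self.completed for c in self.phases[phase])

gate = PhaseGate({"cementing": ["cement bond log", "negative pressure test"]})
gate.sign_off("cementing", "cement bond log")
print(gate.can_advance("cementing"))  # False: one test still outstanding
gate.sign_off("cementing", "negative pressure test")
print(gate.can_advance("cementing"))  # True
```

The point of the sketch is only the feedback loop: the system itself, not an individual's judgement under pressure, blocks progression when an assessment is missing.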


oseghale lucas okohue's picture

Major offshore catastrophes like the Piper Alpha explosion and the Deepwater Horizon incident have brought about significant changes in the oil and gas industry, especially in health and safety practices and regulations. It is presently estimated that about 80 percent of all oil and gas accidents, whether offshore, onshore or in marine activities, are attributable to human and organisational shortcomings or errors. Since it is neither possible nor, in most cases, preferable to "design out" the human, it is justifiable to design a human activity integrity management process that can be used to assess the influence of human activities on these systems, giving us a continuous feedback loop and an activity impact assessment to track risky or hazardous human actions or inactions.

Uchenna Onyia's picture

As we all know, and I invite anyone to prove me wrong, humans are fallible creatures. No one can claim to be perfect or to have gone through life without making mistakes. Mistakes are how we learn: it has been said that trial and error is the most recognised form of learning, and that experience is the best teacher. According to the Health and Safety Executive, "four fifths of all recorded accidents can be attributed, to some extent, to human errors and/or violations". Examples of recorded accidents involving human failure include:

The Three Mile Island nuclear reactor incident;

The King's Cross underground station fire;

The Clapham Junction rail crash; and

The sinking of the Herald of Free Enterprise.


uchenna onyia 51232632 MSc Subsea Engineering

Oluwasegun Onasanya's picture

Companies are increasingly realizing that achieving safety, reliability and plant integrity targets requires a holistic approach to integrity management. They are beginning to realize that safety, integrity and reliability are all linked. In essence, they are all manifestations of a management system that should be operating effectively to manage risk and yet achieve the commercial goals of the business.

Incorporating safety, along with other company values such as quality and productivity, will help to create a positive safety culture throughout the organization. The accepted philosophy is that for a safety program to be effective, regardless of the specific industry or occupation, it needs support from senior leaders on down. To establish a strong and efficient safety culture, management must "walk the walk", and safety must traverse all lines of the organization, from senior executives to field workers conducting subsurface investigations. It is well established that a strong safety program makes good financial sense and may also present a return on investment: reducing accidents and enhancing employee safety awareness may drive down insurance rates, the potential for fines, worker compensation rates and lost production time.

In many industries, managers work hard to reduce equipment downtime and improve reliability. To determine what must be done to ensure that a physical asset continues to do what its users want it to do in its present operating environment, managers use reliability-centered maintenance (RCM), which helps to define a complete maintenance regimen and is a process that can form a vital part of a company’s preventive maintenance program.

RCM is a term that’s widely used in industry. However, it isn’t always well understood. It requires plant personnel to monitor, assess, predict, and generally understand how physical assets work. This identifies certain failure modes, thereby enabling appropriate maintenance tasks to be established. RCM is described as “a systematic approach to defining a routine maintenance program composed of cost-effective tasks that preserve important functions.”
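As a toy illustration of the RCM idea just described, an integrity review might rank failure modes by criticality and keep only those worth a routine task. The data, names and threshold below are hypothetical, not from any real RCM standard:

```python
# Hypothetical RCM-style sketch: assign a routine maintenance task only to
# failure modes whose criticality (severity x likelihood) justifies the cost.
failure_modes = [
    {"asset": "export pump", "mode": "seal leak", "severity": 4, "likelihood": 3},
    {"asset": "export pump", "mode": "bearing wear", "severity": 3, "likelihood": 4},
    {"asset": "control panel", "mode": "lamp failure", "severity": 1, "likelihood": 5},
]

def plan_tasks(modes, criticality_threshold=10):
    plan = []
    for m in modes:
        criticality = m["severity"] * m["likelihood"]
        if criticality >= criticality_threshold:
            plan.append((m["asset"], m["mode"], criticality))
    # Highest-criticality modes first
    return sorted(plan, key=lambda t: -t[2])

print(plan_tasks(failure_modes))
# [('export pump', 'seal leak', 12), ('export pump', 'bearing wear', 12)]
```

A real RCM study would of course consider functions, functional failures and consequence categories rather than a single score; the sketch only shows the cost-effectiveness filter in miniature.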

To develop a pragmatic integrity program, it is necessary to take into consideration all the relevant factors and to identify those where the business, site or plant can set clear and realistic targets.

Three main elements need to be considered when undertaking any integrity management review: the reliability and integrity of the assets themselves; the effectiveness of the systems and procedures in place to control operation and maintenance of the assets; and the knowledge and competence of the workforce managing and maintaining them. To gauge asset life extension, many companies have introduced initiatives to maintain integrity or improve reliability, such as criticality assessments, risk-based inspection (RBI) and reliability-centered maintenance (RCM).

Human factors become increasingly important: management understanding and support, communications across the lifecycle stages, establishment of effective information systems, and sufficient understanding of the design and construction features and deterioration mechanisms by all the relevant groups (plant teams and external specialist resources) within the business.


1. Principles of a safety culture.

2. Prevention is better than cure.

3. Manage integrity risks of aging assets. 


FELIXMAIYO's picture

I want to agree with most of my colleagues that human factors have been behind most major accidents in the energy industry. I am going to look at some major accidents and show how human factors influenced them. The Banqiao dam disaster is the worst recorded in human history, and it could have been averted had proper action been taken in advance. Before the accident there was a failure of communication: on August 6th a request was made for the dam to be opened, but the request was ignored because of the existing floods in the downstream areas. Later, on August 7th, another telegram was sent warning of dam failure, but within three hours the Shimantan dam broke, and within 30 minutes the water from that dam crested the Banqiao dam [1]. Also, during the design stage a recommendation was made for the dam to have 12 gates, but it was ignored; the dam was instead built with 5 gates, which could not withstand the volume of water flow.

During the Piper Alpha disaster, human factors played a major part and were blamed as the root cause of the tragic accident. The main blame fell on the company management's handling of safety on the platform, as stated in Lord Cullen's official report on the accident. The first failure of safety management was that the permit-to-work (PTW) system was not used properly. There was also inadequate communication, which contributed to the fatalities and to the civil conviction of the company.

All the major human factors can be summarised as: [2]

Organisational change (and transition management)

Staffing arrangements and workload

Training and competence (and supervision)

Fatigue (from shift work and maintenance)

Human factors in design

Alarm handling

Control rooms

Procedures (especially safety-critical procedures)

Organisational culture

Communication

Integration of human factors into risk assessment and investigation (including safety management systems)

Managing human failures (including maintenance errors)





Kobina Gyan Budu's picture

This is one area in risk management that needs critical attention. A safety, reliability and integrity management process is only as good as the human resource driving it. In the history of industrial accidents, human factors have made up the majority of root causes. It is worth taking a critical look at the various accidents independently to see how true this is, and I start with the Piper Alpha accident (July 06, 1988) below.

Piper Alpha Accident (July 06, 1988)

There were two gas pipelines, of 16″ and 18″, connecting Piper Alpha to the Claymore and Tartan platforms. As these pipelines were very long, reducing the flow pressure in them in case of fire would take several hours, making firefighting difficult. A study ordered by the operator (Occidental) two years earlier had warned of the dangers of these pipelines. Even though Occidental agreed with this warning, nothing was done to ensure the connecting platforms would be shut down in case of an emergency on Piper Alpha. Why? Purely a human factor in action.

The control room, which housed most of the people with the authority to order an evacuation, was not safely sited. Why? Gaps in the platform's engineering design. Human factor.

After the first explosion on Piper, the consequences of the failure event would have been less severe had the adjacent connected platforms, Tartan and Claymore, not continued to pump oil and gas to Piper Alpha. Why was fuel continuously pumped into an already burning platform? Because the operations crews of these connected platforms did not believe they had the authority to shut off production. How ridiculous. In fact, the pipeline from the Tartan platform continued pumping because its manager had been directed by his superior to do so; he was following a procedure meant to avert the huge financial cost of a shutdown. What gross neglect of safety. Pumping was stopped only after the second explosion, by then regardless of the financial consequences.

The operation of a condensate pump (Pump A) was a major contributing factor to the first explosion. This pump had its pressure safety valve (PSV #504) removed for routine maintenance, which sounds reasonable. The pump itself had a fortnightly overhaul plan (also reasonable), but the plan was not followed assiduously (human factor). A morning shift leader's failure (human factor) to communicate properly with the night shift leader led to the operation of condensate pump A, which was not to be operated under any circumstances. Whether there was a "shift handover process" in place is another question that borders on human factors. Interestingly, another permit had been issued for a general overhaul of the same Pump A, a job that had not yet begun, so there were two different permits for the same job on the same piece of equipment (more confusion, another human factor). Had the first permit been followed, and the job done and closed out properly, would the second permit and the confusion ever have arisen? Human factor.

An earlier audit had suggested that a procedure be developed to keep some suction pumps in automatic mode whenever divers were not working close to the intakes, but what happened? Interestingly, this was never developed or implemented. Gross neglect of safety on display (human factor).

After the first explosion on Piper, a custodian managed to press the emergency stop button, closing the big valves in the sea lines and thereby stopping all oil and gas production. This act should have safely isolated the platform from further fuel and made containment easier. That was not to be, because the firewalls broke in the first explosion. Why did the firewalls break? Because they were designed to resist fire, not explosion: the platform was originally built for oil, not gas. Can this be a tangible reason? Certainly not. If the platform was originally built for oil and later had to handle gas as well, all that was needed was a change management process (re-assess the risks, identify new ones, make appropriate modifications if need be, and formalise the existing procedures to accommodate the change). A change management process is vital in risk management. This was not done (human factor).






Craig Donaldson's picture

When considering human error I think it is necessary to break it into two categories, immediate or active errors, which are often errors made by an individual on the front-line, and latent errors which include factors such as job factors, management factors, organisational factors and design factors.

I will first look at active errors. Human error can be explained by reference to how individuals handle information. A systematic approach to human error must involve a classification of errors that can be recognised, assessed and managed effectively in order to control risks. There are many differing opinions on this, but one of the best and most commonly used is based on Rasmussen's performance levels [1], which have been used as a framework to classify error types.

The three performance levels are:

• Skill based
• Rule based
• Knowledge based

At the skill based level we carry out routine, highly practised tasks in a largely automated fashion,
except for occasional conscious checks of performance. People tend to be very good at this for most of the time.

We switch to the rule-based level when there is a need to modify our largely pre-programmed behaviour, i.e. when we have to take account of some change in the situation. This ability is often developed by training and/or experience. It is called the rule-based level because we apply stored rules.

The knowledge-based level is something we come to very reluctantly. Only when we have repeatedly
failed to find a solution using known methods do we resort to the slow and highly error-prone business of thinking things through on the spot.


1. Rasmussen, J 1982, 'Human Errors - A Taxonomy for Describing Human Malfunction in Industrial Installations' Journal of Occupational Accidents, vol 4, no. 2-4, pp. 311-333.

Uchenna Onyia's picture



Human error can never be totally eliminated, and there has been much research on how to quantify this risk. Research suggests that there could be as many as 38 factors to be considered at five different cognitive levels. The various methods for assessing human reliability require considerable knowledge and experience on the part of the user and were originally developed by the nuclear industry. They are also useful in assessing operator risk in process operations. One such method, developed by Bello and Colombari and known as TESEO, uses only 5 factors and is suitable for assessing operator response in a control-room situation.
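The five-factor method described above combines its factors multiplicatively into a human error probability (HEP). A minimal sketch, assuming the standard multiplicative form; the numeric factor values below are purely illustrative of the kind of values the published tables contain, not the tables themselves:

```python
# Sketch of a five-factor human error probability (HEP) estimate:
# the HEP is the product of the five factors. Factor values are
# illustrative only; real assessments use the published tables.
def teseo_hep(k1, k2, k3, k4, k5):
    # Cap at 1.0, since a probability cannot exceed certainty
    return min(1.0, k1 * k2 * k3 * k4 * k5)

hep = teseo_hep(
    k1=0.01,  # type of activity (e.g. task requiring attention)
    k2=0.5,   # temporary stress factor (ample time available)
    k3=1.0,   # operator qualities (average training)
    k4=2.0,   # anxiety factor (potential emergency)
    k5=1.0,   # ergonomic factor (good interface)
)
print(hep)  # 0.01
```

The multiplicative form makes the method's message clear: a poor value on any single factor (say, a grave emergency or poor ergonomics) inflates the whole estimate.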


uchenna onyia 51232632

MSc Subsea Engineering 

Ambrose Ssentongo's picture

The human effect in any safety, reliability and integrity management process is, in my opinion, largely influenced by two factors: how people perceive the risk they are faced with, and communication. These two should be central when designing any safety system, and can be addressed as follows.

First, endeavour to understand people's/users' level of knowledge (what they know about a system or operation) and identify potential threats to the decision-making process in the event that someone is faced with a risk or accident. One example of a threat is misinterpretation of a signal, as is reported to have contributed to the Deepwater Horizon accident: the negative test results were misinterpreted and the test deemed to have passed when the contrary was true. Another potential threat is anticipating a risk but failing to react appropriately: say a fire breaks out, and the first person to notice it does not hit the alarm but instead runs to the evacuation point for his own safety, even though he or she had the chance to hit the alarm; this may be caused by panic. Such considerations will help the designer/safety engineer know what his communication should address.

Secondly, it is important to have a means of measuring the effectiveness of the communication, that is, to find out whether people have understood (it is a common mistake to take people's level of knowledge for granted, either underestimating or overestimating it). The communication must therefore be tested: "one should no more release untested communication than untested pharmaceuticals" (Fischhoff, in Risk Perception and Communication, Routledge, 2011).

Not being able to make accurate simulations of human behaviour in
scenarios of risk and accidents could probably be the one thing standing in the
way of achieving 100% safety assurance in all industry operations because
nearly everything else can be simulated and designed for.

Sources: Risk perception and communication by Baruch Fischhoff  

Dike Nwabueze Chinedu.'s picture

Technical developments have contributed greatly to improved reliability, but are not sufficient on their own to provide the performance demanded of the highest-integrity equipment. Every man-made system contains certain functions which are allocated to the person/operator, and failure to perform these functions correctly or within prescribed limits can lead to system failure. Human error can be defined as a failure on the part of the human to perform a prescribed act (or the performance of a prohibited act) within specified limits of accuracy, sequence or time, which could result in damaged equipment and property or disruption of scheduled operations. In practice, successful system performance depends not only on the human component but on the reliability of all components [1].

Human error can stem from attention lapses, misperception, mistaken priorities, mistaken actions, violations or sabotage. Automatic landing of aircraft, emergency shut-down systems and laboratory experiments can be classified as high-integrity systems whose reliability is highly dependent on ergonomics (human factors engineering) [1].

These failures can be detected by a feedback mechanism or other means; however, analysis of human performance and estimation of human error probabilities require supporting quantitative data.

One way of limiting human factor induced failures is by designing 'forgiving systems'.


[1] Cox, S. and Tait, R., Safety, Reliability and Risk Management: An Integrated Approach, 2nd edition.



Human Factors

One of the key growing aspects within safety is 'Human Factors'. Human factors is about the environmental, organisational and job factors, and the human and individual characteristics, which influence behaviour at work in a way that affects health and safety. Human factors covers topics such as safety culture, situation awareness, decision-making, leadership, group conformity, heuristics, etc. A simple way to view human factors is to think about three aspects, the job, the individual and the organisation, and how they impact people's health and safety-related behaviour.



Safety Culture

Safety culture is considered within the field of human factors. The U.K. Health & Safety Commission (HSC) defined safety culture as "the product of individual and group values, attitudes, competencies and patterns of behaviour that determine the commitment to, and the style and proficiency of, an organisation's safety and health programs." There are various definitions of safety culture, but in summary it is a proactive stance towards safety. The term 'safety culture' first came to prominence in the International Atomic Energy Agency's (IAEA) initial report on the Chernobyl nuclear accident. It was also discussed in other major accident inquiries of the time, such as those into the North Sea Piper Alpha oil platform explosion and the London Clapham Junction rail disaster. Poor safety culture was argued to be the main determinant of those accidents.



Heuristics are shortcuts, and they are considered within the field of human factors. People use a number of heuristics to evaluate information. They are useful shortcuts for thinking, but can lead to inaccurate judgements in some situations, for example under lack of time, work pressure or rising stress levels. Heuristics and related effects include:

Availability heuristic: events that can be more easily brought to mind or imagined are judged to be more likely than events that cannot easily be imagined.

Anchoring heuristic: people often start with one piece of known information, then adjust it to create an estimate of an unknown risk, but the adjustment will usually not be big enough.

Asymmetry between gains and losses: people take higher risks hoping nothing goes wrong than they accept when trying to secure their gains.

Certainty effects: people prefer to move from uncertainty to certainty over making a similar gain in certainty that does not lead to full certainty.

Affect theories: a positive or negative feeling towards an object causes evaluations of the object's riskiness (rather than the other way around). There is a strong negative correlation between people's judgements of the risk and benefit of an activity.

Connie Shellcock's picture

Dike Nwabueze Chinedu previously said, "One way of limiting human factor induced failures is by designing 'forgiving systems'." I very much agree with this statement. I don't think future technology should try to eradicate human errors, as it is my personal opinion that they are inevitable. Instead, technology could be designed with safety nets, in such a way that human errors have been accounted for when designing the system, be it a management system, technical system or other. I am studying the Safety and Reliability Engineering MSc, and we have to complete an entire module just on human factors, which mainly discusses the psychological factors involved in working in the energy industry. The fact that we must study this module in order to become safety engineers goes to show that human factors is a pivotal part of working in industry today.

Mark Haley's picture

Connie you are absolutely right! By understanding Human Factors you can start to build in measures and training to reduce risk. ALL major incidents (and minor ones) have an element of human factors in them, and they could all have been prevented if this had been identified.
I am a Human Factors Instructor in the aviation industry, and we have to understand the human element of everything we do, as the consequences are severe and usually fatal. As a previous post in this blog mentioned, 'to err is human'. Therefore, so long as we understand this and build the relevant procedures and systems to cope with human error, safety will be improved. But how do we do this?
When looking at Human Factors we need a pro-active system that encourages staff to report "near-misses" or "error-provocative situations".
With this information it is possible to build a predictive system that enables defences to be put in place before an incident happens.
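The reporting-and-trending idea can be sketched in a few lines: count near-miss reports per category and flag any category that crosses a review threshold, so defences can be put in place before an incident. The categories, dates and threshold below are invented for illustration:

```python
# Sketch of trend-spotting over near-miss reports: flag any category
# whose report count reaches a review threshold. Data are hypothetical.
from collections import Counter

def flag_trends(reports, threshold=3):
    counts = Counter(category for _, category in reports)
    return sorted(cat for cat, n in counts.items() if n >= threshold)

reports = [
    ("2023-01-04", "permit-to-work"),
    ("2023-01-11", "alarm handling"),
    ("2023-01-19", "permit-to-work"),
    ("2023-02-02", "permit-to-work"),
]
print(flag_trends(reports))  # ['permit-to-work']
```

A real system would weight reports by severity and look at rates over time rather than raw counts, but even this crude tally shows how open reporting turns scattered minor events into an actionable signal.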

This approach is now being introduced into more and more industries. The NHS is currently undergoing a major staff training programme into Human Factors as they have recognized not only the safety benefits but also the long term cost benefits (i.e. not paying as much compensation to patients/relatives). The aviation industry and military have used Human Factors training for a number of years and reaped the benefits from this. Certain parts of the Oil & Gas industry have picked up on Human Factors training but it has yet to penetrate all companies.

Mark Haley

Foivos Theofilopoulos's picture

In my opinion, the human part of every system is the most difficult to handle. Humans can always make better systems, better machinery and better mathematical formulas for reliability and predictions about the imminent (and even distant) future of what we build. However, predicting human reaction is something near impossible. This is why human factors is so important.

Getting back to the original example of the threadstarter, I do not understand how you correlate human decisions with an integrity system; human errors are not always attributable to a lack of information. As Ambrose described very well, even receiving the right signal or piece of information does not mean that we will take the correct action or make the correct choice based on it. Unfortunately, since every system is created by humans, we cannot create a system that can judge whether a decision is right or wrong and accordingly allow or block it. We would have to account for every possible scenario, and even then there are unknown unknowns: hazards and scenarios that have never been thought of until they happen.

In terms of a system, one that is in my opinion overlooked but can give tremendous improvements is job training with some kind of simulator: not only can it help people deal with stressful situations, but humans always respond better to visual stimuli than to memorising entire instruction manuals and books.

Samuel Bamkefa's picture

The main difference that exists between human operations and machines/systems operations is what I will loosely call 'predictability'. A machine can be designed to do something, or given instructions, and provided it is appropriate for the purpose, will be expected to work. On the other hand, we can as well give instructions to humans who are appropriate for the purpose. Whether or not the instruction will be implemented and how is where the human factor comes in.

It has been remarked in a couple of posts that many of the major accidents that have occurred recently have been due to human error. I am of the opinion that human error will continue to account for a large percentage of accidents, especially in technological areas that have been around for a long time, because most of these processes have been tested, evaluated and refined over time. As a result, most of the non-human failure modes of such systems will already have been evaluated, and many of these systems have been designed for a high level of reliability. Therefore, for something to fail, either a design has been carried out wrongly by someone or someone has operated the system wrongly.

On the other hand, for emerging technologies where all the modes of failure may not yet have been seen or understood, system failures can occur that may not be attributable to human error. When nobody knows how you could have done better, I do not think it can be classified as an error.


Mark Haley's picture

Samuel you state in your last post that ‘for emerging technologies where all the modes of failure may not have been seen or understood, system failures can occur and this may not be attributed to human error'.

The fact that we do not know what could go wrong does not mean that when something unknown happens it cannot be attributed to human factors. That is, if something fails or an incident occurs you will always be able to trace it back to human factors, e.g.:
- An incorrect engineering tolerance
- Wrong procedures in place
- Not enough data to recognize possible failures, etc.

It might well be the case that for emerging technologies that we see new failures that we may not have considered, but this is still a Human Factor. By understanding this and allowing your staff to recognize it, the hope is that small/new failures can be picked up early before they become disasters. This is the work of Human Factors - A proactive approach and a just culture that encourages safety through shared values, attitudes and behaviour.
It is very unlikely that you will ever be able to say that incident/failure could not be attributed to human error.

Mark Haley

Ike Precious C.'s picture

Good comment, Mark, but I tend to agree with Samuel to an extent. Taking my cue from the Fukushima disaster in Japan, the diesel pump system was designed for a maximum wave height (I think 10 metres), but when the tsunami struck after the earthquake, the wave was 14 metres high and flooded the pumps so that they could not function. In my opinion, I would rather say unusual events than emerging technologies, because these systems were designed against worst-case scenarios, only for natural disasters (the first of their kind) to occur and make the worst case look ordinary.

Nevertheless, I know fingers will always be pointed at the human factors involved, because the designers could have designed for a worse-than-worst-case scenario, which could have reduced the level of damage or even prevented it (the recent Hurricane Sandy in the USA, which was predicted and for which early warnings were sounded, is an example). But my question is: to what degree can you say that a failure of a system was inevitable, irrespective of the best reliability system put into use?

Mark, as a Human Factor Instructor, do you think that series of reports of "near-misses" and "error-provocative situations" will give birth to a system that will ensure that the reliability of the system can be maintained or improved, even when uncertain conditions arise, like Earthquakes of unusual magnitudes? Will the series of report give a clue of an unusual case arising that may/will compromise the reliability of the system?

Thank you.

Precious Ike

Mark Haley's picture

You raise an excellent question and I can only apologise for the delay in replying to it.

Let me first reply to your comment about the Fukushima disaster. As you point out, the safety case was calculated on a maximum wave height of 10 m, and the wave that occurred that day was 14 m.

We all know: Risk = Consequence X Probability

So in the case of a nuclear power facility, because the consequences of an incident are that much greater compared with a conventional power station, the probability of less likely events needs to be considered. I would therefore argue in the case of Fukushima that, whilst the probability of a 14 m wave was low, it still needed to be considered because of the consequences. The reason it was not considered was that a human failed to recognise this, or possibly it was recognised and then ignored. Either way it boils down to human decision-making.

Your last comment raises the point I often discuss during human factors instructional sessions. As humans we do not like to be seen to fail or make mistakes and often minor mistakes are dismissed or hidden to protect pride or even your job!
However, with open and honest reporting of errors, mistakes and minor incidents a database can be built which will highlight trends. By spotting these trends early it is possible to mitigate more serious incidents. I have seen this many times in the aviation industry, where engineers, pilots or ground staff have reported minor incidents which have resulted in a change to procedures.

How do you know the change in procedures has done anything, I hear you ask?
Sometimes the consequences of not changing procedures are obvious, other times you will only know if your change has worked if another operator does not change their procedures and then subsequently has a serious incident because of it.

At the moment I know that the aviation industry takes this reporting very seriously and the energy industry is trying to do the same, but it is still very piecemeal. To achieve reporting of minor incidents you need to have a just and fair culture and one of openness. This is easier said than done and through education and human factors training this hopefully can become a reality.

Mark Haley

Ike Precious C.

Thank you Mark for the reply. You indeed cleared my doubts in these areas. 

This may sound too curious, inquisitive or stupid, but where I come from we have a saying that 'one who asks questions never loses his way'. To what extent do we consider human weaknesses as being normal, without their having a negative impact such as losing your job?

I think, as you mentioned in your reply, if any worker is at risk of losing his/her job, or of putting fellow workers at risk of losing theirs, reporting will not be as effective as it ought to be.

You also mentioned towards the end of your reply that education and human factors training can make such openness a reality. What areas of concern does the training focus on? What does one stand to gain after such training? To be honest, I have never heard of human factors training, or is there another name it goes under?

Thank you.

Mark Haley

A very insightful question, and the key to progress in human factors is a change in culture. Many still see mistakes or errors (there is a distinct difference between the two) as weakness, but you may have heard the saying 'to err is human': to make mistakes is not weakness, it is just human. By understanding how we make errors, we can improve procedures.
Human factors training tries to demonstrate that as humans we will get things wrong, but if we understand that things will go wrong and how those things can go wrong we can recognise them and make changes. By realising that we are all fallible regardless of how much we know or how experienced we are, we can begin to be open about our mistakes and understand them.
One of the keys to good Human Factors training is the support for a fair and just culture from management. Employees need to know that they can report incidents without fear of reprisal or losing their job (unless of course it is due to gross misconduct or negligence). If the management are firm supporters of this type of culture then Human Factors training can make a real difference.
Two of the biggest employers in the UK, the MOD and the NHS, take Human Factors training very seriously and they have seen major results because of it. It is still relatively new but it is getting a lot more attention because of the results. As for another name for it, I am sorry but I am not aware of one.

There are many facets to Human Factors training and the process is a continual one through lectures and group dialogue. No individual training session is the same as another, as they are all tailored to the audience, because no group of humans is the same as another. What is gained from each training session is a deeper understanding of a particular aspect of the limits of the Human Condition.

I will give one example. We as Humans all perceive things in a different way. If you understand that others may not see something the same way you do, say for example a safety sign, then you will make sure that when you create your sign the information or description is totally unambiguous and then you will check it for understanding rather than assuming, and then possibly review your sign at set dates to ensure it is doing what it was designed for.
For example, look at the picture below: do you see a pretty young lady, or do you see an old woman?

Ike Precious C.

Wow... that means much effort is surely involved in human factors training, because personalities and perceptions are dealt with here, rather than just systems.

From your example, I see a pretty young lady, though only the side of her face.

Thank you so much, for these clarifications Mark but be sure to expect more questions from me.

Thank you.

Soseleye F. Ideriah

In analysing the safety considerations of any system, it is important to base the design, as far as possible, on the worst-case scenario. For example, in construction, the design of a structural component is based on the worst possible loading combination. These combinations usually have an almost zero probability of occurring; however, they still occur in some extreme situations, leading to failure events. This approach heavily reduces the probability of failure, and it should be emulated when considering the human factor.

The best approach to including the human factor in any safe design would be designing a "fool-proof" system. Without predicting the level of intelligence of system operators, or of other people whom the system may expose to risk, the design should consistently adopt a worst-case scenario. Managing risk effectively involves identifying all possible risks and developing proper mitigating strategies, and a robust risk mitigation strategy can only be developed by considering a wide range of possibilities.


There have been tremendous strides in safety, reliability and integrity management processes in engineering compared with the industry in the early 1990s. Even though we have had serious accidents that have been costly in terms of human loss and capital loss, we can still say the engineering industry has made commendable progress in improving safety and reliability. A good question is: what are the main causes of the industrial accidents we still have today, despite these advances?

What is this human factor we talk about? The human factor is the dimension that looks at human inputs into processes; these can be direct (e.g. following safety procedures) or indirect (e.g. making decisions or supervising activities). To get a good understanding, I would like to draw an analogy with the issue of world peace. Looking at world history, there have always been conflicts, and conflict is still with our society. Despite the formation of organisations like the UN, EU, NATO, African Union, etc. to address conflict, we still see conflicts arising from human relationships, resource control, and so on. Has the world advanced in peace? Yes. But are there still conflicts? Yes. So also today, despite rising standards in safety and reliability engineering, we are still having accidents due to human error, arising either from failure to follow laid-down procedures or from wrong judgements and decisions. I believe that with constantly shrinking resources and the increasing complexity of engineering needs and projects, addressing the human factor will be ever more important to maintain and improve safety, reliability and integrity management in our industry today.



Uhunoma Osaigbovo

Subsea Engineering


Kwadwo Boateng Aniagyei

The human factor, in my opinion, will always be found to be a contributing factor to system failures and accidents, as it is humans who make all the engineering designs and put the systems in place. Safety at work is a difficult and complex phenomenon, and the global growth of the industry has elevated its safety concerns. The majority of occupational accidents occur due to the lack of attention given to safety performance, safety procedures and improvements in accident prevention methods. They may also be due to lack of knowledge and training, lack of supervision and lack of rules implementation. All of these are in one way or another influenced by human error, which also leads to negligence, carelessness and recklessness of workers, and lack of monitoring and control.

As my colleagues have all previously opined, most accidents in the industry are tied to human error and neglect. In most incidents, the timely intervention of the person(s) in charge, or the right judgement or decision, could have gone a long way to reduce the severity of the accident, or to prevent it from occurring altogether. Though technological advancements have shown how dominant human errors are in causing accidents, some accidents have also been found to be caused by system or equipment failures, as suggested by colleagues in the previous posts. However, these system/equipment failures are mostly due to wrong engineering designs, which is also a human factor. I agree that humans are bound to commit mistakes, but the presence of a well-monitored, coordinated and planned system will reduce, to an extent, how much humans affect industrial safety. Our regulatory framework should also have the right rules and guidelines in place at all times, rather than being a framework whose rules only come into existence after an accident has occurred.



Oluwatadegbe Adesunloye Oyolola

"Authorities" in many fields ascribe 70-90% of all accidents to human error. These estimates are misleading because they assume that a person should have taken (or not taken) a possible action, but ignore whether that action was likely or reasonable under the circumstances. In many cases, the real source of the error is the design rather than the human: someone created a product, facility or situation where safety depends on unrealistic or unattainable standards of behavior. When the inevitable error occurs, it is blamed on the human rather than on the flawed design. In short, designers sometimes expect the user to compensate for poor design.

Many examples could be called "human error", especially by the companies that designed the devices. If my "errors" caused a serious injury, they would doubtless be added to the assessment that human error causes 70-90% of all accidents. In reality, however, these are "design errors" that have become manifest through human action. In every case, the designer(s) violated one or more simple human factors principles and failed to plan for likely and foreseeable human action. Said another way, human errors are not random. It may be impossible to say exactly when a bad design will generate an error, but it is possible to say both whether an error is likely and the form that the error will take.

In sum, the designers are using idealized and unrealistic assumptions about idealized and unrealistic human behavior to compensate for their design defects. Product manufacturers should be just as responsible for proper human factors design as they are for proper electrical and mechanical design. Unfortunately, some designers still
think of human factors as a secondary issue that does not require serious effort.

In closing, I do not mean to say that humans never make errors that lead to accidents. Moreover, functional or other pragmatic constraints sometimes force designers into compromises that are less than ideal (as with the universal remote). Rather, my main point is that apparent cases of human error are often really cases of design error. Whatever the actual contribution of human error to accident causation, it is far less than the frequently estimated 70-90%.



Oluwatadegbe A.O

MSc Oil and Gas Engineering

Patricia Fleitas

Despite all the effort the industry makes to ensure correct design by following international standards, accidents can still happen. Furthermore, analysing the correlation between accidents and time, Step Change in Safety pointed out that, over time, once the mechanical issues of new equipment are proven to be resolved and the procedures used deliver a high safety performance, human safety behaviour has a huge influence on the overall process. As a result, the root causes of human mistakes must be analysed carefully.

In order to analyse the influence of human error, the following Ishikawa diagram was developed to find out how the perception, memory, decision-making and action processes ended in the catastrophic Piper Alpha accident (figure 1). The example below does not take into account every human error, but it gives a clear introduction to how these key human mistakes contributed to the spiral of the disaster. Nevertheless, after the principal human error that initiated the emergency situation (work permits), the design of the installations for a quick response under an emergency was the underlying cause of the escalation of the situation (accommodation modules located close to processing modules).

However, the human errors mentioned are not "individual errors"; they are mostly the result of the safety culture of the organisation. Hopkins A. (2002) pointed out that when a company has safety policies among its core values, the atmosphere of safety culture spreads to every level of the organisation, and individuals apply it as a full-time activity, including outside of work. Once the individual is formed under the organisation's culture of safety, a management mindset is then required to ensure the identification of hazards, the development of procedures, and the commitment to keeping the workplace safe. The process is one of continuous learning and improvement (a closed loop).


1) Step Change in Safety, "Changing minds: a practical guide for behavioural change in the oil industry". Accessed on 19/11/12.
2) Hopkins A. (2002), “Working paper. Safety, culture, mindfulness and safe behaviour: covering ideas”. Australian National University.

Note: Figure 1, Ishikawa diagram of human errors in the Piper Alpha accident, is attached in my account.




Kevin K. Waweru

Yesterday (Sunday 25th November 2012) the world woke up to yet another work-related accident, reported to have resulted in over 100 fatalities in a Bangladesh clothes factory. A similar accident in Bangladesh was reported in 2010, with 25 fatalities [1].

Although investigations are ongoing, the human effect has already been cited as a possible cause of the accident due to lack of proper safety standards, poor electrical workmanship and over-crowding [1].

Poor access delayed the emergency services from reaching the premises. This exacerbated the fire, and many trapped workers waiting to be rescued jumped to their deaths, as it was claimed that the factory had no fire exit [1].

Statistics available in the public domain show that the clothes manufacturing sector accounts for more than 50% of Bangladesh's £15bn export earnings with approximately 4,500 factories employing more than two million people [1].

From these and other historical statistics, an F-N curve can be drawn to show the frequency of these clothes-factory fire accidents in Bangladesh and their corresponding fatalities. Given the significant contribution of the clothes manufacturing sector to the Bangladesh economy, the use of ALARP is paramount to implementing wide-ranging safety measures in order to safeguard this important industry.
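As a sketch of how such F-N curve points could be assembled: F(N) is the annual frequency of accidents causing N or more fatalities. The incident list and observation window below are invented for illustration, not real Bangladesh statistics:

```python
# Sketch of computing F-N curve points from a historical record.
# The fatality counts and the observation window are hypothetical.

incidents = [112, 25, 9, 54, 7, 21]   # fatalities per accident (invented)
years_of_record = 12                  # invented observation window

def f_of_n(n):
    """Annual frequency of accidents with at least n fatalities."""
    return sum(1 for d in incidents if d >= n) / years_of_record

# Each (N, F(N)) pair is one point on the F-N curve,
# conventionally plotted on log-log axes.
for n in (1, 10, 50, 100):
    print(n, round(f_of_n(n), 3))
```

Plotting these points against an ALARP criterion line would then show whether the sector's risk sits in the intolerable, tolerable-if-ALARP, or broadly acceptable region.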


Kevin K. Waweru

MSc Oil & Gas Engineering

Management and organisation of asset integrity and processes have greatly improved since the lessons learnt from Piper Alpha. The Piper Alpha accident, although catastrophic, had a significant impact on health and safety procedures, and improved and streamlined how personal judgement and its limitations are handled within the work environment.

There are laws and regulations that govern the safety and integrity of industrial assets and procedures; these have improved the working habits of personnel. The problem now is how the human factor affects the way risk is judged and assessed. In the industry, risk assessment techniques were developed to understand and reduce future risks and hazardous operational acts. Excellent communication has been noted to be a major contributing factor towards achieving a successful job operation.




Leading from the question 'is it possible to estimate the influence of human activity on the integrity, safety and reliability of systems?', I would logically say yes, you can estimate the influence. However, the main issue is that it would merely be an estimate; the accuracy of such an estimate is nearly impossible to determine.

Using historical evidence and a series of calculations, percentages and so on, I have no doubt that an estimate could be made, but I doubt it would be reliable or cost effective. Humans, unlike structures and facilities, are affected and influenced by their environments, emotions and self-preservation/selfishness.

On the positive side, this does mean that humans can be taught processes and procedures, and with the sanctions attached to these, they will ensure they are followed, if only for their own self-preservation.

However, these same influences also lead to laziness, forgetfulness, complacency and other traits which, if experienced during vital situations, can mean the difference between normal operation and a disaster.

I am sure that if you asked everyone who was ever responsible for any sort of accident, they would tell you one of two things:

1. They got complacent, and out of this did (or didn't do) what caused the accident.

2. They were unaware of the circumstances of what they were doing.

Only the second of these two is manageable.

As for tracking all risky or hazardous activities, it is very difficult. One would first have to be aware of them all, which in a large facility is very hard to achieve. The tracking of these activities would also, no doubt, be done by another human, potentially introducing further error.

Fully tracking and checking all systems/operations every time they are used, before moving on to the next operation, is a very large job. A lot of time, planning and money would have to go into such an operation, and the likelihood is that regular procedure would cover most incidents, leaving only the very improbable. That improbability would no doubt mean that implementing such tracking systems is not cost effective.

Please reply if you agree, disagree or have anything to add.




Liam Slaven

Mohamed H. Metwally


Activity integrity management is a brilliant concept indeed.

Something like it has already been applied in the software industry. A "usability test" is applied to new software before it goes to market. The test involves selecting an average user of similar software, who is asked to perform some tasks with the new software while a video camera films how he behaves, interacts, and reacts with surprise when he notices the differences between the software he is familiar with and the new software.

Why don't we use the same approach in other industries to better understand human mistakes?

