Topic 11: Why are the FAR and SIR measures of fatality considered poor measures from the organisation's perspective?

oseghale lucas okohue's picture

Discuss why the FAR and SIR measures of fatality rate are, from an organisational perspective, considered poor measures for assessing fatality rates within the organisation.

Comments

oseghale lucas okohue's picture

Adding more light to the discussion topic. From the EG50S1 and EG501D courses we know that measures like the FAR, SIR and even the F-N curve have been used to assess the hazards present in a working environment. While these methods are part of the risk-measurement toolkit, they involve a reactive approach to risk assessment, because a failure event has to occur before the risk can be assessed. You will agree with me that this is a poor approach to risk/hazard assessment and mitigation. Can we discuss these limitations and propose a proactive approach that ascertains the risk involved in an operation before the failure event occurs and the harm or consequence is released?

Aaron McKenna's picture

I would like, in a way, to disagree with part of the statement that SIR and FAR are poor mitigation approaches. I 110% agree that we must try to prevent an accident before it happens, but as we have discussed we will never be able to achieve "absolute" safety, and therefore there will always be accidents, injuries and fatalities. With advances in new technology and systems come a greater number of unknowns with regard to how certain systems may run and potentially cause harm. In this sense FAR and SIR are simply two of the many tools used to try to locate the sources of hazards, which in turn helps us to mitigate them. They also help industries know which areas need specific attention in trying to improve their overall safety. It must be remembered that the SIR and FAR calculations are not the lone risk assessment.

Uchenna Onyia's picture

First I would like to point out that Aaron made some valid points about the impossibility of absolute health and safety, and the fact that as new technologies emerge, the number of unknowns increases. But I would just like to correct him on one of his points. SIR and FAR are not tools for locating the sources of hazard. The source of a hazard is located during your risk assessment process. SIR and FAR are simple statistical tools which help to predict the likelihood of an incident/accident occurring.

uchenna onyia 51232632

Mark Nicol's picture

Just like to expand on the hazards a bit.

There are a couple of options adopted by companies to evaluate the potential hazards of the task that is about to be carried out. Typically these are as follows:

1. HAZOP (hazard and operability study)

2. HAZID (hazard identification study)

HAZID:

The HAZID is usually used in the conceptual and FEED (front end engineering and design) phases of a project to identify potential issues with, for example, the design of a particular piece of equipment. If we take a pipeline as an example, the HAZID would look at things like material selection, design codes, procedures etc.

HAZOP:

The HAZOP looks at the potential hazards associated with, for example, systems, operations or installations to establish the likelihood of any deviations from the intended use.

Some background information:

The HAZOP was established by the heavy organic chemicals division of ICI [1] and later adopted by various organisations. It's quite interesting reading the history section of reference 1: the Institution of Chemical Engineers started a one-week safety course in 1974, which included what is now regarded as a HAZOP. After the Flixborough disaster, which came shortly afterwards, the course was fully booked for THREE YEARS.

References:

1. http://en.wikipedia.org/wiki/Hazard_and_operability_study

oseghale lucas okohue's picture

Thanks, Uchenna, for your post.

I strongly agree with you that they are statistical tools used to predict the likelihood of an accident occurring.

But these statistical tools use a reactive approach to predict the undesired consequences involved, because the harm has already been released, and its effects caused, before any assessment of the fatality or severity rate is made.

Can you discuss this and propose other means that take a proactive approach to assessing and mitigating risk to as low as reasonably practicable, before the harm occurs?

Thanks

oseghale lucas okohue's picture

Before we deliberate on this topic further: we have all contributed a lot, and in that light I would like us all to meditate on this statement by Lord Justice Asquith in Edwards v NCB (1949), an opinion on ALARP later reflected in the Health and Safety at Work Act 1974:

"Reasonably practicable is a narrower term than 'physically possible' and seems to me to imply that a computation must be made by the owner, in which the quantum of risk is placed on one scale and the sacrifice involved in the measure necessary for averting the risk (whether in money, time or trouble) is placed in the other and that if it is shown that there is a gross disproportion between them – risk being insignificant in relation to the sacrifice – the defendants discharge the onus on them. Moreover, this computation falls to be made by the owner at a point of time anterior to the accident."

In this light you will all agree with me that the Health and Safety at Work Act 1974, which is still in use today, implies that the risks involved in a particular working environment should be assessed before they occur, i.e. a "computation must be made by the owner". You will also note that this computation and assessment should be made before the harm is released, to as low as reasonably practicable, i.e. "this computation falls to be made by the owner at a point of time anterior to the accident".

The ongoing argument is this: FAR and SIR, as Uchenna, Aaron and others have said, are statistical tools used to predict the likelihood of an accident occurring. This I agree with. But the point is that they assess the accident only after it has occurred. Most hazard potentials are released into the environment, and a solution made available only after assessing the magnitude of the potential hazard that has already occurred: for example the Aberfan colliery waste tip disaster of 1966, the Piper Alpha disaster of 1988 and the Deepwater Horizon disaster of 2010. And so we keep learning only after a serious disaster has occurred, one we could have averted had we acted proactively. The SIR and FAR methods entail a reactive approach to a solution. Let's discuss a proactive approach, and possible methods and tools, that take account of unforeseen events and make plans to prevent their harms before they are released into the environment.

Before we all share our opinions, let's ponder these statements made after the Deepwater Horizon incident. The first progress report (May 24, 2010) concluded:

"This disaster was preventable had existing progressive guidelines and practices been followed. This catastrophic failure appears to have resulted from multiple violations of the laws of public resource development, and its proper regulatory oversight."

The second progress report (July 15, 2010) concluded:

"…these failures (to contain, control, mitigate, plan, and clean-up) appear to be deeply rooted in a multidecade history of organizational malfunction and shortsightedness. There were multiple opportunities to properly assess the likelihoods and consequences of organizational decisions (i.e., Risk Assessment and Management) that were ostensibly driven by the management's desire to "close the competitive gap" and improve bottom-line performance. Consequently, although there were multiple chances to do the right things in the right ways at the right times, management's perspective failed to recognize and accept its own fallibilities despite a record of recent accidents in the U.S. and a series of promises to change BP's safety culture."

WilliamBradford's picture

I don't understand the following part of your statement:

"You will agree with me that this is a poor risk/hazard assessment mitigation approach. Can we discuss these limitations and prefer a proactive approach of ascertaining the risk involved in an operation before the failure events occurs and the harm  or consequence released."
 
By my understanding of the topic, FAR and SIR values are merely statistical values used to represent the fatality and serious injury rates of a company or organisation in the past. As such, they are not methods of reducing accidents, but just a way by which you can monitor the organisation's record, as it were. However, I partly see your point, by which the FAR and SIR values can be used to cause an organisation to sit up and say "Oh, we should probably do something about that." This would then become the reactive approach you seem to be talking about.
 

Igwe Veronica Ifenyinwa's picture

To determine the fatal accident rate or serious injury rate, one needs data on previous occurrences. This makes the measure open to doubt, because data are collected across groups exposed to different tasks. I suggest instead that data should be collected only on groups exposed to the same task.
Secondly, the events do not happen regularly; there are only a few recorded instances, which is a very poor basis for reaching a final judgement.

Thus, ultimate care should be taken in establishing an estimate of the fatal accident rate or serious injury rate, and in using other published values.
It is also worth noting that the FAR and SIR do not reflect the risk associated with hazards that have not been released by a failure occurrence, thus giving only a partial insight into the prevalent risk. In particular, they are poor measures of major accident hazards and other infrequent events.

Aaron McKenna's picture

I would like to continue the debate about SIR and FAR. As noted by Igwe in the above post, these calculations may be too generic if they just cover the serious injuries or fatalities across a complete industry. However, if we did want to make more specific calculations based on the incidents during a specific task, and therefore a similar risk, then I propose this could be done, just as it is calculated on a more generic basis. My main issue with the rate calculations is that they do not take into account the number of incidents from which the injuries and fatalities have stemmed. Perhaps including this in the calculation somehow could give a more detailed value. For instance, a specific facility may have only one accident that led to 10 deaths, whereas another may have multiple incidents that cumulatively have caused the same number of fatalities. This would be hidden by the calculation, and for me the facility with numerous incidents needs a lot more attention due to its consistent failure.
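
Aaron's point lends itself to a quick numerical sketch. This is a hypothetical illustration, not part of the original discussion: the facility figures are invented, and the per-10^8-hours convention follows the FAR definition quoted later in this thread. Two facilities with identical FAR values can have very different incident counts.

```python
# Minimal illustrative sketch (invented figures): two facilities with the same
# fatality total and exposure produce the same FAR, hiding incident frequency.

def far(fatalities, exposure_hours):
    """Fatal Accident Rate: fatalities per 10^8 hours of exposure."""
    return fatalities / exposure_hours * 1e8

# Facility A: one incident causing 10 deaths; facility B: ten single-fatality incidents.
facilities = {
    "A": {"incidents": 1, "fatalities": 10, "hours": 5e7},
    "B": {"incidents": 10, "fatalities": 10, "hours": 5e7},
}

for name, f in facilities.items():
    print(f"Facility {name}: FAR = {far(f['fatalities'], f['hours']):.1f}, "
          f"incidents = {f['incidents']}")
# Both lines report FAR = 20.0, although B fails ten times as often.
```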

mohamed.elkiki's picture

I want to comment on Aaron's point: if a facility causes many deaths, the company will surely change it without anyone telling it to, because a facility that causes many deaths also means heavy losses for the company. However, the problem is that a company will care more for its money than for the lives of its workers, and FAR and SIR are indicators to the company that something is going wrong and needs to change for the sake of that money. So I think FAR and SIR are important for the company in looking after its business, and for others in looking after their safety; it is just a matter of how each party looks at what FAR and SIR represent. However, I agree that a probability or other defined measure should be applied to a company's facilities, because it would indicate whether the company has, for example, changed its facilities within a certain number of years, and this would represent the safety percentage of the company. Another idea is that these results should be made public to workers and employees in the company, and even to those who intend to work there. This is important for two reasons. First, companies will want these results to be good in order to attract people to work for them, so they will take care of the facility and everything else. Second, safety will be ensured.


JOHN BOSCO ALIGANYIRA's picture

I agree with what Lucas is saying. We cannot actually rely on FAR and SIR for carrying out risk assessment in an organisation, because relying on them means some fatalities and injuries have to occur first before we can ascertain the probability of a failure event happening. However, various computer/simulation models can be used to supplement the already available methods of assessment. For example, computer models can be used to assess the potential of a hazard to inflict harm or damage on persons, property or the environment by considering various parameters such as temperature and flow rate; a case in point is assessing the fluid flow through a pipeline under different conditions. All these models are run during the design stages to devise means of mitigating risks that may arise during operation.

Success in managing major hazards/risks cannot be measured by occupational health and safety statistics, but by measuring the performance of the critical systems used to control risks, to ensure they are operating as intended, and that is where computer models become relevant. Monitoring the performance of critical systems in any organisation/industry plays a key role in assessing and controlling risks, and this needs to be everyone's responsibility in an organisation.

Regards,

John Bosco Aliganyira

Msc.Oil and Gas Engineering.

Toby Stephen's picture

Very much in agreement with John's post and I'd like to highlight something he said which was accentuated in our lecture from James Munroe:

Success in managing major hazards cannot be measured by occupational health and safety statistics, but by measuring the performance of the critical systems used to control risks, to ensure that they are operating as intended.

In my eyes this is a very telling point, and I'd like to know more about how the performance of these critical systems is actually measured (can anyone help?). A more thorough, non-reactionary measure of performance standards would be to use leading indicators. For example, many companies now record near misses, and this provides insightful data on changes that can be made pre-emptively, before a serious incident or fatality occurs. For example, if 100 company cars use a stretch of road daily (i.e. while passing between two different well pads) and over a period of time the number of near misses increases, this is a leading indicator of things to come. The company can then make the required adjustments (e.g. if there is a poorly signposted sharp corner, they can easily put up a new sign) before any incidents take place.

Although this clearly isn't without flaw, it does allow some pre-emptiveness to be built into the safety process.
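
Toby's near-miss example can be sketched in a few lines. The monthly counts and the alert threshold below are invented for illustration; a real leading-indicator programme would use proper statistical tests rather than a raw least-squares slope.

```python
# Hypothetical monthly near-miss counts for one stretch of road (invented data).
monthly_near_misses = [2, 3, 3, 5, 6, 8]

# Least-squares slope of count vs. month: a sustained positive slope is the
# leading indicator that prompts action before an incident occurs.
n = len(monthly_near_misses)
x_mean = (n - 1) / 2
y_mean = sum(monthly_near_misses) / n
slope = (sum((x - x_mean) * (y - y_mean) for x, y in enumerate(monthly_near_misses))
         / sum((x - x_mean) ** 2 for x in range(n)))

if slope > 0.5:  # the threshold here is an arbitrary policy choice
    print(f"Near misses rising by about {slope:.2f}/month: intervene now")
```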

--

Toby Stephen
MSc Oil & Gas Engineering

Andrew Allan's picture

Toby,

In response to your question on how the performance of safety critical systems is measured, I'd like to discuss the role of Safety Critical Elements (SCEs) and performance standards under the UK Safety Case Regulations.

Any new installation, or significant modification to an existing installation, requires the assessment and identification of all safety critical elements. A safety critical element is defined in Regulation 2 of the Safety Case Regulations as:

"such parts of an installation and such of its plant (including computer programmes), or any part thereof:

a) the failure of which could cause or contribute substantially to; or

b) a purpose of which is to prevent or limit the effect of, a major accident." [1]

Identification of SCEs allows performance standards to be developed for each SCE. A performance standard is a specific goal, or set of goals, supported by objectives which aim to ensure that the SCE will be able to meet its goals when required. A performance standard should define the desired functionality, availability, reliability and survivability of an SCE, along with identifying any interactions with other systems. The goals and objectives within a performance standard should be SMART (Specific, Measurable, Attainable, Realistic and Time-bound).

Identifying the goals and performance criteria of an SCE during design allows verification to be undertaken during design, installation and operation, ensuring the SCE continues to meet its goal.

For instance, the deluge system on an offshore installation is classified as an SCE, as its purpose is to limit the effects of a fire and increase the likelihood that people can successfully muster or escape from the platform. In order for a deluge system to be effective it must have a reliable supply of firewater. It must activate within a certain period of time and have sufficient coverage, at a sufficient flow rate, to mitigate the effects of predicted fire events. With these performance standards defined, the deluge system can be designed accordingly, tested during commissioning to ensure it meets the design requirements, and tested periodically during the life of the installation to ensure it continues to meet the required performance standard. If it fails to, for instance because the firewater pump has degraded and is not supplying a sufficient flow, or some deluge nozzles have blocked and an area is not being covered, then these things need to be addressed in a timely manner.

The HSE requires that these performance standards are verified by an independent third party to ensure compliance.

1 - http://www.legislation.gov.uk/uksi/2005/3117/contents/made

Andy

Andrew Allan's picture

Another technique being used more widely in the oil and gas industry is condition monitoring of equipment and systems, to help predict failures and perform maintenance in advance, thus avoiding the consequences of failure.

This is particularly useful where there are a number of the same items of equipment. By monitoring characteristics such as vibration or temperature, performance can be continuously recorded and trended over time. This gives an accurate picture of the degradation of a piece of equipment, allowing the time of failure to be estimated and maintenance to be performed in advance of it.

Not only does this mitigate the likelihood of major accidents due to equipment failure, it also increases production efficiency, as maintenance can be planned rather than reactive.
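
As a toy sketch of that trending idea (all readings, the threshold and the straight-line model are invented; real condition monitoring is considerably more sophisticated):

```python
# Toy condition-monitoring sketch: extrapolate a vibration trend to estimate
# when it will cross an alarm threshold, so maintenance can be planned first.
readings = [(0, 2.0), (30, 2.4), (60, 2.9), (90, 3.5)]  # (day, vibration mm/s)
THRESHOLD = 6.0  # assumed alarm level

# Least-squares straight line through the readings.
n = len(readings)
sx = sum(d for d, _ in readings)
sy = sum(v for _, v in readings)
sxx = sum(d * d for d, _ in readings)
sxy = sum(d * v for d, v in readings)
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
intercept = (sy - slope * sx) / n

days_to_threshold = (THRESHOLD - intercept) / slope
print(f"Threshold reached around day {days_to_threshold:.0f}: schedule maintenance")
```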

Claire Snodgrass's picture

Toby,

Oil & Gas UK (2012) provide a couple of key performance indicators that are examples of "measuring of the performance of critical systems used to control risks". These are in line with HSE's findings from the KP3 Asset Integrity programme that I've discussed elsewhere.

The safety critical elements that Andy has described have to undergo a verification process by an independent competent person. By ranking the verification findings levels one to three, in line with the classifications used by the verification bodies, the verification findings can be used as a key performance indicator. Oil & Gas UK report the total number of open level 3 (the most serious) findings for all installations.

A further, more leading indicator used by Oil & Gas UK in relation to safety critical equipment is the maintenance backlog, as maintenance is vital in ensuring the equipment remains fit for purpose. Oil & Gas UK has so far reported this in terms of man-hours of backlog, but for better clarity and understanding will report it from 2012 as a percentage of the total safety-critical maintenance.

Reference: Oil and Gas UK (2012) Health and Safety Report 2012 [Online]. Available at http://www.oilandgasuk.co.uk/cmsfiles/modules/publications/pdfs/HS074.pdf [Accessed 22nd October 2012]


Adejugba Olusola's picture

A prevalent method in use in the UK oil & gas industry is the use of major hazard Key Performance Indicators (KPIs) to provide reasonable assurance of the provision and maintenance of robust major hazard management measures.

A common KPI being recorded and monitored across the industry is Hydrocarbon Releases, or Loss of Primary Containment. Developing additional asset-integrity-related KPIs was one of the responses of the UK offshore oil & gas industry to the results of KP3, and KPI 1 - Hydrocarbon Releases is a third indicator, alongside those mentioned by Toby and Claire, being monitored by the HSE across the industry. Even though this is a lagging indicator, it is still being used in a proactive way to manage the risks of fire and explosion in the oil & gas industry, especially offshore. In 2010, a Step Change in Safety initiative was agreed by the member companies to achieve a 50% reduction in the number of reportable Hydrocarbon Releases (HCRs) by the end of March 2013. An April 2012 evaluation shows there has been a 40% decrease in major and significant releases over the last two years [1].

However, beyond the above-mentioned KPIs, the oil & gas industry is moving towards the development and use of more asset integrity KPIs, comprising both leading and lagging indicators, to give both internal and external stakeholders an indication of process safety risks. The International Association of Oil & Gas Producers (OGP) has produced a recommended practice on Process Safety Key Performance Indicators [2], which is typically used in conjunction with API RP 754 - Process Safety Performance Indicators for the Refining and Petrochemical Industries [3].

References

1. http://www.oilandgasuk.co.uk/Health_Safety_Report_2012/asset_integrity_kpi.cfm

2. OGP. 2011. Process Safety - Recommended Practice on Key Performance Indicators. Report No. 456. http://www.ogp.org.uk/pubs/456.pdf

3. API. 2010. ANSI/API Recommended Practice 754: Process Safety Performance Indicators for the Refining and Petrochemical Industries

Adejugba Olusola

Mostafa Tantawi's picture

Mostafa Tantawi
Masters Of Subsea Engineering, University of Aberdeen

Well, in my opinion FAR and SIR are not enough; no risk assessment or safety analysis should be based on them alone. They are just performance indicators of how the safety system is going, but they do not measure the true safety of the system. They seem to correlate with "occupational" or "personal" safety, but do not seem to correlate with the occurrence of major accidents. A good example of this is the Macondo oil spill: both BP and Transocean had excellent records on the KPIs being collected at the time of the incident. Indeed, on the very day the accident occurred there was a celebration on the rig of its safety record according to the then available safety KPIs. At the end of the year, Transocean's top management was eligible for bonuses based on the "safety record" determined by the existing KPIs, in spite of Macondo.

So SIR and FAR are key indicators of how the past safety regime was going, but they cannot prevent future accidents.

Deinyefa S. Ebikeme's picture

It is not ideal to call FAR and SIR poor measurement tools for risk assessment simply because they do not reflect the risk associated with hazards that have not been released by a failure event. Rather, FAR and SIR are simple, quick and easy ways of getting a general overview of safety awareness, and of comparing the performance levels of various installations, companies, job functions and industries, so that regulators know whether the applicable legislation is implemented and practised; they also serve as a focal point for optimising these risks.
They also give the organisation a bigger picture of its safety performance across all functions. This can only be done for the known (historical) events. Please refer to EG50S1 lecture note 3 (Measures) and the HSE website (www.hse.gov.uk/risk) for a more detailed understanding.

Deinyefa Stephen Ebikeme IBIYF

Samuel Bamkefa's picture

I would like to agree with Toby Stephen on the need for a proactive approach to safety rather than one that merely measures results. But then, I also agree that there has to be some kind of indicator to measure safety performance; even so, more can be done (beyond FAR and SIR) to make the measurements more comprehensive.

In addition to all that has been said, I would like to point out the following flaws that I find in these indices:

1. FAR and SIR measure only fatalities and serious injuries. By using these indices we are assuming that safety is compromised only when people die or are seriously injured. Some occurrences do not necessarily result in immediate serious injury or death: there can be minor injuries or even near misses. In addition, some events give rise to potential harm rather than immediate harm to people. An example is a process that causes the gradual inhalation of dangerous substances, whose effects may not manifest in people for years. Based on this, a system that also assesses the potential for harm to people or the environment is needed to supplement these indices.

2. I take a cue from the statement credited to James Munroe and quoted by Toby Stephen, that 'Success in managing major hazards can not be measured by occupational health and safety statistics but by measuring the performance of the critical systems used to control risks to ensure that they are operating as intended.'

To shed more light on these critical systems, consider a pipe assembly that is meant to be hydrotested to a particular level before being deployed for use. A measure of the critical system would be to ensure that the assembly was indeed tested to the appropriate pressure for the normal duration, while a statistics approach would be to check whether someone has actually been killed or wounded by the system during normal operation.

There are other factors that may bring down the FAR and SIR which have nothing to do with whether the systems involved are safe.

Lastly, a simple dictionary definition of safety is 'freedom from danger', not 'escape from danger'. To me, the latter is what the SIR and FAR indices show.

Samuel Bamkefa

SON CHANGHWAN's picture


As said above, FAR and SIR by themselves do not help much with safety management. Leaving aside the proactive view, I doubt whether this data could be directly used to decide on any rectification or improvement plan for system safety after incidents. Each case's root cause analysis and lessons learned provide the feedback for management, which means the case detail can contribute to systematic safety, but not the figure itself.

At most, it may be used for the authorities' notices or administrative measures, because it is easy to compare within an industry. In the UK, the Reporting of Injuries, Diseases and Dangerous Occurrences Regulations 1995 (RIDDOR) regulate employers' obligation to report work-related accidents, e.g. deaths and major injuries [1].

Some types of near-miss reporting are also part of the regulation. I believe this inclusion is very important. Even though near-miss events can serve as an indicator of a future fatal accident, they are hard to capture because there is no loss. Furthermore, recording all near misses is to be encouraged for good management, as near misses can be raw data for system diagnosis. So root cause analysis and corrective work on near-miss cases would help fulfil proactive management.

Reference

[1] http://www.hse.gov.uk/riddor/index.htm

Regards,

SON, CHANG HWAN

Kobina Gyan Budu's picture

To shed some more light on this discussion: risk management involves analysing the risks involved in an operation and, on that basis, analysing the decision options available to enable a conclusive decision. Risk analysis involves the use of industry-recognised tools such as statistical inference, probability models, reliability theory and expert judgement to identify potential hazards in events, their probability of failure and the associated undesired consequences should the events fail [1].

The formula below defines risk:

Risk = Pf × E[C]

where Pf is the probability of event failure and E[C] is the expected undesired consequence.

In conducting the risk analysis, two of the statistical inferences are the Fatal Accident Rate (FAR) and the Serious Injury Rate (SIR), which are defined as follows (a short numerical sketch follows the definitions):

- The fatal accident rate (FAR) is a measure of the risk present from hazards that have experienced actual failure events resulting in at least one fatality. The FAR is usually quoted as the number of fatalities that occur in a defined group of people per 10^8 hours of exposure to the activity (10^8 hours of exposure corresponds to roughly the total number of hours worked by 1000 people during their working lives) [2].

- The serious injury rate (SIR) is a measure of the risk present from hazards that have experienced actual failure events with undesired human consequences in the form of serious injuries. The SIR is also quoted per 1×10^8 exposure hours, and is calculated in a very similar manner to the FAR [3].
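
To make the two definitions concrete, here is a minimal Python sketch. It assumes, per the lecture-note definitions quoted above, that both rates are expressed per 10^8 exposure hours; the workforce figures are invented.

```python
# Minimal sketch of the FAR/SIR definitions above. Assumption: both rates are
# quoted per 10^8 exposure hours, as in the lecture notes cited in this post.

def rate_per_1e8_hours(events, exposure_hours):
    """Events (fatalities or serious injuries) per 10^8 hours of exposure."""
    return events / exposure_hours * 1e8

# Hypothetical organisation: 1000 workers at 2000 hours each in the year.
exposure = 1000 * 2000            # 2e6 exposure hours
fatalities, serious_injuries = 0, 3

print("FAR:", rate_per_1e8_hours(fatalities, exposure))        # 0.0
print("SIR:", rate_per_1e8_hours(serious_injuries, exposure))  # 150.0
```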

By their definition, FAR and SIR look at the undesired consequences of failure events in a defined group of people over defined exposure hours. They are also indicators that help equip stakeholders in deciding what sort of preventive measures (barriers) to put in place to prevent similar events from failing, and, should they fail despite the preventive measures, what mitigating measures will be needed to minimise the effect of the undesired consequences. They are more or less leading indicators, reminding stakeholders that there is a possibility of a certain event failing, since such events have failed before, and giving a picture of the undesired consequence that results from the failure.

Therefore, FAR and SIR are not themselves risk measures, as some seem to suggest (maybe the heading of university lecture note 3, Safety and Risk Measures, is misleading). I also agree with those whose view is that FAR and SIR are tools that help to predict the likelihood of an event failure. It is worth noting that a predictive tool will most likely use historical or present data to pre-empt the future, and in risk assessment, history will always be the past failed events (accidents) and their undesired consequences. That does not make them reactive measures, as they use past events to inform proactive decisions.

Looking at FAR and SIR in the context of what they stand for and their role in risk assessment, I strongly believe that they are very potent tools in today's risk assessment process.

1. Reliability and Risk Management, Industry Lecture Note 1, p. 19

2. Safety and Risk Measures, University Lecture Note 3, p. 1

3. Safety and Risk Measures, University Lecture Note 3, p. 2


Babawale Onagbola's picture

I think we should differentiate, in the context of the discussion, the concepts being examined. FAR and SIR are among the best tools available to companies, governments and stakeholders generally to identify and filter accident/incident rates against injuries/fatalities by gender, country and economic activity. They are statistical tools that take historical records (of fatalities, injuries, staff strength) and inform judgement on which economic activities are most likely to result in fatalities or injuries (depending on what information the proprietor requires). However, neither tool should be used alone, as they do not take into account the specific roles or functions that result in the accidents recorded, and they do not take into account accidents that occurred but did not result in fatalities or injuries. For example, a company involved in downstream operations can in one year account for over 100 accidents and fires from product distribution, retail fuel outlets, depot storage of products and so on, yet not record any fatalities. When FAR indices are considered, for example by the government, it would seem that the downstream industry in question is relatively safe, which in reality it is not.

On the other hand, I do not think these tools should drive decisions on investment in safety mechanisms in companies. I think regulators should continuously apply pressure on companies to upgrade safety tools and mechanisms, and to ensure strict compliance with maintenance schedules, regardless of the data provided by tools like FAR and SIR. If regulators do not stay on their heels, companies like the one described in my example would relax, and safety standards would start to drop.

oseghale lucas okohue's picture

I quite see the reason in everyone's contribution. Before we go further, let's have a look at today's EG50S1 and EG501D class on statistics and probability theory. Our course lecturer, Dr Tan, examined probabilities of failure for a structure when different loads are applied to it. The probability assigned to each node reflected the fact that the event had occurred before in history. Suppose instead that the structure was a newly installed subsea production system, i.e. a Christmas tree. If we were to assess the probability of failure, we would have to break it into different small units and perform a critical failure analysis, with a probability of failure, for each subsystem. The question now is: where do we get our probability-of-failure data from, since it is a new subsea production tree, assuming that the field is a new field too?

Henry Tan's picture

A very good and challenging question! Can anyone reply?

Thomas Ighodalo's picture

"Everything we hear is an opinion not a fact"

In reply to the question raised, "where we get our probability or chances of failure data from since it's a new production subsea tree, assuming that the field is a new field too":

In analysing this question, we need to define a failure event, which is simply "a loss of function or component of the system being analysed" [1]. The next question is how failure data are generated; a typical source will be laboratory tests under the design conditions of the component, i.e. under controlled environmental/operational stresses.

Thus it is safe to state that even though it is a new production subsea tree with seemingly no existing failure data available, the entire subsea tree (system) can be broken down into its various components, whose failure data can be sourced from existing records; i.e. a valve within the subsea tree will be treated independently, and failure data on its probability of failing to open or close on demand can be sourced from existing data. The tricky part of analysing the system will be determining whether each component is statistically independent of the others, so that the overall probability of failure of the system (subsea tree) can be determined.

A simplistic alternative approach is to identify the rate-determining component within the system (subsea tree), i.e. the component with the highest probability of failure; that value is then assigned to the entire system (see the sketch below).
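
Thomas's component breakdown can be sketched in a few lines, assuming (as he notes one must check) that the components are statistically independent and that the tree fails if any one component fails, i.e. a series system. The component names and probabilities are invented.

```python
# Series-system sketch under an independence assumption (invented figures).
component_pf = {"production valve": 0.02, "choke": 0.015, "connector": 0.005}

# P(system fails) = 1 - product of component survival probabilities.
p_survive = 1.0
for pf in component_pf.values():
    p_survive *= (1.0 - pf)
print(f"System Pf = {1.0 - p_survive:.4f}")                         # 0.0395

# The 'rate-determining component' shortcut from the post above:
print(f"Worst-component value = {max(component_pf.values()):.4f}")  # 0.0200
```

Note that the worst-component shortcut understates the series result, which is why it is only a simplistic alternative.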

 

References:

[1] BASIC CONCEPTS IN SAFETY ENGINEERING AND RISK MANAGEMENT (EG50S1 & EG501D Note 2)

Leziga Bakor's picture

I think we can get our probabilities and chances-of-failure data from the manufacturers. When components are manufactured, the manufacturer carries out a series of tests on them to determine their reliability and other performance indices. From these tests, the required probability and chances-of-failure data can be estimated. If the manufacturer has no such data, we can carry out tests on the equipment ourselves, and also use simulators to simulate the different conditions the component is expected to encounter during its service life. These tests will enable us to estimate the chances of failure of the components. In reality these tests are carried out by the manufacturers, so we would not need to do them ourselves.

Trevor Strawbridge's picture

I understand the dilemma, Oseghale, but surely the probabilities of failure would be derived from a finite element analysis at the design stage. Many of these probabilities are then engineered out, e.g. axle loads, snagging loads, fishing snags etc., and hence a suitable design is applied with redundancy. The data required could include the sea/environmental conditions, the application of the structure, the weight limitations, soils data, activities in the area, the on-bottom stability, etc.

Regards

 

Trevor

Tony Morgan's picture

Basically it may be new, but its components are probably derivatives of previous designs...

RELIABILITY DATA SOURCES

There are many ways to skin a cat, as they say, and obtaining reliability data basically uses a combination of all of them across industry, depending on the product, assembly or problem, and on the circumstances, environment and potential failure modes.

Certainly in the subsea industry the propensity of larger organisations is to use in-house data collection methods to provide a closed-loop feedback system to designers, with analysis as near as practical to the closest operating conditions. For the subsea tree, for example, you will find that the main manufacturers all have FRACAS [4] or XFRACAS [5] type databases of collated operational information pertaining to their different tree types, configurations and operating conditions. Failure or performance data is then gathered over a period of time to provide the required failure rates and enable valid extrapolation of the data to give an indication of future performance. As noted, this only helps to guide based upon what has gone wrong in the past or may go wrong in the future, and so it directs the research and development parts of the organisation toward the correct areas of focus, or the most critical parts or components requiring detailed analysis for improvement. This can take the form of accelerated life testing [6], which assists in providing the keys to the unknowns, based upon FMECA work leading to simulation and qualification testing activities that seek to provide confidence in designs subject to foreseeable environmental changes prior to operation.

Oil multinationals and operating companies basically drive the improvement in, and need for, the reliability of subsea systems providers by contributing to OREDA [1], a joint industry participation (JIP) project centred on the collection and collaboration of reliability data across industry, providing handbooks [3] or statistical data [2] which smaller companies can make use of for their own design development and reliability analysis work, to assist in qualifying their products and assemblies prior to operational use, and so becoming part of the industry feedback loop.

A key failing so far has been that it is difficult for smaller companies to provide reliability data, since many times they are not advised of their product failing, and even when they are, many instances suffer from a lack of good root cause analysis due to the pressure of time and the cost of investigations.

This is the reason for these activities to be both driven and supported by the companies who make the most money out of the reliability improvements. As engineers, there is obviously always something to be learned from failures or performance reports, and the greatest benefits can be gained if there is a cultural shift from identifying the cause of failure for the purposes of blame to understanding the cause of failure to prevent its re-occurrence in future. The cost of this must be shared between the parties, as both have something to gain; this leads to the need to develop the supply chain relationships that allow it to happen.

From my recent few years' experience with BP as a client, I believe they must be commended for their early adoption and pioneering of this process (long ahead of Macondo!), and as latest promoters and developers of this holistic approach to risk and reliability throughout the supply chain, which is now an API 17N recommended practice and ISO standard, as noted below [7].

 

[1] SUBSEA ENGINEERING - OREDA - http://www.oreda.com/

[2] Industry studies - http://oilproduction.net/cms/files/319AA.pdf

[3] Electronic Parts - http://www.reliabilityeducation.com/intro_mil217.html

[4] FRACAS - http://www.weibull.com/hotwire/issue122/relbasics122.htm

[5] XFRACAS -  http://www.reliasoft.com/xfracas/index.htm

[6] Accelerated Life Testing - http://www.weibull.com/basics/accelerated.htm

[7] LATEST RELIABILITY IN SUBSEA - http://www.astrimar.com/news.html

tony morgan

oseghale lucas okohue's picture

Karin, I strongly agree with you. Quoting from your post: "Would be easier to obtain a system P(F) if the P(F)s of single units were statistically independent, rather than determining the system P(F) considering conditional P(F)s of units (a lot more combinations with a lot of unknowns, especially if there are 'newer' units)". Maybe it is possible to analyse the different system components using both statistical treatments, dependence of the system and independence of the system components' chances of failure, so that we could be sure of the estimated probable chances of the system failing either independently or dependently. Because, just as Karin said, it looks quite theoretical and mathematical to me too. Can someone in the house throw more light on the possibilities of analysing the system if the components depend on each other for failure to occur, especially if we don't have data available to infer from?

Kobina Gyan Budu's picture

Everyone, the discussion is getting more interesting day by day.
It is difficult to agree with any claim that FAR, AFR, SIR, IR, PLL and F-N curves are poor measures and reactive from an industrial perspective. If they were, the industry would not be using them. These are predictive tools that inform the likelihood of certain event failures (accidents) in the future, giving management the opportunity to institute measures to prevent their occurrence as well as mitigate the impact of their undesired consequences should they fail.

Predictive analytics is an area of statistical analysis that deals with extracting information from data and using it to predict future trends and behaviour patterns. The core of predictive analytics relies on capturing relationships between explanatory variables and the predicted variables from past occurrences, and exploiting them to predict future outcomes (http://en.wikipedia.org/wiki/Predictive_analytics [copied on 16th October, 2012]).

As predictive tools, they can only rely on historical data to function effectively in their own right. By their nature, they are not reactive, because their effect is not meant to be felt on past accidents but on the future.

It is however important to note that these tools have limitations, via data inhomogeneity and the fact that they do not consider risks from past hazards that did not release a failure event.

Monday Michael's picture

For a better understanding of this topic, I would remind us all of the definitions of these terms:

 

Fatal Accident Rate (FAR) is a measure of the risk present from hazards that have experienced actual failure events resulting in at least one fatality, per 10^8 exposure hours, while Serious Injury Rate (SIR) is a measure of the risk present from hazards that have experienced actual failure events resulting in serious injury, per 10^6 exposure hours [1].

 

Evidently, both FAR and SIR are measures based on the occurrence of actual failure events that have resulted in either fatalities or serious injuries; this implies that the underlying safety principle is reactive (control) in nature, as opposed to proactive (preventive). By extension, this also implies that such organisations will consider their workplaces to be inherently safe until a serious injury or fatality occurs and is reported.

It therefore goes without saying that the vast majority of near misses, which are often not reported by workers for fear of being victimised, will not be reflected in the SIR and FAR. This is in contravention of the Reporting of Injuries, Diseases and Dangerous Occurrences Regulations (RIDDOR) 1995 [2]. Some of these near misses are potential accidents waiting to happen, requiring only a failure event to lead to major undesirable consequences.

 

Proactive hazard identification techniques such as the Hazard and Operability (HAZOP) study and What-If analysis should be encouraged instead, and they should be used in conjunction with the FAR and SIR to increase safety in the workplace. A HAZOP is a structured analysis of a system, process or operation carried out by a multi-disciplinary team; it involves the stage-by-stage or line-by-line examination of a firm design for the process or operation using a set of guide words [3]. The HAZOP process will discover not only hazards, and consequently risk, in the process/system but also operability problems [4].

    

REFERENCES

[1] Tan, H (2012); Lesson Notes on Fundamental Safety Engineering and Risk Management Concepts

[2] http://www.hse.gov.uk/riddor/what-must-i-report.htm

[3] Crawley, F; Tyler, B; Hazard Identification Methods, Institution of Chemical Engineers, UK, 2003,p.60

[4] http://www.hse.gov.uk/research/crr_pdf/1991/crr91026.pdf

Kobina Gyan Budu's picture

Talking about getting data for estimating the likelihood of failures in new units for which historical data does not exist: the industry has ways of establishing these figures. For example, in manufacturing electric bulbs, there is a series of laboratory tests that are conducted. The process can include testing X bulbs out of every Y produced and recording the failures and successes. After testing a large number of them, statistical analysis, including sigma methods, is used to establish the likelihood of failure, and the figures are assigned.

Some of these analyses lead to reliability/consistency/accuracy values assigned to equipment, normally quoted as "±" (plus or minus) in the material data information sheets.
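
Kobina's bulb-testing example maps onto a simple binomial estimate. The sample numbers below are invented, and a real test programme would use a proper confidence-interval method; the normal approximation here is only a sketch of where such "±" figures come from.

```python
import math

# Hypothetical test campaign: 500 units tested, 7 failed.
tested, failed = 500, 7

p_hat = failed / tested                       # point estimate of P(failure)
se = math.sqrt(p_hat * (1 - p_hat) / tested)  # standard error (normal approx.)

print(f"Estimated Pf = {p_hat:.3f} +/- {1.96 * se:.3f} (rough 95% interval)")
```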

Kobina Gyan Budu's picture

Monday, while I agree with you on the HAZOP study and What-If analysis, I still disagree that FAR and SIR are reactive. Remember, these tools (FAR, AFR, SIR etcetera) are not meant to respond immediately to past fatalities or injuries, so they are not reacting to past event failures. They are only using the lessons from past event failures to look into the future. However, it is prudent that the FAR, SIR, AFR etcetera be used in conjunction with other tools such as HAZOP, What-If etcetera to get a robust risk management system in place.

The HAZOP is normally incorporated in the organisation's Safety Case and reviewed from time to time, as working conditions are dynamic. If, for example, at the time of reviewing the HAZOP there has been an event failure that resulted in a fatality or an injury or both, will you not consider it in the revised HAZOP? If you do, will you now say that the "fantastic proactive tool HAZOP" has suddenly become reactive?

oseghale lucas okohue's picture

Hi Katrin, Trevor and Kobina. At this point I am taking a deep breath and trying to evaluate your posts critically before I comment on each. Let's first review the EG50S1 and EG501D lecture held on the 16th of October. In this lecture we used Bernoulli probability theory to forecast a pump's likelihood of failure, or its reliability, over a given period of time, by painstakingly assessing the failure of each component and then applying the probability rules to assess the system's chances of failure.

Now, suppose we argue, as Kobina posted, that the initial probability of failure or reliability of a given system can be analysed during its design or inception stage, and that initial data (sea/environmental conditions, the application of the structure, the weight limitations, soils data, activities in the area, the on-bottom stability, etc.) might be used to judge and assess the initial likelihood of failure or reliability of the system. I agree to disagree: this is a prescriptive approach to assessing the initial probability of failure of a new system, as it is a general approach, and different systems may behave differently in different working environments depending on what they are subjected to. For example, suppose the probability of a valve in a plant failing within a year was estimated by design to be 0.06, and the valve is rated by the manufacturer for 4 years of use. If this valve was bought by two companies, A and B, it might fail in company A within 2 months of its engineered design life, and perhaps never fail in company B even after 4 years of usage. How do you justify this, Kobina? We need further clarification from you, please.

Katrin suggested extensive testing of the system specimen during the design and manufacture stages, looking at different possible modes of failure, and also using finite element analysis, as suggested by Trevor. I see this as a more goal-oriented approach, and I clearly see the reasons for it. Trevor, can you expand on this finite element analysis for the house, please? I am suggesting that, after severe testing of the system specimen during the design and manufacture stages, the manufacturer should apply a safety factor in order to cope with unseen tendencies to failure. Friends, what do you think about this, and how can the Bernoulli principle help us to forecast the future once this is done? Can we discuss this?
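
Since the valve example keeps coming up, here is a hedged sketch of the Bernoulli-trial view mentioned above. Each year of service is treated as an independent trial with the annual failure probability of 0.06 taken from this post; the independence assumption is mine, for illustration.

```python
# Bernoulli-trial sketch: treat each year of service as an independent trial
# with annual failure probability p (0.06, from the example in this post).
p_annual = 0.06

for years in (1, 2, 4):
    survival = (1 - p_annual) ** years
    print(f"P(no failure within {years} year(s)) = {survival:.3f}")
# 0.940, 0.884, 0.781: even identical valves can diverge in practice, since the
# probability describes a population of valves, not any single one.
```

On this view, company A's early failure and company B's trouble-free four years are both consistent with the same design figure; the probability does not promise any individual outcome.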

mohamed.elkiki's picture

Let me try, Lucas, to answer those two questions that you asked Kobina and Trevor:

1. "For example maybe the probability of a valve in a plant to fail was estimated to be 0.06 within a year by design. If this valve was bought by two companies A & B and it is to be used for 4 years by the manufacturer. Now during 2 months' usage of its engineered design life it failed in company A, and maybe it never failed with company B even after 4 years of usage."

About your example, there are certain things I don't understand. First, how is the valve's probability of failing within a year 0.06 when the two companies bought it to work for 4 years? The probability of the valve failing at some point over those four years will be much higher, and we never said that when they received it the probability of failure was zero, so a failure in company A after two months is entirely possible. This is what engineers do: they aim for the lowest risk probability, but they can never be 100% sure that a component will work. Also, it depends on where you bought the valve, and no company takes a valve and uses it directly; they keep checking it before use.

2. About the factor of safety: there is a safety factor in all petroleum equations. For example, for a jack-up rig, when setting the weight of each leg, a safety factor is applied, so that if anything happens and the rig starts to put more load on one leg than the others, it will not affect the whole rig, because the load will still be within the range of the safety factor. However, as you said, every system is different from every other, and the same goes for the environment around it. That is why engineers shouldn't just plug numbers into software and equations; they have to understand the theory behind the equations so they can adapt them to the system they are working with. For example, no company uses just one system, and safety factors differ from company to company depending on where the company works and the system involved.

I hope my point was clear, and I really want to know others' opinions; it's a very critical issue.

mohamed.elkiki's picture

A very critical question that Lucas also raised is: what if we have a new system in a new area and we don't know its accurate probability? I think we have two options. Either we make an assumption, which is acceptable because we are not the only ones who check the result; many people revise it, and if all agree that we took a reasonable assumption, it's OK and we can use the material. The other solution is to relate the material to the closest one to its specification. For example, when a company wants to buy new land to explore, it starts by searching in areas around which other companies have discovered hydrocarbons. Most companies compile statistics and decide which is the best location, depending also on the probability of finding hydrocarbons. Companies know quite well that they are taking a risk, and they can also get money from a bank to buy the new concession, but if we think of it in business terms, we know that projects are measured by risk probability as well as profit. Therefore, companies can try new tools for the first time and take the risk of using them, but experience also counts: companies do this only with an engineer who has worked with and tested the tool before using it in a real field, and who, based on his experience, can estimate the risk probability of using the tool even the first time. The petroleum industry depends entirely on uncertainty; that's why big companies like BP, Shell and others mostly hire experienced engineers who not only have knowledge but can also make reasonable estimates for everything.

Ahmed_Abdelkhalek's picture

Finite element or pushover analyses can be used to assess the probability of failure of a component (or a whole structure) when subjected to a certain event. To shed some light on how this is done, I will give an example for newly designed offshore structures.

Offshore structures are typically designed to a prescribed maximum probability of failure under a certain type of loading (for instance, wave loading). This prescribed probability is determined by the operator, considering the facility's importance to the business and the consequences of its failure for both the environment and personnel.

To clarify things better: really important structures are designed so that their probability of failure against wave loading is 10^-4; this means that these structures shall be designed to withstand wave loading from a storm that can happen once every 10,000 years (the return period).

To verify that newly designed structures do meet the prescribed probability of failure, pushover analyses are performed, in which the loading of the 10,000-year storm is applied to the structure and stepped up by multiplication until the structure fails. A structure that can only withstand the loading of the 10,000-year storm is said to have a probability of failure of 10^-4. The probability of failure of a structure that fails at a higher loading is derived by determining the return period of the storm that could apply such loading to the structure. For example, if the structure fails at a load level that can only be caused by the 50,000-year return period storm, then the probability of failure of that structure under wave loading is 1/50,000 = 2×10^-5.
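
The return-period arithmetic in this post can be checked in two lines; the relation Pf = 1/(return period) is exactly what the examples above use.

```python
# A short check of the return-period arithmetic in the post above.
def annual_pf(return_period_years):
    """Annual probability of exceedance for a storm with the given return period."""
    return 1.0 / return_period_years

print(annual_pf(10_000))  # 0.0001, i.e. the 10^-4 design target quoted above
print(annual_pf(50_000))  # 2e-05, the stronger-structure example above
```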

Babawale Onagbola's picture

I think this is a very interesting topic. I want to attempt to shed some more light on finite element analysis. Finite element analysis, in the context in which it is being discussed on this page (as regards failure of valves), is concerned with modelling materials at the design stage, or after manufacture, so that these materials can be put through different stress profiles and the results analysed. It is a cumbersome and expensive yet effective method, usually employed at the design stage so as to inform adjustments and tweaks to the design that enable manufacturers to meet the client's design requirements. The analysis is not limited to the design stage; it is also used when modifying existing materials or components that have suffered structural failures, or that are required to operate in new and different conditions or under new stress/load profiles. Basically, the method involves identifying the material involved and the different loads and stresses expected to affect it, then geometrically dividing the material into elements connected at nodes, which together form what is called a mesh. The nodes are then assigned different stress levels, and computer programmes calculate and report the expected failure, or otherwise, of the material. From the different stages of finite element analysis, manufacturers and material designers can both design components to the required specifications and provide insight into the probability of failure of materials/components UNDER DIFFERENT LOADING CONDITIONS.

Babawale Onagbola's picture

Now to answer Lucas's question about the failure of a valve in one company and the reliability of the same valve in another. While I believe Lucas did not give adequate information about the circumstances under which the two valves were operating, here is my opinion. OEMs, apart from designing and manufacturing components to the client's design specifications, also advise the client on the operating conditions of components. They are able to do this based on the results of the finite element analysis carried out at the design stage. OEMs possess a repository of the different failure modes of components under different stress profiles and loadings, and can therefore predict with good accuracy the reliability of a component and its optimum operating conditions. Now to the valves in question. You would agree with me that two identical valves with the same failure probabilities would behave differently if one is used in a high-pressure subsea system with crude, gas and a sand-water mixture flowing through it, and the other in a process plant with hazardous chemical mixtures that exponentially speed up corrosion. Therefore, while modelling techniques like finite element analysis might not be 100% accurate all the time, they give the best insight into the behaviour of materials in the "live environment", largely based on analysing the effect of different loads, BUT this only holds if the materials in question are used within the operating conditions they are intended for.

oseghale lucas okohue's picture

Thank you, Wale... I gained a lot from that.

Hi everyone! Can we discuss classical reliability and what it entails? Examples and solutions are welcome as well. Let us use this medium to learn more about reliability calculations.

thanks

oseghale lucas okohue's picture

Analysing the mathematical concept of the reliability of a material will go a long way in assessing its tendency to fail.

Since R(t) = 1 - F(t),

where R(t) = reliability of the material at time t (the probability that it survives beyond t),

and F(t) = failure function of the material at time t (the cumulative probability that it has failed by t).

Can we analyse this? If possible, do some mathematics and show graphs to support our arguments. We could do some proofs as well. Let's learn and have some fun.
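Taking up Lucas's invitation, here is a minimal sketch of R(t) = 1 - F(t), assuming an exponential time-to-failure model; the failure rate below is an invented number used purely for illustration.

```python
# Minimal sketch of R(t) = 1 - F(t) for an exponential model,
# where F(t) = 1 - exp(-lambda * t).
import math

failure_rate = 2e-4  # failures per hour (assumed)

def failure_cdf(t: float) -> float:
    """F(t): probability the component has failed by time t."""
    return 1.0 - math.exp(-failure_rate * t)

def reliability(t: float) -> float:
    """R(t) = 1 - F(t): probability the component survives past time t."""
    return 1.0 - failure_cdf(t)

for hours in (100, 1_000, 5_000, 10_000):
    print(f"R({hours:>6} h) = {reliability(hours):.4f}")
```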

Hanifah N. Lubega's picture

Wow, I like the response from Babawale. I have enjoyed the discussion on this thread; very impressive. Some of you seemed to be FOR the SIR and FAR while others were AGAINST the methodology. Lucas, your idea of tackling the reliability concept is not bad, but before I get to that, I feel the need to give my layman's view on this topic. We seem to have concentrated so much on system failure and reliability in this thread rather than on fatality and injury rates. I mean, a system could fail but not necessarily lead to fatalities or injuries.

First of all, the FAR and SIR use a constant (10^8 hours of exposure) that seems to have been derived from a particular scenario or assumption. How can I prove the accuracy of this method if the values were derived assuming 1,000 workers exposed to an activity when my company has fewer, unless we assume that the probability is the same for all people regardless of the type of work or where they work within the system? Or, better still, should the formula change if my employees work less or more than 8 hours a day?
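As an aside on where that constant may come from: the 10^8-hour exposure base is commonly rationalised as roughly the total working lifetime of 1,000 people. The exact breakdown below is a textbook convention, not something fixed by the definition itself:

10^8 hours = 1,000 workers x 50 working years x 2,000 hours per year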

I agree with one of the respondents who said that they are performance measures, because they only use data recorded over that particular period of time. But the main question on my mind is this: since they are risk measures, how can they be used to predict future scenarios? And if, for the past three years, a company has not recorded any accidents or serious injuries, does that have a direct implication for the safety or reliability performance of the system?

Reference:

Safety and Risk Measures lecture notes

Hanifah

Oluwatosin A. Oyebade's picture

I'd like you to note the following points in response to your questions:

Fatal Accident Rate (FAR) is a method for measuring the quantity of risk present from hazards that have resulted in at least one fatality. As you clearly stated, it is expressed as the number of fatalities that occur within a specified group of people per 10^8 hours of exposure to the activity.

Mathematically, we can therefore say that FAR = (total fatalities / total man-hours) x 10^8.

SIR, on the other hand, is a method for measuring the inherent risk from hazards that have experienced actual failure events whose consequences resulted in serious injuries. It is likewise expressed as the number of serious injuries within a specified group of people per 10^8 hours of exposure.

Mathematically, SIR = (total serious injuries / total man-hours) x 10^8.

From these two mathematical expressions, it is possible to compute the FAR and SIR for a particular year, or to sum results over several years to derive both measures over longer periods of time.
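As a quick illustration of the two formulas (with made-up exposure figures, not real company data):

```python
# Sketch of the FAR and SIR formulas given above, with assumed numbers.
def far(total_fatalities: int, total_hours_worked: float) -> float:
    """Fatal Accident Rate per 10^8 hours of exposure."""
    return total_fatalities / total_hours_worked * 1e8

def sir(total_serious_injuries: int, total_hours_worked: float) -> float:
    """Serious Injury Rate per 10^8 hours of exposure."""
    return total_serious_injuries / total_hours_worked * 1e8

# Assumed example: 2,000 workers, 2,000 hours each per year, over 5 years.
hours = 2_000 * 2_000 * 5
print(f"FAR = {far(2, hours):.1f}")   # 2 fatalities over the period -> 10.0
print(f"SIR = {sir(15, hours):.1f}")  # 15 serious injuries -> 75.0
```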

In conclusion, my view is that the criticisms of FAR and SIR come down to three major reasons, even though people tend to overlook their usefulness in recording changes in accident and fatality trends:

- They do not reflect risks associated with hazards that have not yet been triggered by failure events (i.e. fatalities or serious accidents).

- Heterogeneity of data, which is collected across diverse groups exposed to varying tasks.

- Infrequency of event occurrence: serious accidents and fatalities might not occur, or be recorded, often enough for a meaningful trend to be deduced.

Oluwatosin Oyebade

oseghale lucas okohue's picture

Mohammed, Katrin, Ahmed, Babawale... thank you so much for your various contributions on finite element methods for assessing the probability of failure. We now understand that proactive identification and assessment of failure rates is an important activity that cuts across all parts of an organisation and supports a goal-oriented approach to management. From our discussion so far on this blog, and from the lecture series taught by Dr Tan on RELIABILITY THEORY, prior estimates of failure data can be obtained from a company's performance history, from government and commercial failure-rate databases, or from testing the material itself to failure. Of these, testing the material to failure and extrapolating its probability of failure over its lifetime through finite element methods is the most accurate, but also expensive and time-consuming.

Let's go technical: can we discuss the Mean Time To Failure (MTTF) of a system, say a new wellhead, explaining clearly how it can be computed? Suggestions are welcome in this discussion.
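To get the MTTF discussion started, here is a hedged sketch of two common estimates: the sample mean of observed times to failure, and the integral of R(t) under a fitted exponential model. The "wellhead" failure times below are invented for illustration only.

```python
# Hedged sketch of two common ways to estimate MTTF; all data assumed.
import numpy as np

# (1) From observed times to failure: the empirical MTTF is their mean.
times_to_failure_hours = np.array([41_000.0, 55_500.0, 38_200.0, 61_000.0, 47_300.0])
mttf_empirical = times_to_failure_hours.mean()

# (2) From a fitted model: MTTF = integral of R(t) dt from 0 to infinity.
# For an exponential model R(t) = exp(-lambda * t) this equals 1/lambda;
# we check that numerically with a trapezoidal sum over a long horizon.
lam = 1.0 / mttf_empirical            # crude exponential fit (assumed)
t = np.linspace(0.0, 1e6, 200_001)    # hours; horizon >> 1/lambda
r = np.exp(-lam * t)
dt = t[1] - t[0]
mttf_numeric = ((r[:-1] + r[1:]) / 2.0).sum() * dt

print(f"Empirical MTTF:      {mttf_empirical:,.0f} h")
print(f"Integral of R(t) dt: {mttf_numeric:,.0f} h  (approx. 1/lambda)")
```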

Etienne Gunter's picture

There have been quite a few posts regarding failure rates and the usefulness (or not) of FAR & SIR as predictive tools, and lately we have drifted towards MTTF. I think some of the participants are misinterpreting the function of FAR & SIR.


My view is that FAR & SIR are mere representations of a company's (past) safety record. They are performance evaluation tools, not predictive tools. A fatality or injury can result from many causes – component/system failure, operator intervention, procedure, etc. Each incident will (and should) most definitely be investigated and the root cause determined, with recommendations made to remedy it.


FAR & SIR are not directly related to the MTTF of a system. MTTF is applicable when the FMECA or safety analysis of a system is done during the design phase, prior to implementation.


The FAR and SIR should be the START of the evaluation of where and how you want to improve your safety record. They are also used to determine whether it is financially worthwhile to invest in improvements (we had a tutorial question on exactly that). The disadvantage of FAR & SIR may be that they are too generic and do not address individual incidents. Which brings me to my question:

Is there something like an acceptable FAR or SIR? When do you stop making improvements, given the cost implications? ALARP?

Mark Haley's picture

We are all aware that SIR and FAR are reactive statistics, and whilst they have their uses, they are just the tip of the iceberg!
I agree with Etienne that they are just a single tool (and not a very good one at that) for measuring past safety records. The only place SIR and FAR can be truly representative is where enormous amounts of data are available, i.e. at the scale of an industry or a very large corporation, and even then the information is only reactive.
The picture above is something I use in the aviation industry to encourage people to report even the most minor of incidents. With open and honest reporting, and the ability to report incidents confidentially, you can start to uncover some of the ice below the water and become more proactive in building safety measures and procedures.

As mentioned by Mark Nicol earlier, you need to start with a HAZOP and HAZID and then monitor your systems through reports of minor incidents or suggestions from workers on how to improve the current system. The challenge lies in persuading workers that they need to report these incidents and that a 'Just Culture' exists within the organisation.

If a good culture of open reporting of all incidents is in place, it is then possible to compare minor incidents against SIR/FAR. With this information, a more representative measure of safety performance can be produced.

Mark Haley

William J. Wilson's picture

Hi Mark, nice picture – I think I might have seen that one before!

I fully agree with you about the need for an open and honest "just culture" to strengthen safety systems. Added to that, increasing the number of minor accidents and near misses recorded would give a greater overview of an organisation's safety performance. However, if people believe that FAR/SIR are poor measures at the organisational level, then adding similar measures such as an MIR (Minor Accident Rate) or NMR (Near Miss Rate) would only add to the many KPIs already floating about; would these really add value?

Again, additional recording of minor injuries or near misses would only provide performance indicators over a fixed period in the past. My key point here is that all the information gathered is still "old" data, yet all these KPIs are used to make assumptions about future safety needs. Measures of safety performance are always retrospective, but they can be a useful tool for assessing future risk. I believe organisations should collect this data, but not present safety KPIs as successes or failures of departments; rather, they should use the data much like discount factor tables or other reference tables when carrying out risk assessments.

William Wilson
MSc Subsea Engineering (DL)

Ber_Mar's picture

I would like to state that although these measurements are considered somewhat poor by the experts, they are actually very useful when speaking to an audience outside the HSE environment, including shop-floor workers. The concept of an average working lifetime is in fact the beauty of the measure. As for the AFR, I believe it is useful only when zero or very low numbers are achieved and achievable; it keeps a clear goal that safety and human life come before everything else, and with these indicators it is not possible to mask the numbers: it is either 0, 1, 2 or many. By not normalising by the number of workers, one might say "ah, but they have too many workers"; on the other hand, a large workforce means the organisation should be able to look after them all, and while probabilistically a bigger workforce could mean bigger numbers, it should also mean more attention and more HSE resources.

Yaw Akyampon Boakye-Ansah's picture

According to EG50S1 and EG501D, FAR and SIR are tools that give an account of a company's past safety record. FAR, according to this source, is a measure of the risk present from hazards that have experienced actual failure events resulting in at least one fatality, expressed within a defined group of people per 10^8 working hours. Likewise, SIR measures the risk from hazards that have had serious failure events and undesired human consequences by way of serious injuries.

Inasmuch as these measures accurately record past events, they only measure events that have already occurred. As reflected in their shortcomings, they only give an indication of measured failure events; if there is a major risk whose failure event has not yet occurred, these techniques will not reflect it.

Accordingly, these are retrospective tools and cannot accurately predict the future. In the same vein, because they do not take into account the reason for an event, listing only its occurrence, they cannot be described as measuring the circumstances leading to those events.

Thus, although these tools accurately record past failure events, they do not really give a good account of possible future failure events and their likelihood. They also do not help to address the reason for those events or the circumstances leading to them, and so do not help distinguish human failure from machine/device failure.

Oluwatadegbe Adesunloye Oyolola's picture

(NOTE: SIF means SERIOUS INJURIES AND FATALITIES)

Most troubling, of course, is that relying on an over-simplistic view of injury causation limits an organisation's ability to distinguish the exposures that represent the greatest threat to employee life.

Excellence in safety is directly related to how effectively the organisation controls exposure to hazards in the working interface, that is, the configuration that defines the interaction of the worker with technology. But exposures come in a variety of forms: a condition, decision, behaviour, activity, cultural standard, process, or system (or the lack thereof). Exposures also vary in the level of risk they pose.

The relative infrequency of fatalities and other serious events can give them an appearance of randomness, of being beyond any reasonable degree of anticipation and prevention. The lessons of prominent incidents, such as the Space Shuttle Columbia, Occidental's Piper Alpha, Esso Longford and BP Texas City, as well as lessons from single-fatality events, tell us otherwise. The vast majority of these events result from high-energy-potential exposures that are identifiable, measurable, and manageable.

My work studying these and other events in organisations around the world points to several significant conclusions:

1. Not all minor injuries are the same: a sub-set of low-severity injuries are precursors to serious injuries and fatalities.

2. Injuries of differing severity are associated with differing situations and types of activity.

3. Reducing serious injuries requires a different strategy than reducing minor injuries.

References:

www.fars.nhtsa.dot.gov/

www.ntnu.no/ross/srt/slides/basic-risk.pdf

www.maib.gov.uk/publications/safety_studies.cfm

www.hse.gov.uk/statistics/european/european-comparisons.pdf

Oluwatadegbe A.O

MSc Oil and Gas Engineering

Soseleye F. Ideriah's picture

Much has been said and this blog makes a very good read. I would like to contribute by adding that even though FAR and SIR represent past events, their contribution to overall risk management cannot be discounted, and they must be used in combination with other, proactive risk management techniques and measures.

If we did not have the FAR and SIR, how would we analyse the safety performance of different companies and benchmark them against industry standards? How would we measure the performance of new mitigating strategies and identify areas for improvement? In agreement with earlier posts, it is true that safety models can be analysed in the design phase and the safest options identified without fatalities or serious injury. Real-life performance, however, can only be measured by observing these parameters (FAR and SIR) over an extended period of time.

It is important to adopt a proactive approach to risk management (as Lucas has identified), and this should be the goal of any organisation. We should, however, ALWAYS remember that even though perfection is the target, it is hardly ever achieved. Organisations must learn from their mistakes! This results in the need to respond to past events.

c.ejimuda's picture

In my opinion, FAR and SIR are used by organisations to determine their key performance indicators (KPIs). They cannot be used as proactive tools for measuring the risks an organisation is exposed to.

Some key challenges I discovered about these means of measuring fatality are as follows:

Organisational size: big organisations are prone to more fatalities, especially if the nature of their operations involves a lot of risk. For example, Transocean Inc. is one of the biggest drilling contractors in the world; if an incident were to occur in the offshore industry, the probability that it would be on a Transocean installation is high, and this will always affect their FAR and SIR. Even with good work practices and competent employees, there is a higher chance of a fatality occurring somewhere in their fleet simply because of the company's size. In summary, the bigger the company and its workforce, the more likely it is that an exposure leading to a fatality will occur.

Hours of exposure: FAR and SIR are centred on the number of hours an individual is exposed to risk, but different companies have different shift patterns and rotations. For example, some companies allocate 8 hours per shift while others allocate 12. As a result, FAR and SIR are not a straightforward way to compare companies and their performance, since their lengths of exposure and work rotation differ, as the sketch below illustrates.
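A small sketch of that point (all numbers assumed): even with an identical FAR, different shift patterns imply different annual risk per worker.

```python
# Two companies with the same FAR but different shift patterns impose
# different expected annual fatality risk on each worker (assumed figures).
def annual_risk_per_worker(far_value: float, hours_per_worker_year: float) -> float:
    """Expected fatalities per worker per year implied by a given FAR."""
    return far_value / 1e8 * hours_per_worker_year

far_value = 5.0  # same FAR for both companies (assumed)
for shift_hours, days in ((8, 220), (12, 180)):
    hours_year = shift_hours * days
    risk = annual_risk_per_worker(far_value, hours_year)
    print(f"{shift_hours}h shifts, {days} days/year: {risk:.2e} fatalities/worker/year")
```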

In my opinion, the FAR and SIR measurements of an organisation are not enough to show how it performs in terms of handling or managing risk. However, with extensive risk assessment, best work practices, training, monitoring, periodic checks and maintenance, smooth and accident/incident-free operations can be achieved.

Reference:

Oil and Gas UK. (2012) Health and Safety Report 2012 [Online]. Available at: http://www.oilandgasuk.co.uk/cmsfiles/modules/publications/pdfs/HS074.pdf [Accessed 20 November 2012]

Chukwumaijem M Ejimuda

MSC Safety and Reliability Engineering.

Leziga Bakor's picture

FAR and SIR may be considered poor measures of fatalities from an organisational point of view when you consider the year in which fatalities occur. For example, an organisation might have a good safety record for several years without any serious injury or fatality, and then in one year suffer a major accident resulting in many fatalities. When the FAR and SIR are then calculated over the whole period, the fatalities are spread across the years in which there were none.
FAR and SIR also do not take into account all the injuries that occur; they use only fatalities and serious injuries. In reality, a hazard with the potential to cause serious harm may result in an accident that happens not to lead to serious injury or death, and SIR and FAR will not capture it. The organisation needs to know all the risks from the hazards within its environment, whether or not they have yet led to serious injury or death.
SIR and FAR are more reactive than proactive: they report only past accidents and provide no insight into the possibility of future hazardous events that the organisation could work towards preventing.
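The smoothing effect described in the first paragraph above is easy to show numerically (invented data): a single bad year looks far less severe once pooled into a multi-year FAR.

```python
# One serious accident in year 4 of a five-year record (assumed data).
yearly_fatalities = [0, 0, 0, 6, 0]
yearly_hours = [4e6] * 5  # assumed constant annual exposure

far_year4 = yearly_fatalities[3] / yearly_hours[3] * 1e8
far_pooled = sum(yearly_fatalities) / sum(yearly_hours) * 1e8

print(f"FAR in the accident year: {far_year4:.0f}")   # 150
print(f"FAR pooled over 5 years:  {far_pooled:.0f}")  # 30
```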

Andrew Strachan's picture

As an organisation, it is useful to compare yourself to other companies of similar size and industry, in financial terms but also in terms of health and safety. In the energy sector there are numerous methods of generating energy (solar, wind, etc.), and a common denominator between all of these widely varying companies is power. A way of capturing the scale of an energy operation in the statistical analysis is to use fatalities per GWh (gigawatt-hour) rather than fatalities per hour worked.

This brings a new slant to the cost-benefit approach, since a company with a low number of workers but high output would be allowed higher acceptable risks.
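A quick sketch of how the two normalisations can rank the same companies differently (all figures assumed for illustration):

```python
# The ranking of two energy companies can flip depending on whether
# fatalities are normalised by hours worked or by energy delivered.
companies = {
    # name: (fatalities, hours worked, GWh generated) - all assumed
    "LeanHighOutput": (2, 1e7, 50_000),
    "LargeWorkforce": (2, 5e7, 10_000),
}

for name, (deaths, hours, gwh) in companies.items():
    print(f"{name}: FAR = {deaths / hours * 1e8:5.1f}, "
          f"fatalities/GWh = {deaths / gwh:.2e}")
```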
