Noise Induced Hearing Loss

Mr Andrew Parker, Consultant ENT Surgeon, Peak Medical Practice Ltd, Clinical Sciences Block Huntbridge Hall, Matlock Green, DE4 3BX - aparker@medicolegal2000.co.uk

Without doubt the most preventable occupational health disorder is noise induced hearing loss (NIHL).

Exposure to excessive noise at work has long been recognised as having the potential to cause deafness and indeed with it, tinnitus.

There are essentially three ways in which noise can cause ear problems.

1. Acoustic trauma, i.e. that from an explosive type discharge or ‘blast’, traditionally associated with noise levels at or above 135-140 dB(C). In this situation the short duration, high intensity noise causes its effect by significant structural disruption, not only of the cochlea, but on occasions also of the vestibular part of the inner ear and indeed the middle ear, frequently characterised by a perforation of the tympanic membrane. This form of acoustic injury can be seen, for example, in employments which involve potentially explosive discharges, such as in the steel industry.

2. Steady-state or ‘ordinary’ industrial noise, where the exposure is steady and constantly excessive, or excessive on average. This is the type of noise exposure that occurs, for example, in a significant number of manufacturing processes and which will form the vast majority of compensation claims from those in allegedly noisy employment.

3. Acoustic shock. This type of acoustic injury arises as a result of a short duration sound that is perceived as loud. There is evidence that to produce this type of disorder the noise levels need not in fact have been negligent, and a significant number of claims in respect of this disorder have come from those working in the telecommunications industry, where the alleged exposure or ‘acoustic incident’ has been presented to the ear through a telephone earpiece or headset.

This short article concentrates on steady-state exposure to excessive noise, which will form the majority of the claims requiring medical expert reporting.

With any occupationally induced disorder, the best course of action an employer has is to prevent it occurring in the first place. Much has been written about, firstly, reducing sound levels in the workplace, noting that modern manufacturing processes are simply less loud than in former times, and, secondly, protecting the workforce from any such injurious noise levels, for example by job rotation, enforced work breaks, acoustic refuges and, most importantly, effective ear protection. Although the effects of exposure to excessive noise on hearing have been recognised for probably around 200 years, it is only relatively recently that the situation has been dealt with by appropriate legislation. The Health and Safety Executive has recognised that around 2 million employees are exposed to excessive noise in the workplace, each of whom has the potential to have their hearing damaged (The Control of Noise at Work Regulations 2005, Statutory Instrument 2005/1643. Norwich, UK: HMSO. ISBN 0-11-072984-6). Furthermore, there is a stringent legal obligation on employers to address this matter, not only in respect of protection from and minimisation of exposure to noise, but also appropriate health surveillance, i.e. sequential audiometry in the workplace.

It is recognised that a significant number of individuals can become noise deafened, at least in the early stages, without it being symptomatic. The role of occupational audiometric surveillance is therefore important in the early recognition of this, so that matters can be addressed appropriately.

Those of us who work in the field of ENT (Ear, Nose and Throat medicine/surgery), and indeed audiology, will regularly see hearing impaired individuals, in the majority of whom the deafness arises as a result of ageing, or presbycusis. In addition, any busy ENT practitioner will recognise a population of patients who are hearing impaired as a result of middle ear disorder, for example the effects of otitis media/glue ear/chronic suppurative otitis media etc., none of which will be occupationally induced, and where middle ear issues lead to conductive rather than sensorineural, or inner ear, hearing loss.

Since the advent of pure tone audiometry from the 1950s onwards, it became apparent that the predominant effect of steady-state excessive noise is to produce a mainly high frequency hearing loss, particularly between 3 and 6 kHz, producing the so-called noise induced ‘audiometric notch’.

The diagnosis of NIHL, not only in clinical but also in medicolegal settings, therefore requires, first of all, recognition that there has been significant and substantive exposure to excessive noise; secondly, that there is significant hearing impairment, the main medicolegal issue being whether it causes a material disability; thirdly, that there is an audiometric correlate from which the diagnosis can be derived (see above); and lastly, that there are no confounding factors which might lead to hearing loss, particularly those that can mimic the effects of exposure to noise, an example of which could be a significant head injury.

In days gone by it wasn’t that difficult to establish significant exposure to excessive noise, because the individual under examination would usually have worked for many years in the same employment in heavy engineering, steel production or coal mining, without provision of ear protection. Noise levels of this kind in heavy industrial processes, sustained for this duration, would often lead to quite striking audiometric changes, i.e. notch formation, where the diagnosis would be obvious.

Matters have become more complicated in more recent times because noise levels have frequently lessened. From the late 1980s through into the 1990s effective hearing protection generally became available, and individuals tended to have more employments, not necessarily involving exposure to excessive noise, which made it more difficult to provide an informed diagnosis.

One of the mainstay diagnostic schemes to assist the clinician in advising a Court as to whether or not an individual has been noise deafened is that published by Coles et al. (Guidelines on the Diagnosis of Noise Induced Hearing Loss for Medicolegal Purposes, by Coles, Lutman & Buffin, Clinical Otolaryngology 2000; 25: 264-273), which has been used by most experts working in the field, although initial acceptance did not come until some 7-10 years after publication, and substantively so following what became known as the Nottingham textile claims (Parkes v Meridian and others, Queen’s Bench Division, Nottingham Registry, 14/02/2007), where the usage of these Guidelines was embraced by the Courts. There is no doubt that this publication, which sets out a mainly logical and schematic aid to diagnosis, has found favour with the Courts and is used extensively to the present.

In accordance with this publication, the diagnosis of NIHL is made on the balance of probability, i.e. the legal test required, on satisfaction of three requirements. For a detailed explanation of these the reader is referred to the actual publication, but they are summarised below.

Requirement R1 is high frequency hearing loss, i.e. when a single measurement of hearing threshold level at 3, 4 or 6 kHz is at least 10 dB greater than at 1 or 2 kHz. It will be noted that this is a non-specific requirement and can indeed be satisfied by the simple effect of ageing. It basically advises the examining practitioner that there needs to be a hearing loss as a starting point, which then needs to be developed further.
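By way of illustration only, the R1 check can be sketched as below; the audiogram figures are hypothetical and the interpretation simply follows the wording above.

```python
# Sketch of the Requirement R1 check as worded above: is any hearing threshold
# at 3, 4 or 6 kHz at least 10 dB greater than a threshold at 1 or 2 kHz?
# Thresholds are in dB HL, keyed by frequency in kHz; the values are hypothetical.

def satisfies_r1(thresholds: dict) -> bool:
    high = [thresholds[f] for f in (3, 4, 6) if f in thresholds]
    low = [thresholds[f] for f in (1, 2) if f in thresholds]
    return any(h >= l + 10 for h in high for l in low)

example_audiogram = {1: 15, 2: 20, 3: 25, 4: 40, 6: 45, 8: 35}  # hypothetical
print(satisfies_r1(example_audiogram))  # True: 40 dB at 4 kHz vs 15 dB at 1 kHz
```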

Requirement R2 is significant noise exposure. This is presented in terms of the accumulated noise dose, as presented to the ear, throughout the employee’s working life. Noise dose is expressed as a Noise Immission Level or NIL and is properly a matter for acoustic advice from an engineer. This requirement is either 100 (99.5) dB NIL or 90 (89.5) dB NIL depending on the strength of the audiometric indicator (see below), where NIL, the noise immission level, is the total accumulated noise dose. The publication does not allow noise levels below 85 dB(A) LEP,d (i.e. averaged over a working day) to be taken into account in the derivation of this requirement.
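By way of illustration, and assuming the commonly used relationship NIL = LEP,d + 10 log10(years of exposure) with separate exposures combined on an energy basis, a career noise dose might be sketched as below. In practice this derivation is, as noted above, a matter for an acoustic engineer, and the figures used here are hypothetical.

```python
import math

# Sketch of a noise immission level (NIL) calculation, assuming the commonly
# used relationship NIL = LEP,d + 10*log10(years), with separate employments
# combined on an energy basis. This is an illustration only; in practice the
# calculation is a matter for an acoustic engineer. Figures are hypothetical.

def nil_for_employment(lep_d: float, years: float) -> float:
    return lep_d + 10 * math.log10(years)

def combined_nil(employments) -> float:
    # employments: iterable of (LEP,d in dB(A), duration in years); exposures
    # below 85 dB(A) LEP,d would be excluded under CLB 2000 before this point.
    return 10 * math.log10(sum(10 ** (nil_for_employment(l, y) / 10) for l, y in employments))

print(round(combined_nil([(92, 10), (88, 5)]), 1))  # hypothetical working life
```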

Requirement R3 is the audiometric formation, which should either be a notch or, if this is absent, a so-called ‘bulge’. Notch formation is self-explanatory, but a bulge analysis is undertaken when notching is not seen. It is essentially a mathematical construct, derived by comparing the actual measured audiometric thresholds of the individual under test with those derived from a presbycusis database, to take into account the effects of ageing, but adjusted for the so-called ‘misfit’ at the frequencies which form the ‘anchor points’; these are usually the thresholds at 1 and 8 kHz, but can be 0.5, 2 and 6 kHz depending upon the audiometric formations.
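The idea behind a bulge analysis can be illustrated, in a deliberately simplified form that is not the CLB 2000 procedure itself, as below: the measured thresholds are compared with age-expected values, the comparison is anchored at the anchor frequencies, and the ‘bulge’ is the excess misfit at the intermediate frequencies over and above that carried across from the anchors. All figures, including the age-expected values, are hypothetical.

```python
import math

# Deliberately simplified illustration of the idea behind a 'bulge' analysis;
# this is NOT the CLB 2000 algorithm. Measured thresholds are compared with
# hypothetical age-expected values, the comparison is anchored at 1 and 8 kHz,
# and the 'bulge' is how far the misfit at the intermediate frequencies
# exceeds the misfit interpolated between the anchors. Figures are hypothetical.

measured = {1: 20, 2: 30, 3: 45, 4: 50, 6: 55, 8: 45}       # dB HL, hypothetical claimant
age_expected = {1: 15, 2: 20, 3: 25, 4: 30, 6: 40, 8: 45}   # dB HL, hypothetical presbycusis values

anchors = (1, 8)
misfit = {f: measured[f] - age_expected[f] for f in measured}

def anchor_misfit_at(freq: float) -> float:
    # Linear interpolation of the anchor-point misfit on a log-frequency scale.
    f1, f2 = anchors
    t = (math.log10(freq) - math.log10(f1)) / (math.log10(f2) - math.log10(f1))
    return misfit[f1] + t * (misfit[f2] - misfit[f1])

bulge = {f: round(misfit[f] - anchor_misfit_at(f), 1) for f in (2, 3, 4, 6)}
print(bulge)  # excess loss (dB) at the intermediate frequencies
```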

If these requirements are all satisfied, then on the balance of probability the claimant derives a diagnosis of NIHL.

Whilst this publication is in regular use, it has attracted significant discussion, especially in recent times: a ‘safe level’ of exposure has been considered by some to lie below 80 dB(A) LEP,d, and others have considered some of the criteria to be too rigid, noting however that they can be altered by the use of so-called ‘modifying factors’.

Nevertheless, other approaches have been advocated. 

In 2015-2016 the authors of what became known as the CLB 2000 paper detailed above developed the concept further to derive noise deafness quantum, this methodology subsequently becoming known as the LCB 2015/16 approach (Guidelines for Quantification of Noise Induced Hearing Loss in a Medicolegal Context, by Lutman, Coles and Buffin, Clinical Otolaryngology 2016, August issue). Essentially, noise deafness quantum was derived by these authors by taking the excess hearing loss at 1, 2 and 3 kHz (the frequencies that determine disability), over and above that expected for age, from a bulge-type analysis modified from CLB 2000: the analysis is effectively undertaken twice, with the anchor points at 1 and 8 kHz modified according to a hypothetically derived noise deafness component at those frequencies, in a so-called ‘first’ and ‘second’ pass.

There is no doubt that CLB 2000, and indeed LCB 2015/16, clarified some muddy waters, the former informing us how to treat an audiogram where defined audiometric notching, for example, was not present, and moving away from the older concept that, once the diagnosis of noise deafness had been made, all hearing loss in excess of that expected for age was quite simply due to noise exposure. In addition, the concept of using a more defined percentile for estimating presbycusis, or the deafness of ageing, refined the analysis further and gave the Court a more accurate diagnosis and quantification on the balance of probability.

These publications have, however, attracted criticism in spite of their general acceptance both by clinicians and the Courts, and most recently the work of Professor Moore et al. (Guidelines for Diagnosing and Quantifying Noise Induced Hearing Loss, by Moore BCJ, Lowe DA and Cox G, Trends in Hearing 2022; 26: 1-21) has tried to address any perceived shortcomings with this established methodology.

These authors have introduced some controversial concepts, and their method of diagnosing NIHL is currently under scrutiny by most experts working in this field. For example, they have adopted a significantly reduced noise immission level required to at least consider a diagnosis of noise deafness, and their insistence on a defined diagnostic indicator is nowhere near as rigid as in CLB 2000. Critics of this recently proposed methodology have quite reasonably commented on the fact that, in order to diagnose a disorder in clinical medicine, there have to be signs of it in the first place, and that simple acceptance of a diagnosis of noise deafness on the basis that there is hearing impairment and a history of noise exposure is to be deprecated. This principle was set out back in the 1990s by Williams (The Diagnosis of Noise Induced Hearing Loss, by Williams RG, Journal of Audiological Medicine 1996; 6: 45-58, and ‘Advances in Noise Research: Biological Effects of Noise’ (Volume 1), edited by Prasher and Luxon, Whurr Publications, 1998).

In addition, the Moore et al. publication advises determining presbycusis from ISO 7029 (2017), a database that was not in fact signed off by the United Kingdom, or indeed the United States of America, and also dispenses with the need to correct for the well-known 6 kHz artefact arising from the use of TDH-39 headphones; it should be noted that the use of these devices could sometimes erroneously produce a diagnostic indicator at 6 kHz, which could have been taken as a diagnosis of noise deafness. The Moore et al. guidelines (so-called MLC 2022) will no doubt continue to attract controversy, particularly if their usage becomes more widespread.

In terms of what constitutes a material loss, again this has attracted controversy. Hearing impairment can be translated into disability by looking at the thresholds at 1, 2 and 3 kHz (see Assessment of Hearing Disability, by King, Coles, Lutman and Robinson, Whurr Publications, 1992), although some authorities will determine this from the thresholds at 1, 2 and 4 kHz. In many instances what constitutes a significant noise deafness quantum has been left to the Courts to determine. A reasonable view is that any noise-induced loss at 1, 2 and 3 kHz which is capable of being rounded up to 5 dB or more, i.e. 4.5 dB or anything in excess, is material. Some experts will take this as somewhat lower, but on my understanding the Courts will accept a range of, say, 3.5-5 dB, depending upon the expert advice that they receive. In my opinion a quantum of 3.5 dB should not be regarded as a material disability, but it is reasonable to point out that there will be a range of opinion on this.
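On that basis, the materiality test described above reduces to a simple check, sketched below with hypothetical figures; the 4.5 dB cut-off reflects the ‘capable of being rounded up to 5 dB’ view, and some experts would adopt a figure nearer 3.5 dB.

```python
# Sketch of the materiality test described above, using hypothetical figures.
# A noise deafness quantum assessed over 1, 2 and 3 kHz is treated as material
# if it can be rounded up to 5 dB, i.e. 4.5 dB or more; some experts adopt a
# lower cut-off in the region of 3.5 dB.

def is_material(quantum_db: float, cutoff_db: float = 4.5) -> bool:
    return quantum_db >= cutoff_db

print(is_material(4.7))  # True on the 4.5 dB view
print(is_material(3.8))  # False on the 4.5 dB view; arguably material on a 3.5 dB view
```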

Courts are frequently required to apportion blame for noise deafness. Increasingly this is undertaken with the assistance of an acoustic engineer, who will apportion the noise exposures accordingly, but in the absence of this it is usually taken from the proportion of noise sustained on a temporal basis in each noisy employment where negligent exposure has taken place.
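A simple hypothetical illustration of apportionment on a temporal basis, in the absence of engineering evidence, is set out below; the employers and durations are invented.

```python
# Hypothetical illustration of temporal apportionment: blame is divided in
# proportion to the years of negligent noise exposure with each employer.
# The employers and durations are invented.

negligent_years = {"Employer A": 12, "Employer B": 6, "Employer C": 2}
total_years = sum(negligent_years.values())
apportionment = {employer: years / total_years for employer, years in negligent_years.items()}
print(apportionment)  # Employer A 60%, Employer B 30%, Employer C 10%
```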

There is no medical or surgical treatment for sensorineural hearing loss, however caused, and damages can sound in the requirement for aiding. NHS aids are free at the point of delivery, and this includes maintenance, upgrades and batteries, but costings for private aiding are frequently included in the schedule of loss. Over the course of a lifetime, given that a device will last 5-7 years, this can mount up, especially as some of the more modern devices can be obtained for between £2,000 and £3,000. Reasonable costings are obtained from Coffin and Tarrant v the Ford Motor Company, adjusted for inflation, which was set at mid-price levels obtained from Specsavers (although this is not an endorsement of this service; other outlets are available).
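A rough, purely illustrative costing along these lines is sketched below; the remaining life expectancy, replacement interval and unit price are all assumptions within the ranges quoted above, and in practice the figures are taken from Coffin and Tarrant v the Ford Motor Company, uplifted for inflation.

```python
# Rough, purely illustrative private aiding cost over a lifetime: a device
# replaced every 5-7 years at a price in the £2,000-£3,000 range quoted above.
# The remaining life expectancy, replacement interval and unit price are
# assumptions, and no discounting or inflation adjustment is applied.

remaining_years = 30          # hypothetical remaining life expectancy
replacement_interval = 6      # years per device, within the 5-7 year range
cost_per_device = 2500.0      # pounds, within the £2,000-£3,000 range

devices_needed = -(-remaining_years // replacement_interval)  # ceiling division
print(devices_needed * cost_per_device)  # hypothetical lifetime cost in pounds
```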

Tinnitus, or the hearing of noises for which there is no external correlate, is a common symptom seen in clinical ENT and audiological practice. It can be and indeed often is an accompaniment of NIHL.

Tinnitus is an entirely subjective sensation, which cannot be objectively verified, and so we have to rely on how the individual under examination describes it, but taken alongside what is written in the medical records.

Most experts in the UK will grade the severity of tinnitus as set out in the published guidelines of the working group commissioned by the British Association of Otolaryngologists, Head and Neck Surgeons (Guidelines for the Grading of Tinnitus Severity: the Results of a Working Group Commissioned by the British Association of Otolaryngologists, Head and Neck Surgeons, 1999, Clinical Otolaryngology 2001; 26: 388-393), and this grading is a necessity for the Court, on which it will determine the level of compensation. A claim for NIHL will attract a significantly increased level of damages if significant tinnitus is also present and is considered to be noise induced.

Most experts, including myself, uphold the principle that whatever caused the hearing loss will have caused the tinnitus, providing the tinnitus did not pre-exist the onset of noise deafness, but this isn’t always the case. It is not, for example, if the tinnitus occurs only in an asymmetrically deafened ear where the asymmetric loss is not noise induced, or if it arises as a result of a new event, such as a head injury or ear infection. Another example is if it arises straight after a short duration loud sound which would not by itself cause hearing loss.

Some tinnitus is quite simply physiological, i.e. it falls within normal experience and is therefore insignificant. Tinnitus of this nature and severity, in my opinion, should not be regarded as noise induced, even if there is NIHL. Similarly, where noise deafness is present but is regarded by the Court as de minimis, i.e. not material, it should follow that the noise exposure makes no material contribution to the tinnitus the claimant experiences.

If tinnitus persists for more than two years, it is generally considered to be permanent. There is no medical or surgical treatment for it, although tinnitus retraining in the local Audiology Department, with or without the use of masking devices, can be tried; frequently, however, it will be suppressed by the simple expedient of aiding, which may be necessary anyway for any sensorineural hearing loss however caused.

This article finishes by reminding the reader that noise deafness is preventable and, indeed, now that the relevant personal protective equipment is easily obtained and employers recognise their responsibilities, NIHL claims have significantly reduced in volume over recent times and no doubt will continue to do so.