In It Together

Peter Rudd-Clarke, Legal Director, specialising in the life sciences and consumer products sectors, and Emma Kislingbury, Associate Solicitor, specialising in medical/life sciences and product liability, Reynolds Porter Chamberlain (RPC) LLP

Email: peter.rudd-clarke@rpc.co.uk; emma.kislingbury@rpc.co.uk

Artificial Intelligence (AI) presents clinicians with a challenge: can they harness AI to improve outcomes for patients without creating increased litigation risk? The answer that came through loud and clear from the Medical AI and Robotics Conference 2020 in February was: "Yes" – but only if doctors and hospitals work with manufacturers and computer scientists to minimise the risks.

AI risks

Litigation can arise when a patient experiences a level of medical care that does not meet their expectations. Claimant lawyers focus on the party responsible for the alleged error and sue the provider of the diagnosis, treatment or care. A manufacturer may also be a target where a product is involved.

AI complicates the picture because it brings with it some unique risks. Those debated at the conference included:

Accountability

"Automation bias" is well documented. This is where people generally (and clinicians are no exception) defer to recommendations made by AI, even in the face of their own experience and training. In a clinical setting, that creates a risk that a patient may suffer a worse outcome. It can be difficult to unpick who is responsible. The doctor, who could have used their judgement to override decisions based on AI? The manufacturer, who could have foreseen and prevented the error? The software engineer whose decisions may have led to a glitch in the programming underpinning the AI?

AI has limits

The limits of AI may not be fully understood. Too much trust can be invested in software and a blind eye turned to its flaws. The data used in machine learning or diagnosis may be low quality or incomplete, leading to poor outcomes. It is increasingly understood that AI is at risk of being programmed with hidden preferences or a demographic bias. AI may not always be the complete solution to complex problems.

Human error

Ultimately, despite the exciting potential offered by computers, the primary AI risk remains human decision making. Such decision making is present throughout the supply chain: from the design of an algorithm, through the selection of data sources and decisions over the deployment of an AI service or product, to reliance on AI in prescribing treatment. The Courts will consider the liability of doctors, hospitals or manufacturers where the level of care has been negligent, or a patient has suffered injury due to a defective medical device. Liability will turn on decisions made by people in those organisations.

Rebalancing the risks through collaboration

Early collaboration is key to mitigating the risk of litigation. Clinicians can consider the following:

Communication

The sharing of knowledge and information between developers, manufacturers, hospitals and clinicians will help to pre-empt and, potentially, avoid some of the risks that AI creates. Manufacturers and developers should actively engage with hospitals and the individuals using their products; well-trained end users are less likely to experience issues with a product, which in turn reduces the risk of claims.

Allocation of liability

But what about when things go wrong? How can liability be allocated whilst avoiding a situation in which all parties fight each other to escape blame?

Last year the UK Government issued an updated "Code of Conduct",¹ which sets out 10 principles for the development of "safe, ethical and effective data-driven health and care technologies". The code recommends defining the commercial strategy for a product at the outset, which should include identifying how liability is allocated between all those in the supply chain. Clear terms around who bears responsibility when things go wrong should help to avoid a situation where each component part of the chain works in isolation, concerned only with protecting its own interests. In contrast, the code endorses collaboration, transparency and accountability between all parties throughout the development process.

Guidance from regulations

Generally, defence lawyers for a healthcare provider or manufacturer will seek to rely on applicable regulations to defend their clients. In the case of a doctor, following accepted practice and protocols, supported by the appropriate regulatory body, can provide evidence of acceptable care. In the case of a manufacturer, compliance with regulations designed to ensure product safety can go a long way towards persuading a judge that a product was not defective.

Regulation which is clear and effective could also help mitigate the risk of AI litigation. Principles and guidance have been issued, and frameworks, both legal and ethical, are being discussed. But whilst some general laws will apply (including, at EU level, the General Data Protection Regulation (GDPR) and the Product Liability Directive), there is not yet a clear regulatory framework in place for the use of AI in healthcare. The pace of change in AI has been such that regulators have their work cut out in designing and enforcing a system that balances innovation with safety for the benefit of patients, healthcare providers and manufacturers alike.

In January, the CEO of NHSX, the team responsible for driving forward digital transformation in healthcare in the UK, met with the heads of 12 regulators and public bodies, including the MHRA, NICE and the Information Commissioner. The meeting focussed on the regulatory challenges posed by AI, and led to agreement by all that clear, innovation-friendly processes and regulations are necessary. Key points emerged from that meeting which, we hope, will enhance collaboration between the parties.

Looking ahead

The advent of new AI techniques means that multiple parties are going to have to collaborate as never before to limit the risk that AI could lead to sub-standard treatment. If they do not, the promise of AI may not be fully realised due to a combination of poor planning and avoidable financial exposure. It is early days, but guidance from the UK Government and NHSX points the way forward.

References

[1] UK Government, Code of conduct for data-driven health and care technology, available at: https://www.gov.uk/government/publications/code-of-conduct-for-data-driven-health-and-care-technology/initial-code-of-conduct-for-data-driven-health-and-care-technology