AI in healthcare: a legal and ethical balancing act


By Marie Le Frapper

Paralegal Marie Le Frapper evaluates the different regulatory approaches to the use of artificial intelligence in healthcare to see which strikes the best balance between providing users with adequate protection and encouraging growth and investment

Artificial intelligence (AI) is undoubtedly a hot topic. It is also one of the most high-profile and controversial areas of technological development. Whilst we are still far from robots taking over the earth, its use has become much more common thanks to the improvement of analytical techniques and the increased availability of data.

The healthcare sector is one in which the benefits of AI are undeniable: the technology can do what humans already do, but more efficiently, and can go further still, for example by finding links in genetic code. However, its use raises several legal and ethical questions that we need to address. This is why governments and international organisations are now focusing on creating an AI-friendly regulatory framework.

As with any new technological development, AI raises many questions that governments and populations need to grapple with, and work on the subject is ongoing at all levels, including in the UK and the European Union. For many of us, AI and algorithms are an opaque science which, we are told, is used for the benefit of us all. However, there is scope for the exact opposite to happen. In response, the High-Level Expert Group on AI appointed by the European Commission set out, in its 2019 ethics guidelines, seven requirements regarded as essential to the ethical use of AI, with a focus on transparency, safety, fairness and accountability.

Data is what fuels AI: without it, machines would not be able to learn how to ‘think’. This is why the protection of patients’ medical data is paramount and is now an industry and government priority around the world. The health sector, however, is particularly exposed to cyber threats precisely because of the sensitive nature of that data.

Since this data sits at the forefront of scientific and technological innovation, with the life sciences sector worth many billions, it is also a very attractive target for cybercriminals. Earlier this year, for example, Ireland’s Department of Health and Health Service Executive were the targets of a cyberattack: a direct assault on critical infrastructure that resulted in the cancellation of non-emergency procedures and delays to treatment. Similarly, in 2017, the NHS was disrupted by the ‘WannaCry’ ransomware. Ensuring that both public and private healthcare providers have the tools to protect patients’ data will increase confidence, and many more people will be willing to share their medical information with the organisations creating AI, so that databases large enough for machine learning become available.

The framework surrounding data protection is ever-changing. Last year, in the Schrems II case, the Court of Justice of the European Union invalidated the Privacy Shield decision granting adequacy to the US. The ruling has had a significant impact on transatlantic trade and data sharing, and it cast a shadow over the UK as the end of the Brexit transition period approached. In the UK, the Supreme Court is due to rule in Lloyd v Google on whether opt-out class actions can be brought in data breach cases. As healthcare providers and bodies are targets of choice, they will require greater protection given the risk of facing potentially expensive claims.

Greater public confidence also encourages the supply of information from more diverse populations. We already know that some diseases do not manifest themselves in the same way in patients of different ethnic backgrounds. A very simple example can be seen in an AI tool created to detect cancerous moles: if, in its early stages of development, the AI is trained on a database composed mostly of images of white skin, it will be less likely to find cancerous patterns on darker skin.


Another issue that arises out of the use of AI is discrimination. The Dutch government used an algorithm named SyRI to detect possible social welfare fraud based on criteria such as the amount of running water used by a household. Freedom of information requests revealed, however, that SyRI was deployed predominantly in low-income neighbourhoods, exacerbating existing biases. Eventually, the District Court of The Hague ruled that SyRI violated article 8 of the European Convention on Human Rights, which protects the right to respect for private and family life. The benefits created by AI should not be obscured by biased machine learning, which proper human oversight can correct.

As AI is democratised and the above challenges become more obvious, governments are focusing on creating a framework that strikes a balance: an environment that is welcoming for businesses in this area, such as life sciences organisations and pharmaceutical companies, but that also offers sufficient protection for our data.

The cash injections and investments made in the life sciences sector during the pandemic are set to continue as the Prime Minister seeks to strengthen the UK’s role as a leading country in this sector. Since leaving the European Union, the UK government has announced plans to invest £14.9 billion in research and development in 2021/22, rising to £22 billion by 2025, with the life sciences industry and technology as a focus.

In a draft policy paper released on 22 June 2021, entitled ‘Data saves lives: reshaping health and social care with data’, the Department of Health and Social Care set out its plan for the future at a moment when our health data is key to reopening society. Chapters 5, 6 and 7 of the paper focus on empowering researchers with the data they need to develop life-saving treatments, developing the right technical infrastructure, and helping developers and innovators to improve health and care, with a specific emphasis on encouraging AI innovation and creating a clear and understandable AI regulatory framework. For example, amendments were made to the government guidance on AI procurement to encourage NHS organisations to become stronger buyers, and a commitment was made to develop, by 2023, unified standards for the efficacy and safety testing of AI solutions, working closely with the Medicines and Healthcare products Regulatory Agency and the National Institute for Health and Care Excellence.

Another initiative is the AI in Health and Care Awards. In the first round, there were 42 winners, including companies such as Kheiron Medical Technologies for MIA (“Mammography Intelligent Assessment”). MIA is deep learning software developed to address challenges in the NHS Breast Screening Programme, such as reducing missed diagnoses and tackling the delays that put women’s lives at risk. Such software has a significant impact on public health, saving lives through early diagnosis and reducing the cost of the treatment the NHS provides. Indeed, research has shown that around 20% of biopsies are performed unnecessarily.

Although the UK is no longer bound by EU law, developments in this sector on the continent need to be kept in sight. In April 2021, the European Commission published a draft regulation on harmonised rules on AI. It takes a risk-based approach, and it is worth noting that the draft prohibits the use of AI for social scoring by public authorities and for real-time facial recognition (the technology considered in the 2020 Bridges v South Wales Police case). Maximising resources and coordinating investments is also a critical component of the European Commission’s strategy: under the Digital Europe and Horizon Europe programmes, the Commission intends to invest €1 billion per year in AI.

Furthermore, now that the UK has been granted adequacy, meaning the EU recognises that the level of protection afforded to personal data in the UK is comparable to that afforded by EU legislation, data can continue to flow between the two, and significant divergence is unlikely to arise in the near future. In the same spirit as the COVAX initiative, greater collaboration, and the development of AI trained on databases combining EU and UK data, would not be surprising.

Governments and stakeholders now look to AI as the future of healthcare. Although its use raises many ethical questions, the benefits are likely to outweigh the risks, provided the surrounding regulatory framework offers both flexibility for innovators and stringent requirements protecting our medical data. The approaches taken by the UK and the EU seem to focus on the same relatively non-contentious criteria, but the UK government appears more willing to invest in the sector, building on the country’s reputation in genome sequencing. Anyone with an interest in new technology, healthcare and data protection should keep upcoming developments in this field in sight, as they promise exciting discussions.

Marie Le Frapper is a paralegal working for the Government Legal Department. She graduated with a degree in law and French law from the University of London and now hopes to secure a training contract to qualify as a solicitor.
