The Role of AI in Healthcare: Risk or Reward?

All NHS trusts in England will be offered a new artificial intelligence (AI) tool designed to cut radiotherapy waiting times for cancer patients. The tool will guide doctors in tasks such as calculating where to direct the therapeutic radiation beams that kill cancerous cells. Researchers say the AI can contour patients’ bones and organs from scans around two and a half times faster than doctors can. A process that usually takes between 25 minutes and 2 hours can be dramatically shortened, saving resources and freeing doctors to focus on more patients.
Unlike many other countries, which remain somewhat wary of AI, the UK has taken a pro-innovation approach to regulating it, with efforts focused on promoting development and growth in the tech and AI sectors. The government has therefore been investing in projects like this to support the NHS and the wider health sector, although this is the first medical imaging device programme to be introduced to date.
One benefit of using AI in healthcare is that it can spot anomalies and warning signs faster, and often more reliably, than humans can. AI can also monitor health conditions over long periods by tracking patients’ symptoms and health data and recommending treatment options. For instance, the Aberdeen Royal Infirmary is currently trialling a new kind of AI to assist radiologists in reviewing thousands of mammograms per year, helping to identify early signs of breast cancer.
These are clear examples of how AI can be used to expedite our research and improvement in the health sector, finding greater solutions to identify and cure diseases, but should we be cautious of this rapid growth?

What Are the Risks of AI in Healthcare?

The biggest concern with AI is that it is not yet properly regulated. Countries around the world are trying to develop ways of regulating AI, but the reality is that this is difficult: we do not fully understand the technology, and it is developing rapidly. The UK has announced that it is in no rush to regulate; the priority is to let AI innovate and transform businesses rather than restrict growth through regulation. But when AI is used in sensitive areas such as healthcare, that approach carries real risks. If the priority is to innovate first and regulate later, will it not be too late for our data?
The most obvious risk of using AI in healthcare or health tech is exposure to data breaches. As we have seen, AI is now being used to diagnose cancer, to spot diseases and inconsistencies in X-rays and scans, and to analyse patient health data. Many UK hospitals and doctors’ and dentists’ surgeries still use paper notes, so as we rely more on AI to track and monitor people’s health, ever more highly sensitive information will be stored and processed digitally. This naturally leaves us vulnerable to cyber-attacks and data leaks that could put personal health data in the wrong hands.
AI is a fast-developing technology that can easily get out of hand when we cannot keep up with its growth. We have seen this with ChatGPT, which evolved so quickly, and became such a data risk to users, that some countries banned it while others were forced to draw up regulation plans.
Data breaches could lead to an influx of identity theft cases, where criminals use stolen personal data to impersonate others and gain access to healthcare treatment, prescription medicines, health equipment or health insurance. Stolen health data could also leave people vulnerable to blackmail or ransom threats. Above all, such attacks could cause patients significant distress which, depending on their medical situation, could create new health problems or aggravate existing ones. This could leave many healthcare or health tech companies open to compensation claims.

How Can Healthcare and Healthtech Organisations Protect Patients From AI Risks?

The reality is that AI is transforming our world. With tools like ChatGPT, and with the NHS deploying software to detect early signs of cancer, there is no doubt that AI offers unique opportunities to improve our healthcare system. However, AI is currently poorly regulated, and although France, the US and the EU are working on AI regulation plans, we are far from striking the right balance between innovation and protection.
Although there is little regulation of AI, we do know the extent of the potential risks, and there are ways in which healthcare and health tech companies can protect patients.

Transparency 

Many AI regulation proposals, such as the French CNIL action plan, emphasise transparency, auditing and control of AI systems. With AI developing at such speed, the best thing health tech companies can do is keep their clients and patients aware of the potential risks involved, how their data will be used and how it may affect them.
It’s more important than ever to provide patients with sufficient information on how healthcare AI software works and how it has been developed. For instance, does it learn from user activity and therefore have to process and retain personal health data in order to recognise other health concerns? As such, can user data be accidentally leaked or disclosed to other patients or professionals?

Consent 

Healthcare organisations should give as much information as they can in a clear and consistent format, perhaps both physically and digitally for greater accessibility. They should also give patients consent forms so that individuals can agree to how their data will be used and have the chance to opt out if they are not comfortable. It may also be wise to give patients the opportunity to obtain independent advice before undergoing specific treatment or checks via an AI-driven device.

Data Protection

Since health data is highly sensitive and unique to each individual, it must be securely guarded against hackers and other cyber threats. The obvious way to handle sensitive data is to ensure the right protections are in place, such as encryption, multi-factor authentication, and regular stress tests and risk assessments covering cybersecurity and ransomware threats.

How Can Gerrish Legal Help?

Gerrish Legal is a dynamic digital law firm. We pride ourselves on giving high-quality and expert legal advice to our valued clients. We specialise in many aspects of digital law such as GDPR, data privacy, digital and technology law, commercial law, and intellectual property. 
We give companies the support they need to successfully and confidently run their businesses whilst complying with legal regulations without the burdens of keeping up with ever-changing digital requirements.
At Gerrish Legal, we work with health tech companies and we are a proud mentor on the EIT Health platform too. 
We are here to help you. Get in contact with us today for more information.