Artificial intelligence (AI) and related technologies are becoming increasingly prominent in healthcare, bringing with them the need for efficient regulation and evaluation to ensure that all innovations implemented and adopted within the NHS are safe, effective and deliver value for money.

Regulation was a key theme at a recent conference, Intelligent Health UK, and is examined in more detail below, along with the formation of the AI and Digital Regulations Service, the challenges of regulation and hopes for the future.

Clarifying the regulatory framework and embedding training of the health workforce are key to accelerating the use of AI in the UK health system, speakers told a keynote session at Digital Health Rewired 2023 in March.

Adopters also “lack the clarity and direction to confidently deploy” AI technologies, according to Clíodhna Ní Ghuidhir, principal scientific advisor for AI at the National Institute for Health and Care Excellence (NICE), who spoke to Digital Health News.

This can cause them to miss important steps or focus their time and resources in the wrong areas, she said, adding that this makes it difficult for them to navigate the path to market.

“Regulation and evidence generation standards are essential in maintaining trust between tech developers and the health and social care professionals using these technologies.

“They also create a level playing field so that developers must all meet the relevant standards on safety, effectiveness and fairness.”

Hatim Abdulhussein, national clinical lead for the AI and digital workforce at NHS England and medical director at Kent Surrey Sussex Academic Health Science Network, told Digital Health News that he believes regulation is crucial, particularly in developing healthcare workers’ confidence in AI.

“There is no doubt that regulation is core to the adoption of these technologies amongst healthcare professionals and our work to understand and develop healthcare worker confidence in AI is clear on this,” he said.

“We want to enable the use of safe, effective, and trusted technology, and therefore we must have clear standards and guardrails in place.”

The AI and Digital Regulations Service

Formerly known as the multi-agency advisory service (MAAS), the AI and Digital Regulations Service is a cross-regulatory advisory service supporting developers and adopters of AI and digital technologies.

The service provides guidance across the regulatory, evaluation and data governance pathways, bringing benefits to the entire health and social care landscape through safe and more effective use of technology.

Abdulhussein explained that NHS England launched the service earlier in the year with the aim of “bringing together the key domestic regulators to be a central source of regulatory and best practice guidance”.

The service, which was introduced at the Intelligent Health session, is a collaboration between four organisations – the National Institute for Health and Care Excellence (NICE), the Medicines and Healthcare products Regulatory Agency (MHRA), the Health Research Authority (HRA) and the Care Quality Commission (CQC) – allowing it to provide comprehensive guidance at each stage of the regulatory, evidence generation and market access pathway.

Service for NHS and social care adopters now live

The service for digital product developers has been running in beta phase since the autumn, and Ní Ghuidhir confirmed that it “will be quickly updated to account for new developments”.

She also revealed that the service for NHS and social care adopters is now live as of today (12 June). The combined web-based service aims to accelerate the development and deployment of safe, innovative, value-adding technologies in health and social care. 

NHS clinicians have welcomed the new service, including Dr Suraj Menon, consultant radiologist and clinical director at Dartford and Gravesham NHS Trust.

“As an end user of these technologies, the AI and Digital Regulations Service gives me confidence that the products I use meet high safety standards, are effective and provide value for money,” he said.

Haris Shuaib, consultant physicist and head of clinical scientific computing at Guy’s and St Thomas’ Hospital NHS Foundation Trust, called the service a valuable resource, adding that his role of facilitating AI adoption at the trust has previously been a challenging and risky responsibility.

“This new service provides a common and coherent adoption pathway. It will help people like me to implement such technologies safely and at speed for the betterment of patients,” he explained.

Speed of innovation, level of evidence key challenges

Despite the essential need to properly regulate and evaluate new innovations before they enter the healthcare market, the speed with which technologies are being developed and implemented is a key challenge, Abdulhussein told Digital Health News.

“The other challenge is defining what the most appropriate level of evidence should be, and how efficiently and appropriately can the right threshold be reached. The NICE Evidence Standards Framework is a helpful starting point.”

Ní Ghuidhir believes that some of the key challenges relate to the wide range of regulations and their complexity.

She explained that “it can be difficult for innovators to understand the breadth of regulations that apply to their product, what actions they need to take to ensure compliance and when in the product’s life cycle each regulation should be considered.”

“It’s important to plan ahead and get this right,” she added.

Regulatory hopes and aspirations

As AI continues to boom and technology continues to be implemented at a rapid rate, having a regulatory landscape within health and care that is reviewed on a regular basis will become increasingly significant.

Abdulhussein believes that AI used safely and ethically can help improve care delivery, the patient experience, and system intelligence.

“However, regularly reviewing the landscape, collaborating with cross industry partners both domestically and internationally to understand the right approach to safe ethical AI, and building the right infrastructure, skills and training will allow us to have a pro innovation regulatory approach to AI so we can achieve its potential to transform health and care for our citizens,” he added.

In March, the Secretary of State for Science, Innovation and Technology presented to Parliament ‘A pro-innovation approach to AI regulation’, a document focused on changing the AI regulation landscape in a more general sense, but much of it can be applied to health technology.

The document states: “Government intervention is needed to improve the regulatory landscape. We intend to leverage and build on existing regimes, maximising the benefits of what we already have, while intervening in a proportionate way to address regulatory uncertainty and gaps.”

It promises that its regulatory framework is designed to be “adaptable and future-proof”.

Ní Ghuidhir has a very optimistic view of the future in terms of AI technology, its place within the care pathway and how it is regulated, adding that part of her job is helping companies develop evidence generation strategies appropriate for their use cases.

“It often inspires me to learn about mechanisms they’ve developed for safety assurance and improving the models. But it also gives insights into how young this field is, because it can still be quite tricky to figure out exactly where an AI tech should intervene in the care pathway to add most value,” she said, adding that within a decade developers were likely to have a better sense of the regulatory parameters.

“The fact the regulations and evaluation guidance will evolve is one of the main reasons we built a living website, so we can quickly update it as new regulations come in.”