NHS Greater Glasgow and Clyde, NHS Lothian and AI evaluation company Aival have begun testing the technical performance of AI tools as part of a £1 million project looking at how well AI integrates with existing clinical systems and workflows.
Funded by Innovate UK, the project aims to assess the safety and effectiveness of AI technology, creating a validation framework that will support assessments of these tools prior to procurement and help develop “less invasive and more cost-effective options”. This involves looking at AI systems used for diagnosing head trauma and lung cancer, with a focus on “improving care for patients and supporting NHS staff”.
While InnoScot Health and NHS Scotland recognise the many benefits of AI integration, they have also noted that “fast deployment in response to growing demand should not outweigh the need for rigorous testing”. As such, Aival will be independently assessing scalable solutions that can help with validating clinical AI tools, with plans in place to “leverage anonymised patient data” and provide ongoing monitoring.
Commenting on the importance of the project, the head of innovation at InnoScot Health, Robert Rea, said: “This is vital work in order to lay the foundations for a safe, sustainable AI future across NHS Scotland. Everyone recognises just how transformative AI could be for Scottish healthcare, but it simply cannot be deployed or accelerated lightly, with both short and long-term efficacy needing to be evaluated. The NHS cannot run before it walks.”
This aligns with Scotland’s Public Service Reform Strategy, published in June, which outlines three key commitments: to be preventative, to better join up and to be efficient. The strategy relies on public service leaders to drive lasting change by addressing root causes and providing support early, and sets out 18 workstreams covering leadership, cultural change, data sharing, digital public services, digital skills and intelligent automation.
AI assessment and regulation within healthcare
In a recent interview with Max Gattlin, commercial director at X-on Health, we spoke about the use of AI technology in healthcare, discussing its rapid emergence, value and whether patients are likely to be receptive to its implementation. We also explored current guidelines in place, with Max identifying several key areas to consider: “data governance, compliance and clinical safety”.
Somerset NHS Foundation Trust recently shared a series of communications explaining to patients how the trust is using technologies such as AI, ambient voice, virtual nursing and generative AI, with the aim of improving transparency. The communications set out how AI will be used in patient care settings and reassure patients that they will be told beforehand and given the option to opt out.
Last month, the UK government published a series of human-centred frameworks alongside a practical toolkit for the safe implementation of generative AI, featuring nine tips for leaders described as “critical to success” and three phases: adopt, sustain and optimise. The accompanying hidden risks toolkit for AI is designed to support the assessment of barriers to safe adoption, pre-empt risks from scaling, aid in the design of effective training and help organisations with ongoing monitoring.