Artificial intelligence in dentistry and healthcare should provide clinicians with information, not recommendations, in order for teams to offer the most appropriate treatment.
This is according to a new white paper looking at the impact and use of AI within healthcare, including dentistry.
A collaboration between the University of York’s Centre for Assuring Autonomy, the MPS Foundation and the Improvement Academy hosted at the Bradford Institute for Health Research, the paper says the greatest threat to AI uptake in healthcare is the ‘off switch’. If frontline clinicians see the technology as burdensome or unfit for purpose, or are wary about its impact on decision-making, their patients and their licences, they may refuse to use it.
Titled ‘Avoiding the AI “off-switch”: make AI work for clinicians, to unlock potential for patients’, the paper makes a number of recommendations for healthcare providers:
- AI tools should provide clinicians with information, not recommendations
- Revise product liability for AI tools before allowing them to make recommendations
- AI companies should provide clinicians with the training and information required to make them comfortable accepting responsibility for an AI tool’s use
- AI tools should not be considered akin to senior colleagues in clinician-machine teams
- Disclosure should be a matter of well-informed discretion
- AI tools that work for users need to be designed with users
- AI tools need to provide an appropriate balance of information to clinician users.
According to the paper, the recommendations are based on current and near-future AI decision-support tools and the real-world contexts in which clinicians work.
‘With AI at the heart of many nations’ healthcare policies, understanding its potential and risks is critical,’ the authors say.
‘To translate this understanding into meaningful policy and practice, it is time to move beyond an awareness of the general issues AI raises in healthcare toward much more targeted evaluations of its impact.’
The white paper’s authors urge the government, AI developers and regulators to urgently consider the recommendations.
‘Clinicians should feel confident to reject an AI output that they believe to be wrong, or even suboptimal for the patient,’ the paper reads.
‘They should resist any temptation to defer to an AI’s output to avoid or reduce the likelihood of being held responsible for negative outcomes.’
AI in dentistry prompts calls for indemnity
It also says that AI companies that develop tools which make recommendations should complement the software with the necessary indemnity.
‘AI companies should focus more on creating AI tools which provide information to clinicians, over direct recommendations,’ it adds.
‘If they develop and sell recommender systems, in both their contracts and in their public statements they should commit to indemnifying (in full or part) clinicians who follow the AI’s recommendations. [They should also] accept liability for patient harm resulting from AI’s recommendations.’
You can access the full report here.