GMC study reveals doctors’ views on the use of AI in medicine

Doctors who use AI see benefits for their own efficiency and for patient care and feel confident managing its risks, according to a qualitative research study commissioned by the General Medical Council (GMC).

The research gives further insights following a study by the Alan Turing Institute, published in October 2024, in which nearly 1,000 doctors were surveyed on their experiences with and perceptions of AI in their practice.

It revealed that 29% had used some form of AI in the past 12 months, and 52% were optimistic about its use in healthcare. From this research, a sample of survey respondents was approached to take part in the in-depth study.

Community Research carried out a series of in-depth interviews with 17 doctors who had used AI in the past 12 months, to find out more about the types of AI they were using, how well they understood the risks, and what they did if they disagreed with the output of an AI system.

Many doctors interviewed felt that NHS IT systems would need to improve to pave the way for a broader roll-out of AI technologies, noting that many such tools are highly specialised and still in development.

Shaun Gallagher, director of strategy and policy at the GMC, said: “It’s clear that AI’s use in healthcare will continue to grow and projects like these give valuable insights into how doctors are using these systems day-to-day.

“These views are helpful for us as a regulator, but also for wider healthcare organisations, in anticipating how we can best support the safe and efficient adoption of these technologies now, and into the future.”

Researchers spoke to a variety of doctors at different career stages and across specialties, from doctors in training to consultants working in general practice, radiology and emergency medicine.

Generative AI was used for administrative support, producing clinical scenarios and generating images for teaching, but doctors said that they did not input confidential information.

Doctors using decision support systems in primary care did so to help prioritise which patients to see, find medication conflicts in prescriptions and suggest diagnostic tests. In secondary care, uses included assisting with the assessment of stroke patients and the administration of local anaesthesia.

But doctors also understood that these emerging technologies presented risks. They saw potential for AI-generated answers to be based on data that could itself be false or biased. They also acknowledged possible confidentiality risks in sharing patient data, and the potential for over-reliance and deskilling.

Many said that they felt confident to override decisions made by AI systems if necessary, and that ultimately the responsibility for patient care remains with them.

Some speculated that this may change as systems become more sophisticated, and looked to regulators, such as the GMC, for more guidance going forward.

Research commissioned by The Health Foundation, published in July 2024, found that 76% of NHS staff support the use of AI to help with patient care and 81% favoured its use for administrative tasks.
