COPENHAGEN: The expanding use of artificial intelligence in healthcare demands stronger legal and ethical protections for patients and medical staff, the World Health Organisation’s Europe office warned in a report released on Wednesday.
The findings come from a study on how AI is being adopted and regulated in European health systems, based on input from 50 of the 53 states in the WHO European region, which also covers Central Asia.
According to the report, only four countries, about 8%, have so far put in place a dedicated national AI strategy for health, while seven more are in the process of developing one.
"We stand at a fork in the road," Natasha Azzopardi-Muscat, the WHO Europe's director of health systems, said in a statement.
"Either AI will be used to improve people's health and well-being, reduce the burden on our exhausted health workers and bring down healthcare costs, or it could undermine patient safety, compromise privacy and entrench inequalities in care," she said.
Almost two-thirds of countries in the region are already using AI-assisted diagnostics, especially in imaging and detection, while half of countries have introduced AI chatbots for patient engagement and support.
The WHO urged its member states to address "potential risks" associated with AI, including "biased or low-quality outputs, automation bias, erosion of clinician skills, reduced clinician-patient interaction and inequitable outcomes for marginalised populations".
Regulation is struggling to keep pace with the technology, WHO Europe said, noting that 86% of member states cited legal uncertainty as the primary barrier to AI adoption.
"Without clear legal standards, clinicians may be reluctant to rely on AI tools and patients may have no clear path for recourse if something goes wrong," said David Novillo Ortiz, the WHO's regional advisor on data, artificial intelligence and digital health.
WHO Europe said countries should clarify accountability, establish redress mechanisms for harm, and ensure that AI systems "are tested for safety, fairness and real-world effectiveness before they reach patients".