Safeguard The Use Of New Technologies:
Ensure that the use of Healthcare Technology, Communications & Data, is Safe, Accountable and Only Used for the Benefit of Patients
New medical technology and devices have improved our healthcare considerably over the last 50 years, and they have become increasingly vital in keeping us healthy. Great advances have been made. For example, who could argue with smart ambulances, full of technology that starts to diagnose what is wrong with you the minute you are wheeled inside, and that sends that critical data to the hospital in advance of your arrival so staff are ready to treat you as soon as you arrive?
We have telemedicine solutions linked with wearables and smartphones to remotely track our health. Healthcare technology has driven the growth of robotic surgery, remote treatment, and holography. Progress, yes, but progress that also carries risk for patients.
The technology itself can be a risk. It has been proven that devices delivering treatment internally, such as insulin pumps, can be hacked to decrease or increase the dosage; that Bluetooth connections can be used to alter the parameters of pacemakers and internal cardiac defibrillators; and that tracking systems can identify and locate such medical devices even while they are implanted.
However, perhaps the biggest risk to patients is the misuse of all the data this technology collects about us. This can happen on a grand scale. For example, the DNA testing company 23andMe and the pharma giant GSK signed a $300 million deal to use the genetic resources of 23andMe's 12 million customers for drug development.
Today we have the added risk of medical Artificial Intelligence and Machine Learning (AI). AI is a set of technologies and automated systems able to perform tasks that normally require human intelligence, often faster and more reliably. Its use in healthcare is being hailed with great promise. However, it also comes with great risk.
While AI can, for example, more speedily identify the need for blood tests, or the risk of a patient developing a heart attack or cancer, much of the data on which it bases its diagnoses has so far been laden with inbuilt biases of gender and ethnicity. The use of such data by AI reinforces and amplifies those discriminations, and the medical consequences could well be false diagnoses and the wrong kind of treatment or intervention.
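How biased records feed through to biased predictions can be made concrete with a small, entirely hypothetical sketch. It assumes two patient groups with identical true disease rates, but historical records that captured only half of one group's cases; a naive model trained on those records simply reproduces the gap. All names and numbers here are invented for illustration, not drawn from any real dataset.

```python
import random

random.seed(0)

# Hypothetical scenario: true prevalence is 20% in BOTH groups, but
# historical records under-diagnosed group B, capturing only half of
# its true cases as positive labels.
def make_records(group, n, label_capture_rate):
    records = []
    for _ in range(n):
        truly_ill = random.random() < 0.20
        labelled = truly_ill and random.random() < label_capture_rate
        records.append((group, labelled))
    return records

data = make_records("A", 10_000, 1.0) + make_records("B", 10_000, 0.5)

# A naive model "learns" each group's diagnosis rate from the biased labels.
def learned_rate(group):
    labels = [lab for g, lab in data if g == group]
    return sum(labels) / len(labels)

rate_a, rate_b = learned_rate("A"), learned_rate("B")
print(f"Learned rate, group A: {rate_a:.2f}")  # near the true 20%
print(f"Learned rate, group B: {rate_b:.2f}")  # roughly half: the historical gap persists
```

The point of the sketch is that nothing in the model is "prejudiced"; it faithfully learns the discrimination already baked into the records, which is exactly the reinforcement effect described above.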
AI medical errors raise the question of who should be held accountable. This can leave clinicians and other healthcare professionals in a vulnerable position, especially if the AI model they are using does not include their intervention or opinion at any point in the process.
CPR accepts that the continuing development of medical technology is critical to the progress of healthcare, but it can cause significant harm to patients if not robustly regulated.