AI, ethics and the human touch in medicine

With every new application will come a new duty of care or danger. Don’t be too entranced by the hype.

Anyone who recalls the “turbo” trend of the 1980s will have a foreboding sense of déjà vu about the sudden ubiquity of artificial intelligence in every aspect of our lives.

What began with the Swedish motor manufacturer Saab unveiling its 99 Turbo Coupé in 1977 soon had other manufacturers scrambling to attach the latest technological buzzword to everything from alarm clocks to hairdryers.

The craze reached its apotheosis with the spoof launch, by Viz magazine, of the Satsuma Castanet XR4 Turbo supercar, which promised potential buyers that, if they bought one, they would never sleep alone again.

If you believe the latest hype around AI, then the screamingly unreliable chatbots with which you are invited to engage while trying to renew your home insurance are poised to take over the world, consigning humanity to the role of trussed-up gimps.

While most of the general chatter around the technology can, most charitably, be dismissed as “uninformed”, there are already some encouraging signs of its potential to change the world of public healthcare provision, substantially for the better.

This month, it was revealed that researchers at the University of Aberdeen, NHS Grampian, and an industry partner had made a significant breakthrough in using AI to speed up the detection of breast cancer.

Their three-year project, funded by UK Research and Innovation, involved analysing 220,000 mammograms from more than 55,000 patients, using artificial intelligence-powered breast-screening technology.

It proved highly effective at identifying potentially missed cancers, known as interval cancers, which under current screening procedures remain undetected until patients develop symptoms. The technology could have prompted the earlier recall of 34.1% of the women who went on to develop interval cancers.

Meanwhile, in a separate study of 80,000 mammograms from women who had undergone screening in Sweden, an AI-assisted system detected 20% more cancers than human radiologists.

The global market for AI in medical diagnostics is experiencing significant growth, projected to reach $2.85 billion in 2023 – up by 43.1% on the previous year. The market is expected to grow by 42.5% annually over the next four years, by which point it will be worth $11.75 billion.

It includes services such as medical imaging, robotic process automation, machine learning, natural language processing, and rule-based expert systems, along with sales of various medical devices used in providing AI-based diagnostics services.

AI also has the potential to identify, and even design, up to 50 new therapeutic drugs in the next decade, representing a $50 billion opportunity for the global pharmaceutical industry, with a 20-40% reduction in pre-clinical costs.

There is fevered speculation about how the technology might be used in other areas of medical technology to improve patient outcomes and reduce the burden on hard-pressed, and often under-resourced, medical practitioners.

Imagine a future where AI-powered smartphone cameras could analyse skin conditions, providing instant feedback on potential issues. By enhancing early detection, AI can save countless lives and improve overall healthcare outcomes.

It may also soon be able to contribute to the development of personalised medicine, with treatment plans tailored to each individual patient’s unique characteristics, including their genetic makeup, lifestyle factors, and medical history.

By analysing vast datasets and applying predictive analytics, algorithms can identify optimal treatment options and predict potential responses to specific therapies. The integration of artificial intelligence in surgery has the potential to optimise treatment efficacy, minimise adverse effects, and improve patient satisfaction.

However, while pharmaceutical corporations, MedTech manufacturers and national healthcare providers see only the benefits of huge cost savings, quicker and more effective ways of treating more patients and better health outcomes, not everyone shares their unequivocally optimistic view.

AI will undoubtedly change the roles and responsibilities of medical professionals, with routine diagnostics being handled by AI systems, freeing up doctors to focus on more specialist and complex tasks. Just as motor mechanics have shifted to specialise in managing on-board computing systems in cars, medics may need to adapt and evolve their roles to work alongside AI systems.

For this, they will need to learn new skills, and some employers may struggle to find enough qualified AI experts, or may have to pay a premium to secure their services.

The likelihood is that many AI developments will end up being concentrated in a few consultancy firms, leading to a potential monopolisation of expertise and, potentially, increased costs for end-users.

The growing use of AI in healthcare also raises important ethical considerations. While technology can provide data-driven recommendations and diagnostics, it lacks the capacity for empathy and sensitivity.

The human touch is crucial in healthcare, especially when addressing mental health issues. A balance must be struck between using AI for efficient diagnostics and ensuring that human healthcare professionals maintain a central role in providing emotional support and understanding to patients.

A recent study by the University of Arizona Health Sciences found that 52% of participants would choose a human doctor rather than AI for diagnosis and treatment. Researchers found that most patients aren’t convinced the diagnoses provided by AI are as trustworthy as those delivered by human medical professionals.

The omnipresence of AI monitoring and its capacity for data collection may also raise concerns about privacy and individual agency.

While AI may offer real-time insights into our health and provide us with tailored recommendations, it also poses a risk of intrusion and control over personal lives. Patients may experience anxiety and uncertainty when AI systems predict their future health outcomes, leading to potential psychological impacts.

Part of the value of a healthcare system lies in its power of mediation – its ability to protect patients from the full impact of existential reality.

A system which can tell an 18-year-old patient, using AI, that they will develop a lifestyle-related cancer by the age of 40 will also have a duty of care to ensure the patient is psychologically equipped to deal with such news, and to present them with advice on how to change their lifestyle.

While some people may want to hear, years in advance, how long they have to live, the main beneficiaries of such information will be life insurance companies, which will see an opportunity to deal in ever greater certainty while minimising, or even eliminating, risk.

While there will be some patients who believe that having a chip implanted under their skin to permanently monitor their health will drastically improve their life chances, there will be others for whom it is an unacceptable intrusion.

While we may still be in the early days of AI in healthcare, already there is an urgent need to establish ethical guidelines and regulatory standards to govern its use.

Ensuring transparency, accountability, and patient autonomy must be prioritised. Decision-making algorithms must be explainable and unbiased, and the data collected must be handled responsibly and securely to protect patient privacy.

Times and attitudes change and driving a Satsuma Castanet XR4 Turbo would now be considered distinctly antisocial. Creating a solid framework for the ethical and manageable implementation of AI in healthcare will ensure it doesn’t follow the same pathway.

Ivor Campbell is CEO of Callander-based Snedden Campbell, a specialist recruitment consultant for the medical technology industry.
