Does Artificial Intelligence Help or Hinder Healthcare?

Medical professionals are debating the potential benefits and risks of using artificial intelligence in the healthcare sector. Credit: mikemacmarketing / Wikimedia Commons CC BY 2.0

Since the release of ChatGPT in November last year, and of other similar platforms, there has been a flurry of excitement about the possible applications of artificial intelligence (AI) across a wide breadth of areas, including healthcare.

Proponents of the new technology are optimistic that AI could revolutionize healthcare. For example, it could be used to assist with diagnoses or to cut down waiting times at local doctors’ practices. Patients might even be able to obtain a diagnosis without leaving the comfort of their own home.

However, some medical professionals and tech experts have urged patients to be cautious. There are growing concerns that AI applications have made misdiagnoses and provided improper medical advice. The prevailing advice is that patients should seek out qualified (human) health professionals.

Artificial intelligence and healthcare

Out of all the AI applications currently on the market, ChatGPT is probably the best known, although tech giants like Google and Microsoft also have their own applications, and Meta recently shifted its focus towards AI.

Even before popular platforms like ChatGPT came to the forefront of public consciousness, there was growing excitement that AI could soon play a positive role in the healthcare sector.

In 2018, for instance, Lab100, a prototype medical clinic and research lab, was created by Cactus, a design studio based in Brooklyn, in collaboration with Mount Sinai, a medical school based in New York. The innovative clinic employs AI and data capture techniques to meticulously generate a profile of an individual’s physical and mental health.

The experimental lab features different stations that patients can visit to assess various categories of their health.

Elsewhere, healthcare professionals have experimented with how AI can be used to supplement more traditional medical practices.

Two surgeons from the United Kingdom’s National Health Service (NHS) have started using Microsoft’s Azure cloud-based software and its machine learning and AI dashboard to organize waiting lists, assign staff their duties more efficiently, and communicate with patients.

“This is a really exciting project that we hope can help the NHS nationally at a time when the service is facing increased demand and a backlog of operations”, said one of the surgeons, Professor Mike Reed, during an interview with Industry Europe last year.

This is just a snapshot of the potential healthcare applications for AI, but medical professionals are certainly interested to see what technological developments might be relevant to their practice.

Problems

Despite this wave of optimism, there are concerns that AI might be misused. This is particularly true in cases where patients with limited medical knowledge have turned to AI applications like ChatGPT to act as their own personal medical assistants.

The main worry is that the AI might be giving false or misleading medical advice to individuals who would otherwise seek the help of a trained medical professional.

Jeremy Faust, an emergency medicine physician at Brigham and Women’s Hospital in Boston, tested ChatGPT’s ability to provide a diagnosis for a fictional patient. Initially, the diagnosis looked legitimate; however, when Faust looked for the sources cited by the AI, he found that they did not exist.

Concerns like these have led some medical professionals to call on the relevant authorities to implement minimum quality standards for next-generation AI technologies dealing with healthcare.

Google has issued its own warning of sorts against AI-generated medical misinformation, even as its own efforts to develop a popular chatbot-integrated search feature are underway.

“On topics where information quality is critically important—like health, civic, or financial information—our systems place an even greater emphasis on signals of reliability,” reads an updated passage from Google Search’s guidance about AI-generated content.

As AI looks set to play an increasingly important role in a diverse array of sectors, including healthcare, finance, and logistics, the way in which this technology is implemented with respect to reliability, factuality, and ethics will determine whether it becomes a help or a hindrance in our daily lives.
