WHO warns on use of ChatGPT, Bard in healthcare

GENEVA: The risks of using artificial intelligence tools such as ChatGPT, Bard, and Bert in healthcare must be analysed carefully, the World Health Organisation said on Tuesday, May 17.

While the WHO is enthusiastic about the appropriate use of technologies, including AI tools, to support health-care professionals, patients, researchers, and scientists, it cautioned that "there is concern that caution that would normally be exercised for any new technology is not being exercised consistently with large language model tools (LLMs)".

LLMs, which imitate the understanding, processing, and production of human communication, include ChatGPT, Bard, Bert, and others. "This includes widespread adherence to key values of transparency, inclusion, public engagement, expert supervision, and rigorous evaluation," the international health organisation said in a statement.

To safeguard people's health and reduce inequity, it emphasised, "risks must be carefully considered when using LLMs to improve access to health information, as a decision-support tool, or even to improve diagnostic capacity in under-resourced settings."

According to the WHO, "rapid adoption of untested systems could lead to errors by health-care workers, harm patients, erode trust in AI and thereby undermine (or delay) the potential long-term benefits and uses of such technologies."

The data used to train AI models may be biased, the WHO noted, and can therefore generate misleading or inaccurate information that poses risks to health, equity, and inclusiveness.

LLMs can also produce responses that appear credible and authoritative to a user but are wholly inaccurate or contain serious errors, particularly when they concern health.

Further, as per the WHO, AI tools may not protect sensitive data (including health data), and they may even be used to create and spread highly persuasive misinformation in the form of text, audio, or video content that is difficult for the general public to distinguish from reliable health information.

The WHO urges that these concerns be addressed, and clear evidence of benefit be measured, before such tools are used broadly in routine medical care and treatment, whether by individuals, care providers, or health system administrators and policymakers.
