WHO cautions on use of ChatGPT, Bard in healthcare; wants concerns addressed before use

WHO proposes that these concerns be addressed, and clear evidence of benefit be measured before their widespread use in routine health care and medicine

Untested AI-based tools could harm patients, WHO warns

The World Health Organization (WHO) has said that it is imperative to carefully examine the risks involved in using artificial intelligence (AI) tools such as ChatGPT, Bard, and BERT in healthcare.

The WHO’s concerns about the AI tools include that the data used to train the models may be biased, generating misleading or inaccurate information that could pose risks to health, equity, and inclusiveness.

While the WHO is enthusiastic about the appropriate use of technologies, including generative AI tools, to support health-care professionals, patients, researchers, and scientists, “there is concern that caution that would normally be exercised for any new technology is not being exercised consistently with large language model tools (LLMs)”, it said.

LLMs include ChatGPT, Bard, BERT, and other tools that imitate understanding, processing, and producing human communication.

“This includes widespread adherence to key values of transparency, inclusion, public engagement, expert supervision, and rigorous evaluation,” the global health body said in a statement.

“It is imperative that the risks be examined carefully when using LLMs to improve access to health information, as a decision-support tool, or even to enhance diagnostic capacity in under-resourced settings to protect people’s health and reduce inequity,” it added.

The WHO said that “precipitous adoption of untested systems could lead to errors by health-care workers, cause harm to patients, erode trust in AI and thereby undermine (or delay) the potential long-term benefits and uses of such technologies”.

LLMs also tend to generate responses that can appear authoritative and plausible to an end user, yet these responses may be completely incorrect or contain serious errors, especially on health-related matters.

Further, the WHO said that AI may not protect sensitive data (including health data) and can be misused to generate and disseminate highly convincing disinformation in the form of text, audio, or video content that is difficult for the public to differentiate from reliable health content.

“WHO proposes that these concerns be addressed, and clear evidence of benefit be measured before their widespread use in routine health care and medicine — whether by individuals, care providers or health system administrators and policy-makers,” the statement said.

[With Inputs from IANS]
