As generative artificial intelligence (AI) continues to make inroads into several industries, the World Health Organization (WHO) has raised alarm over the risks the technology poses to healthcare.

Per a report, the WHO’s main concern lies with large multi-modal models (LMMs), citing their novelty and the absence of long-term data from real-world use. LMMs are generative AI models that can accept input data from several sources and generate outputs such as text, video, or images.

WHO’s director of digital health, Alain Labrique, said the functionalities of LMMs offer several use cases for healthcare and medical research. Labrique identified five key areas for incorporating generative AI in healthcare, including diagnoses, drug synthesis, and simple clerical tasks.

Other use cases for the technology include patient-guided applications and medical education for training health workers. The WHO submits that the broad functionality of LMMs lies in their mimicry of human behavior and their “interactive problem-solving” abilities.

Despite the multiple use cases, the WHO issued a grim warning that LMMs may produce inaccurate outputs due to defects in their training data. It also warns of the risk of automation bias, which stems from blind reliance on algorithms without seeking a second opinion.

“As LMMs gain broader use in health care and medicine, errors, misuse and ultimately harm to individuals are inevitable,” the WHO warned.

To mitigate the associated risks, the global health agency has rolled out a slew of recommendations to policymakers and healthcare providers. The WHO argues that a proactive approach toward regulation offers the best chance to rein in attendant risks, building on existing regulatory templates.

Top of the list for the WHO is the guarantee of patients’ privacy and the ability for users to opt out of AI-backed healthcare services. The WHO is also particular about the security standards of LMMs, urging service providers to take steps to prevent breaches by bad actors.

Scientists should be roped in

A vital area of the WHO’s recommendations is the inclusion of scientists and medical personnel in the development of LMMs. The WHO takes it up a notch by angling for patients to be involved in their development to ensure that AI “contributes to the well-being of humanity.”

“Generative AI technologies have the potential to improve health care but only if those who develop, regulate and use these technologies identify and fully account for the associated risks,” WHO Chief Scientist Jeremy Farrar stated.

AI has been making significant incursions into medicine in recent months, underscored by its use in cancer detection, research, and evidence-based medicine.

In order for artificial intelligence (AI) to work right within the law and thrive in the face of growing challenges, it needs to integrate an enterprise blockchain system that ensures data input quality and ownership—allowing it to keep data safe while also guaranteeing the immutability of data. Check out CoinGeek’s coverage on this emerging tech to learn more about why enterprise blockchain will be the backbone of AI.

Watch: Artificial intelligence needs blockchain
