Sheldon H. Jacobson and Dr. Janet A. Jokela: What should we fear with AI in medicine?

Will the threats associated with artificial intelligence be as bad as some fear? Or will AI be relatively benign? Could the answer be somewhere in between?

Perspectives on AI abound. Whether it be in medicine, security or education, new applications in search of an AI advantage continue to grow. This has prompted calls for well-intentioned restraint and regulation or, at the very least, slower growth and proliferation.

To alleviate some of that fear, we should consider one area in which AI is proving to offer significant benefits and potential: medicine and the delivery of health care. A recent article in Medscape Medical News highlights studies that give AI the advantage in delivering more precise and reliable medical care. Eric Topol, founder and director of the Scripps Research Translational Institute, has argued that the future of medicine lies with AI, with benefits such as a reduction in medical errors and delivery of more robust diagnoses and treatment plans.

Some will argue that AI has no feelings and therefore cannot replace functions that demand human interactions, empathy and sensitivity. While it is true that AI has no feelings or ethics, AI medical systems do not need to feel; their patients do. And what patients want and certainly need from their physicians is their time and their attention, which demands patience, something that AI systems have in abundance. Indeed, patience may be construed by some as a surrogate for human empathy and sensitivity, while impatience may be interpreted as the antithesis of such human characteristics.

As corporations buy medical practices, ultimately influencing the practice of medicine and the delivery of health care services, physicians and health care providers are pushed to squeeze more health care dollars into tighter time windows. This opens the door to more misdiagnoses and poorer health care delivery. Indeed, the inherent conflict between money and service will only continue to grow as more health care practices are purchased by corporations, with shareholder interests as the overriding objective.

AI medical systems can process information far more quickly than any human clinician. They also have access to, and can digest, many times more medical data and knowledge than human physicians and clinical providers. This means that an AI medical system may spot an unusual condition that could expedite a diagnosis, identify an appropriate treatment plan and save lives, all at a lower cost. It may even identify a novel condition by exhaustively ruling out all known diseases, effectively creating new knowledge through a process of elimination.

Yet AI medical systems have their limitations and risks.

The plethora of data used to train AI medical systems has come from physicians and human-centric health care delivery. If such sources are eventually overwhelmed by AI-generated data, AI medical systems will come to rely primarily on data produced by AI-driven medical care. Will this compromise the quality of care that AI medical systems deliver?

Then there is the matter of understanding how AI medical systems work. Much of their output is observational, based on complex statistical associations. Few, if any, medical personnel understand such models, how they use data and how their outputs are obtained. Of course, much of clinical medicine is evidence-based, resting on clinical trials or extended observational experience. Viewed in this context, AI medical systems take a similar approach, but with the time needed to glean insights compressed enormously.

Then there are the issues of data bias and privacy.

Medical data is inherently biased, since it comes from a biased world. To cleanse such data would change it, with unintended consequences that may skew AI medical systems in unexpected ways and may even compromise their efficacy. In the short term, if data bias issues are to be addressed, they should be managed at the back end, much as human systems manage them today. The long-term objective is more complex: to have the AI systems themselves prune such biases in, shall we say, an unbiased manner.

The other concern is data privacy, a worry that often appears overstated and amplified, stoking fear. Privacy safeguards should always be considered, yet there are no foolproof ways to guarantee complete and total privacy. Many people sacrifice personal privacy for personal convenience, often without a second thought.

People often confuse personal privacy with personal control of their data. Yet permitting our personal data to be accessed with our blessing, as we do when using social media, does not keep us any safer than when our data is accessed by others without our knowledge.

AI medical systems need anonymized data as inputs. Protecting the integrity of the anonymization process is what we can reasonably expect.

Anything that cannot be easily understood may elicit fear. AI certainly qualifies. In a world filled with uncertainty and risk, AI systems of all kinds offer tremendous benefits. Yet the uncertainty and risk that surround us will not miraculously go away with AI. There are no free lunches in this regard.

Prudence and caution are reasonable. Efforts to stop or even slow AI advances are what we should really fear.

Sheldon H. Jacobson, Ph.D., is a professor of computer science at the University of Illinois at Urbana-Champaign. Janet A. Jokela, M.D., MPH, is the senior associate dean of engagement for the Carle Illinois College of Medicine at the University of Illinois at Urbana-Champaign.

