The triple aim of health care (improve the patient experience, improve population health, and lower per capita cost) has remained the driving force in health policy since Donald Berwick, Thomas Nolan and John Whittington first described it in Health Affairs in 2008. Information technology is now being used in ways unimagined in the past to drive the triple aim, often with less success than had been anticipated. The challenge has been, and continues to be, using information technology effectively while maintaining the aspects of medical care that require the human touch: harnessing the power of the computer in what must remain a caring pursuit.
It is clear that our quest to incorporate computers into medical care, and perhaps even to have them direct medical care, risks losing the essence of medicine: the humanism and caring tradition that should be paramount. I have written in an earlier post (My Recent Hospital Stay and the Care of the Computer) about my own experience in the hospital, where I felt ignored as the staff worked diligently to answer the computer’s demands. Yet I
know that our ability to properly use information technologies will help
improve medical care. The real question, now and going forward, is whether the health care system will effectively use technology to improve care and foster the humanism inherent in that care, or whether the technology itself will define a new system, driven only by a zeal for efficiency and the best science of disease, that leaves the hands-on humanistic and spiritual tradition of medicine in the dustbin of history.
When I was in college, in the late 1960s and early 1970s, I had a friend who was getting a degree in the early field of computer engineering and programming. He was a strange sort of guy who used to rail about the evils of computers. He would talk about how they were actually imbued with malign intentions and evil spirits. When asked why, then, he wanted to go into computer science, he would answer that someone had to control them in order to defeat the evil inherent in them. In today’s world, and especially in the world of computers in medicine, I wonder if he wasn’t on to something.
Certainly in popular culture the ideas of computers having an evil dimension, and even dominating humans, are examined both by those at the cutting edge of science and by those in the arts. Physicist Stephen Hawking has been one of the more vocal scientists warning of the risk that computers, via the use of artificial intelligence, could “spell the end of the human race.” Bill Gates and Elon Musk have also voiced their concerns, calling for more research on the potential for computers to learn to “think” for themselves and to evolve on their own, and on where this could lead, including the possibility of information technology controlling human action. As recently as June of this year, Steve Wozniak, co-founder of Apple, created media buzz by declaring that in the future humans will be the pets of computers. Science fiction has explored this concept for many years, with Isaac Asimov inventing the “Three Laws of Robotics,” which aimed to protect mankind from the control of machines. He then proceeded to build stories showing the inadequacy of the three laws in protecting humans and humanity. As written by Asimov, the three laws are:
“A robot may not injure a human being or, through inaction, allow a human being to come to harm. A robot must obey orders given it by human beings except where such orders would conflict with the First Law. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.”
The laws all stress the need to help men and women; however, Asimov’s stories show the reality of unintended consequences, in which good intentions can cause bad, even evil, results. The reality is that when human arrogance believes our control over the world around us (including the technology we create) can be total, the results often prove us wrong in painful ways. In medical care this may prove to be especially dangerous, because the results are highly personal and could be life-threatening. Medicine is filled with low-probability, high-consequence events, and information technology is invariably designed for populations rather than for the black swans: the unpredictable, rare, but high-impact events that can radically affect a person’s life. Our systems approaches to date have also not taken into account the values, beliefs and social structures within which we all live.
While the three laws of robotics do not directly address the same issues as the triple aim, the idea that certain hard-wired goals or laws can address all eventualities and all permutations is similar. Both the triple aim and the three laws of robotics are inherently good; however, any laws developed by man that are ultimately taken as holy writ and hard-wired into computer systems can be interpreted in ways that create pain for individuals. In medicine, one of the dangers of technology being programmed to address certain components of the triple aim is that, while service and quality are implied in the first aim, it is not stated clearly who defines either service or quality. More and more, studies show that the “system” definitions, as determined by those who design and run health care organizations, are different from physician and
nurse definitions, which in turn are different from patient definitions. A particular decision may not support the goals of population health and lower per capita cost that drive the information technology but, due to the unusual nature of the disease and of the patient’s psychosocial situation, may be exactly what helps that person in need.
David Shaywitz, one of the best thinkers in health care and a blogger for Forbes, recently wrote a short post entitled “First, We Devalued Doctors; Now, Technology Struggles to Replace Them,” in which he describes the challenge of trying to have technology drive personalized medicine, which depends so much on knowing the psychosocial dynamics of each person being treated. He writes, “I realized there was something
that seemed a little sad about the idea of developing extensive market
analytics and fancy digital engagement tools to simulate what the best doctors
have done for years – deeply know their patients and suggest treatments
informed by this understanding.”
I agree. It is sad that
society may be abdicating the sacred trust of knowing the person to a computer rather
than to a caring professional. But it is
not too late to change the new paradigm being written. We can
effectively find a way to control the computer and use the capabilities
inherent in that technology to augment the humanism of the professional helping
those who are in need. We can prevent a purely technological approach to the triple aim from going the way of the three laws of robotics in literature and becoming the fodder for tragic stories of individual pain. It will take new information technology and new approaches that are carefully designed to foster humanism, as defined by the patient and
the family. Caring can be improved if we
learn how to use information technology in a way that supports and helps our
professionals focus more on understanding each patient as a unique individual
and not just a set of pathologies. My strange friend from forty years ago had confidence that he could fight the inherent evil of computers, and I too am confident that we can harness the power of technology to improve professionals’ ability to care for patients.