“Doctor, what app do you recommend for my condition?”
“Doctor, what do you think about this new technology?”
I receive questions like these almost every week in my clinic, and I unfortunately don’t have great answers to give my patients.
While some digital technologies, such as mobile applications and wearables, are commonplace in the broader world around us, their directed use in health care is less established. The reason is simple: there is generally little evidence, if any, to support the use of these technologies, particularly when compared with traditional treatment strategies such as drugs and devices.
As a result, in clinical practice, I most often advise patients against using newer health technologies explicitly as part of their treatment or care plan. The “fail fast, fail often” mantra of the technology startup world does not fit well with the “first, do no harm” ethos of the medical profession. This difference in culture, combined with a still-evolving regulatory landscape, has resulted in a proliferation of digital technologies. More than 300,000 health care mobile applications are available today, with nearly 200 more added every day. With a majority of these solutions marketed directly to patients, it is no wonder that patients come to clinic with questions.
To help find better answers, I have partnered with engineers at the Johns Hopkins University Applied Physics Laboratory and faculty members at the Bloomberg School of Public Health to redesign the way health technology is fundamentally understood. We established a framework for more objective, comprehensive, transparent, and standards-based assessments of health technology to ensure that all stakeholders in medicine (patients and families, physicians, payers) have an easier way to know where quality resides.
Our approach is to create a “digital health scorecard,” modeled on the evaluations performed by Underwriters Laboratories and Consumer Reports, nonprofit organizations that test products for safety and quality.
The scorecard rates how a particular solution performs across four domains. Clinical validation asks what evidence exists to show how a solution affects a health outcome. Technical validation compares a solution’s performance against an existing gold standard and also assesses basic qualities such as security, privacy, and interoperability. Usability assesses how well a solution functions in the hands of a user across standard parameters, such as navigation and ease of use. Cost reflects not only the price but also the total cost of ownership, including installation, maintenance, and, where available, an estimate of potential savings.
In addition, we are asking an even more basic question: Does the technology do what the intended end user (patient) wants it to do? In all of these areas, the methodology by which the technology is assessed is transparent, as is the scoring. As a result, anyone viewing the scorecard will know exactly why a solution performed as well or as poorly as it did.
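For readers who think in code, here is a minimal sketch of how a single scorecard entry might be represented and aggregated. The four domain names come from the framework above; everything else, including the 0-100 scale, the equal default weights, and the sample app and its values, is a hypothetical illustration, not our actual scoring methodology.

```python
# A minimal, hypothetical sketch of a digital health scorecard entry.
# The four domains come from the framework described above; the 0-100
# scale, default weights, and sample values are illustrative only.
from dataclasses import dataclass

DOMAINS = ("clinical_validation", "technical_validation", "usability", "cost")

@dataclass
class Scorecard:
    solution: str
    scores: dict[str, int]      # domain -> 0-100 score
    rationale: dict[str, str]   # domain -> why it scored that way (transparency)

    def overall(self, weights: dict[str, float] | None = None) -> float:
        """Equal-weighted average across domains unless weights are supplied."""
        weights = weights or {d: 1.0 for d in DOMAINS}
        total = sum(weights.values())
        return sum(self.scores[d] * weights[d] for d in DOMAINS) / total

# Example entry for a hypothetical cancer-support app.
card = Scorecard(
    solution="ExampleOncologyApp",  # hypothetical product, not a real app
    scores={"clinical_validation": 40, "technical_validation": 75,
            "usability": 85, "cost": 60},
    rationale={"clinical_validation": "No published outcome data",
               "technical_validation": "Encrypted, but interoperability untested",
               "usability": "Clear navigation in user testing",
               "cost": "Free to install; subscription for full features"},
)
print(f"{card.solution}: overall {card.overall():.0f}/100")
```

Because every domain score is stored alongside its rationale, anyone reading such an entry can see not just the overall number but exactly why a solution earned it, which is the transparency the scorecard is meant to provide.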
We are actively piloting this approach with industry research funding in the field of mobile applications for cancer. As part of this effort, we have conducted design sessions with patients, families, and all levels of providers (nurses, pharmacists, nurse practitioners, physicians) to understand their specific needs. We are also collaborating with the Johns Hopkins Technology Innovation Center to help conduct the actual testing of these solutions.
Our hope is that with the digital health scorecard, patients and physicians can more easily sift through the clutter of health technology to identify the solutions that have meaningful clinical benefit.
Simon C. Mathews is head of clinical innovation at the Armstrong Institute for Patient Safety and Quality and a faculty member in the Malone Center for Engineering in Healthcare. This column originally appeared in the fall 2019 issue of Hopkins Medicine magazine.