Catalyze Innovation that Advances Health

Methodology

Why We Created the EHR User-Centered Design Evaluation Framework 

The American Medical Association and MedStar Health's National Center for Human Factors in Healthcare developed this framework, based on the science of user-centered design (UCD), to increase the transparency of electronic health record (EHR) vendor usability processes. UCD best practices have a direct bearing on patient safety and clinician satisfaction. We analyzed the certification reports that vendors submitted to the Office of the National Coordinator (ONC) for 20 common EHR products (15 ambulatory and 5 inpatient) for inclusion in ONC's Certified Health IT Product List (CHPL). ONC's design certification criteria focus on eight capabilities and require vendors to attest to a UCD process, conduct a summative usability test, and provide details about their testing. Although this information is publicly available, the vendor reports are difficult to understand, and it is not clear how they compare to UCD best practices. The framework does not evaluate actual usability as experienced by end users.

How This Framework Was Created

We created the EHR User-Centered Design Evaluation Framework by comparing ONC's UCD certification requirements with evidence-based best practices from the human factors and usability literature. The framework is a collaboration between a research team at MedStar Health's National Center for Human Factors in Healthcare and the American Medical Association.

Based on the information vendors are required to report and the human factors and UCD literature, we created three dimensions to evaluate vendor reports in the CHPL: User-Centered Design Process, Summative Testing Methodology and Summative Testing Results.

We further divided the summative testing methodology into the following subcomponents: 

  • The number and clinical background of participants
  • Use case rigor
  • Measures of effectiveness, efficiency and satisfaction

We also divided the summative testing results into the following subcomponents:

  • Effectiveness
  • Described areas for improvement

For each vendor report, we extracted the relevant information for each dimension and subcomponent across the eight capabilities that require testing. We removed any tasks intended for administrator roles rather than clinicians. Because some vendors used multiple tasks to test a particular capability, we averaged across tasks and capabilities. For example, if a vendor tested 16 participants for task A and 12 participants for task B, we took the number of participants to be the mean of the two (14). Similarly, other subcomponents, such as the effectiveness rating, were averaged across all capabilities and tasks intended for clinician use.
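The averaging described above can be sketched in a few lines. This is an illustrative encoding of the procedure, not code from the report; the function name is our own.

```python
def average_across_tasks(values_by_task):
    """Mean of a per-task metric (e.g., participant counts) across the
    clinician-facing tasks used to test a capability."""
    if not values_by_task:
        raise ValueError("at least one task is required")
    return sum(values_by_task) / len(values_by_task)

# Example from the text: 16 participants for task A, 12 for task B.
participants = average_across_tasks([16, 12])  # mean = 14.0
```

The same averaging is applied to other subcomponents, such as per-task effectiveness ratings.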

Our goal in developing this framework is to draw attention to UCD best practices that all EHR vendors should follow. UCD is an essential component in improving the safety of EHRs and the satisfaction of physicians, patients and other medical professionals using these products. The National Center for Human Factors in Healthcare and the American Medical Association believe that all vendors should aspire to achieve a minimum score of 15 for their entire product based on this framework.

Each dimension below is listed with its sub-components, the evidence-based recommendation and the scoring rubric.

User Centered Design Process

N/A

A user-centered design process puts the needs of the user at the forefront of design and development, resulting in a product that is more likely to meet those needs.1-2

  • 5 points for statement of process
  • 0 points if no process stated

Summative Testing Methodology

Number of participants

Summative testing with 15 participants will reveal over 85% of usability problems.3-4

  • 1 point for 15 or more participants
  • 0.5 points for 10-14 participants
  • 0 points for fewer than 10 participants

Clinical background

Participants should represent the end-user demographic of the product.2, 5-6

  • 2 points if all participants are clinicians
  • 0.5 points if at least one participant is a clinician
  • 0 points if no participants are clinicians

Use case rigor

Use cases should be as representative as possible of real use in the live environment. They should allow evaluation of both clinical and usability aspects and include challenging scenario elements.2, 5

  • 1 point for detailed use cases
  • 0.5 points for vague use cases
  • 0 points for no use case description

Appropriate measures

Usability measures of effectiveness, efficiency and satisfaction should be used in summative testing.7

  • Effectiveness8
  • Efficiency6
  • Satisfaction6
  • 1 point if all measures accurately captured
  • 0 points if any measure is not accurately captured

Summative Testing Results

Percent effectiveness

The success rate for first-time users during summative testing should be 80-95%.9

  • 3 points if effectiveness is 80% or greater
  • 0 points if less than 80%

Areas for improvement identified

Detailed areas for improvement should be provided to drive the next iteration of design and development.2,7

  • 2 points for a substantive description
  • 0.5 points if minimally addressed
  • 0 points if no information is provided
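The rubric above can be encoded as a simple scoring function. This is our own illustrative sketch, not code from the report; the parameter names and category labels are assumptions chosen to mirror the bullet points.

```python
def score_report(ucd_process_stated, n_participants, clinical_mix,
                 use_case_detail, all_measures_captured,
                 effectiveness_pct, improvement_detail):
    """Sum the framework's points; a perfect report scores 15."""
    score = 0.0
    # User-centered design process: 5 points if stated, else 0.
    score += 5 if ucd_process_stated else 0
    # Number of participants: 1 / 0.5 / 0.
    if n_participants >= 15:
        score += 1
    elif n_participants >= 10:
        score += 0.5
    # Clinical background of participants: 2 / 0.5 / 0.
    score += {"all": 2, "some": 0.5, "none": 0}[clinical_mix]
    # Use case rigor: 1 / 0.5 / 0.
    score += {"detailed": 1, "vague": 0.5, "none": 0}[use_case_detail]
    # Appropriate measures: all-or-nothing.
    score += 1 if all_measures_captured else 0
    # Percent effectiveness: 3 points at 80% or greater.
    score += 3 if effectiveness_pct >= 80 else 0
    # Areas for improvement: 2 / 0.5 / 0.
    score += {"substantive": 2, "minimal": 0.5, "none": 0}[improvement_detail]
    return score

# A report meeting every recommendation earns the maximum of 15 points.
best = score_report(True, 15, "all", "detailed", True, 95, "substantive")
```

Summing the maximum of each sub-component (5 + 1 + 2 + 1 + 1 + 3 + 2) confirms that 15 is the highest score attainable under the framework.
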

Non-Endorsement Disclaimer: In publishing this report, neither MedStar Health nor the American Medical Association is endorsing any EHRs or other technology. This report was not sponsored, funded or in any way facilitated by any EHR or similar technology vendor. User decisions regarding the selection of EHR technologies, products and services must take into account many varied characteristics, which are beyond the scope of this report.

Citations

  1. Norman, D. A., & Draper, S. W. (1986). User centered system design. Hillsdale, NJ.
  2. Nielsen, J. (1994). Usability engineering. Elsevier.
  3. Faulkner, L. (2003). Beyond the five-user assumption: Benefits of increased sample sizes in usability testing. Behavior Research Methods, Instruments, & Computers, 35(3), 379-383.
  4. Guidance for Industry and Food and Drug Administration Staff - Applying Human Factors and Usability Engineering to Optimize Medical Device Design (2012). US Food and Drug Administration.
  5. Kushniruk, A. W., & Patel, V. L. (2004). Cognitive and usability engineering methods for the evaluation of clinical information systems. Journal of Biomedical Informatics, 37(1), 56-76.
  6. Sauro, J., & Lewis, J. R. (2012). Quantifying the user experience: Practical statistics for user research. Elsevier.
  7. Albert, W., & Tullis, T. (2013). Measuring the user experience: Collecting, analyzing, and presenting usability metrics. Newnes.
  8. Nielsen, J. (2001). Success rate: The simplest usability metric. Jakob Nielsen's Alertbox, 18.
  9. Weinger, M. B., Wiklund, M. E., & Gardner-Bonneau, D. J. (Eds.). (2010). Handbook of human factors in medical device design. CRC Press.