Evaluating EHRs for Usability

Evaluation is a critical element in the success of any project, both for developers and buyers.

Developers need formative evaluation: quick feedback for making mid-project corrections or for improving the next version. Formative evaluation answers vital questions such as: Does this part work? Is this process too complicated?

On the other hand, both buyers and developers are interested in summative evaluation, an assessment of a system’s overall effectiveness, which could help predict how a system would fit into a specific clinic’s unique workflow.

An evaluation should answer questions and reduce uncertainty. It’s easy to answer questions about the concrete items: feature set, system compatibilities, cost, and so forth are hard facts that should be readily available. But how do you evaluate something so important, yet seemingly so hard to define, as usability?

NCCD has put a great deal of time and effort into analyzing usability and defining its components. These attributes, useful, usable, and satisfying, might seem intangible at first glance, but they can be defined and, more importantly, they can be measured. Here’s how:

  • Useful: Does this EHR get the job done? This is usually the easiest part: look hard at the feature set, system compatibilities, and so forth.
  • Usable: More complicated, but there are metrics for this too.
    • First, is the EHR learnable? Exactly how difficult is it to learn, and how long will it take the “average user” to become proficient?
    • Is it efficient to use? In other words, how much effort does it take to get things done? You can measure the time to accomplish a task, the number of steps in a task, the physical and mental effort required, the success rate, and so on.
    • And finally, because users will make mistakes, what is the system’s tolerance for error? Does the system help users prevent, catch, and recover from errors? All of this can be measured, using TURF tools.
  • Satisfying: The most “touchy-feely” quality of all. For this you need to ask the users, typically in a focus group or with a survey.
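To make the efficiency and satisfaction metrics above concrete, here is a minimal sketch of how they might be computed from usability-test data. The session format, field names, and sample values are hypothetical; the satisfaction function implements the standard System Usability Scale (SUS) scoring formula, which is one common survey instrument (the article itself does not prescribe a specific one).

```python
# Hypothetical sketch: computing task-level usability metrics from logged
# test sessions. Record format and sample data are illustrative only.
from statistics import mean

# Each record: (participant, task, seconds, steps, succeeded)
sessions = [
    ("p1", "order_med", 42.0, 9, True),
    ("p2", "order_med", 55.5, 12, True),
    ("p3", "order_med", 80.0, 15, False),
]

def task_metrics(records, task):
    """Mean time on task, mean step count, and success rate for one task."""
    rows = [r for r in records if r[1] == task]
    return {
        "mean_time_s": mean(r[2] for r in rows),
        "mean_steps": mean(r[3] for r in rows),
        "success_rate": sum(r[4] for r in rows) / len(rows),
    }

def sus_score(responses):
    """Standard SUS scoring: 10 answers on a 1-5 scale -> score out of 100.
    Odd-numbered items contribute (response - 1), even-numbered items
    contribute (5 - response); the sum is scaled by 2.5."""
    assert len(responses) == 10
    odd = sum(r - 1 for r in responses[0::2])
    even = sum(5 - r for r in responses[1::2])
    return (odd + even) * 2.5

m = task_metrics(sessions, "order_med")
s = sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1])  # best-possible answers -> 100.0
```

In practice the timing and step data would come from screen-capture or instrumented-EHR logs rather than a hand-built list, but the aggregation is the same.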

All of these qualities are essential for success. For example, a wonderfully usable system might not be useful, in which case it will be difficult to get the job done. And vice versa: if a useful system isn’t usable, then people will eventually get frustrated trying to learn it or feel like using it is a waste of time. Finally, if working with the EHR isn’t satisfying, how long before users start to avoid it and eventually find excuses for not using it at all?

NCCD uses numerous methods for evaluating usability, and more importantly, we understand when each method is appropriate. In general, testing falls into two categories: evaluation by usability experts, and testing that involves real users. Experts evaluate a system according to proven usability principles, by analyzing the environment in which it’s used, and by applying special software tools such as CogTool. With user-based testing, on the other hand, trained observers watch users work with an EHR, or ask users for feedback while or after they are using the system (using surveys, focus groups, think-aloud techniques, etc.).

Finally, when is the best time to start usability testing?

You already know the answer to this one: as early as possible. In the design phase, certainly before prototyping, engineers should be thinking about usability and should be familiar with the guidelines. Usability is a state of mind, an attitude.

“But we already have a working product! We’re in alpha-testing.”

It’s not too late; in fact, NCCD has developed a system to get rapid, accurate feedback on your product before you go to market. And if the product is already out there, usability engineering will help you improve the next release.

As the old adage says, you never get a second chance to make a first impression, so it’s best to get things right the first time. Early investment in usability engineering will make that first impression a good one.
