
Rapid Usability Assessment

One problem with almost any assessment (or “evaluation”; we’re using the words interchangeably here) is that it’s “resource-intensive”, which is a fancy way of saying it requires significant amounts of time, money, and effort. The more in-depth the assessment, the more resources it requires. In the real world, chances are you don’t have the time (typically months) or the money to hire testing staff or outsource a full-scale evaluation. You need answers quickly to keep your project on schedule. But if you don’t evaluate, you might miss a needed correction, or worse, not get certified.

NCCD has examined a range of methods for assessing usability, and we’ve discussed these methods before. We’ve condensed the process to two specific methods that, used together, give the most bang for the buck in the shortest amount of time. This process is called the Rapid Usability Assessment (RUA).

How does it work?

(1) “Time on Task”: a model-based assessment -- After much research, NCCD has settled on the time required to complete a task as a central outcome measure. This “time on task” is a primary component of productivity and of other measures of a system’s efficiency and effectiveness. NCCD uses an open-source program called CogTool, which models an expert user’s keystroke-level interaction with the interface. It follows the physical and mental actions required to perform a meaningful use task, then calculates the time the task requires (a rough sketch of this kind of calculation follows below).

(2) Expert Review -- In this approach, usability experts (such as those at NCCD) perform very specific meaningful use tasks on the system being tested. They assess the system for conformity to established principles of usability (called “heuristics”); a sketch of how the resulting findings might be recorded also follows below.
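
To make method (1) concrete, here is a back-of-the-envelope Python sketch of the kind of keystroke-level accounting that underlies tools like CogTool. The operator times are the commonly cited textbook approximations from the Keystroke-Level Model, not CogTool’s internals or its output, and the “order a medication” step is made up for illustration.

```python
# Minimal Keystroke-Level Model (KLM) sketch. Operator times are the
# commonly cited textbook approximations, not values produced by CogTool;
# the task breakdown below is hypothetical.
OPERATOR_SECONDS = {
    "K": 0.28,  # press a key (average typist)
    "P": 1.10,  # point at a target with the mouse
    "H": 0.40,  # move a hand between keyboard and mouse
    "M": 1.35,  # mental preparation before an action
    "B": 0.10,  # press or release a mouse button
}

def predicted_time(operators):
    """Sum operator times to estimate an expert's time on task, in seconds."""
    return sum(OPERATOR_SECONDS[op] for op in operators)

# Hypothetical "order a medication" step: think, point at the order field,
# click (button down and up), home to the keyboard, type a 5-character code.
task = ["M", "P", "B", "B", "H"] + ["K"] * 5
print(f"Predicted time on task: {predicted_time(task):.2f} s")  # ~4.45 s
```

Under these assumptions the step comes out to roughly 4.5 seconds. CogTool does this kind of accounting for you: you storyboard the interface, demonstrate the task, and it predicts the skilled-user execution time.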
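
For method (2), the deliverable is typically a set of findings, each tied to the heuristic it violates and rated for severity. Below is a minimal sketch of one way to record such findings, assuming Nielsen’s well-known ten heuristics and a 0 to 4 severity scale as placeholders; the write-up above doesn’t specify which heuristics or rating scale NCCD uses.

```python
# Minimal sketch of recording expert-review findings. The heuristic list is
# Nielsen's ten; the task, finding, and 0-4 severity scale are illustrative
# placeholders, not NCCD's protocol.
from dataclasses import dataclass

NIELSEN_HEURISTICS = [
    "Visibility of system status",
    "Match between system and the real world",
    "User control and freedom",
    "Consistency and standards",
    "Error prevention",
    "Recognition rather than recall",
    "Flexibility and efficiency of use",
    "Aesthetic and minimalist design",
    "Help users recognize, diagnose, and recover from errors",
    "Help and documentation",
]

@dataclass
class Finding:
    task: str        # the meaningful use task being exercised
    heuristic: str   # the principle the problem violates
    severity: int    # 0 (not a problem) .. 4 (usability catastrophe)
    note: str

findings = [
    Finding(
        task="e-prescribing: renew a medication",
        heuristic=NIELSEN_HEURISTICS[4],  # "Error prevention"
        severity=3,
        note="Dose field accepts free text with no range check.",
    ),
]

# Report the most severe problems first.
for f in sorted(findings, key=lambda f: -f.severity):
    print(f"[sev {f.severity}] {f.heuristic}: {f.note} ({f.task})")
```

Keeping findings in a structured form like this makes it easy to tally problems by heuristic or by severity when comparing systems.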

OK, all well and good, you say, but these tests report an ideal time on task, as measured by a tool or evaluated by an expert working in a quiet office. Users will want to know how the system performs in the real world: the stressful, interruption-prone conditions of the hospital floor.

We understand that every work environment is different. For that matter, every user is different. Every workday is different, every job and session is different, each with its own stress level and its own set of interruptions; sometimes a task is even performed by more than one user, and so on.

The point is that you can’t predict exactly how a system will perform in every situation. What you can do is establish a baseline for system performance under ideal circumstances, with a single user. That gives you a fair standard for comparing one EHR system with another. While other, more resource-intensive techniques can be used to assess systems in the real world, the RUA is a diagnostic tool that quickly and efficiently gives you solid criteria for meaningful use: time on task and conformity to proven principles of usability.
