
Value Demonstration in Healthcare Simulation: Linking Training to Performance

By Lisa T. Barker, Director of Education Division, Jump Trading Simulation and Education Center (Jump)


Simulation in Healthcare

Since the unveiling of the first human patient simulator (“Sim One”) in 1967, the technologies engaged under the umbrella of healthcare simulation have advanced dramatically. 40 years later, there are a multitude of simulation strategies, ranging from progressively more realistic human patient simulators that can now simulate a breadth of clinical pathology, to the rapidly evolving platforms for augmented and virtual reality and avatar-based serious games.

As with other advancing technologies, the associated implementation costs rise and fall as initial solutions are replaced by more robust versions. Because of the complex nature of individual learning and the associated behavior change, however, the relative benefit of each modality remains an area of active research. Similarly, healthcare system leaders now seek to understand the value of investment in simulation technologies within their own contexts. Thus, there is impetus for healthcare simulation to demonstrate its value at the local level.

A Value Framework

The value “equation” for healthcare is described as (Quality + Service) / Cost. The limitation of this analysis lies in the nature of the variables: all are lag measures that reflect the result of daily clinical actions, influenced by both individual competencies and system factors. To move those lag measures, we must first identify the lead measures, the supporting behaviors that define optimal performance, and rectify any system factors that negatively impact the desired performance. Healthcare simulation has the opportunity both to identify system factors and to improve individual and team competencies. The key to demonstrating the value of simulation-based interventions is in identifying their linkage with clinical metrics.
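Written out in display form, the relationship above reads:

```latex
\[
\text{Value} = \frac{\text{Quality} + \text{Service}}{\text{Cost}}
\]
```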

Internally at our large-scale simulation center, we have adopted the published framework from Phillips et al., which describes four levels of value that support a measurable return on investment. The practical reality for healthcare simulation is that there is opportunity to demonstrate value at all four outcome levels, as sketched after the list below:

1. Reaction: Reflects employee engagement and is a predictor of behavior change

2. Learning: Reflects individual and/or team competency

3. Application: Identifies the improved clinical behaviors that move clinical metrics

4. Impact: Identifies measurable benefit to the system through cost savings, cost avoidance, or intangible measures such as patient satisfaction
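For teams that want to record programs against these levels, a minimal sketch in Python is shown here. The structure and field names are entirely hypothetical and are not drawn from Phillips et al. or any specific tool; the example values mirror the CAUTI chain of impact described just below.

```python
from dataclasses import dataclass, field

@dataclass
class ProgramEvaluation:
    """One simulation-based program, evaluated at the four outcome levels."""
    program: str
    reaction: dict = field(default_factory=dict)      # learner engagement and relevance ratings
    learning: dict = field(default_factory=dict)      # competency demonstrated in the simulation lab
    application: dict = field(default_factory=dict)   # behaviors observed in the clinical environment
    impact: dict = field(default_factory=dict)        # system-level outcomes and cost measures

# Hypothetical record, using the CAUTI figures discussed below.
cauti_program = ProgramEvaluation(
    program="CAUTI prevention simulation training",
    reaction={"rated_highly_relevant_pct": 95},
    learning={"correct_insertion_on_simulator_pct": 100},
    application={"sterile_technique_observed_pct": 90},
    impact={"annual_events_avoided": 72, "cost_per_event_usd": 8200},
)
```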

"The key to demonstrating the value of simulation-based interventions is in identifying their linkage with clinical metrics"

Using the framework, the chain of impact demonstrating these multiple levels of value for an intervention addressing Catheter-Associated Urinary Tract Infections (CAUTI) could look like this:

1. Reaction: 95 percent of learners rated the program as highly relevant and very important to their professional success

2. Learning: 100 percent of learners were able to correctly insert a catheter using the simulator

3. Application: 90 percent of directly observed catheter insertions on the clinical units maintained sterile technique

4. Impact: Data analytics report annual CAUTI incidence reduced by 72 events, at a cost avoidance of $8,200 each (the resulting annual figure is worked out in the sketch after this list). On the annual opinion survey, 85 percent of employees agree or strongly agree that they receive the training they need to do their job.
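The impact line implies a straightforward cost-avoidance calculation, worked through below. The program-cost figure is purely hypothetical and appears only to show how a return-on-investment percentage could be derived; the example above reports no such number.

```python
# Cost avoidance implied by the CAUTI example above.
events_avoided = 72        # annual reduction in CAUTI events
cost_per_event = 8_200     # estimated cost avoided per event (USD)

cost_avoidance = events_avoided * cost_per_event
print(f"Annual cost avoidance: ${cost_avoidance:,}")   # Annual cost avoidance: $590,400

# Hypothetical program cost, used only to illustrate an ROI calculation.
program_cost = 150_000
roi_pct = (cost_avoidance - program_cost) / program_cost * 100
print(f"Illustrative ROI: {roi_pct:.0f}%")             # Illustrative ROI: 294%
```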

The above example illustrates the potential for simulation-based interventions to create demonstrable value in healthcare. Consistently evaluating programs with this value framework will also allow for comparison of the effectiveness and efficiency of the many available simulation technologies.

Linking Training Outcomes and Clinical Performance

Linking training outcomes with clinical performance should be bi-directional.  Performance in the clinical space should drive training initiatives when individual competencies are identified as the relevant gap. Concurrently, demonstrated competency in the learning environment should be compared against subsequent clinical performance. This may be achieved through interfaces between electronic medical record (EMR) platforms, learning management systems, and analytics teams that identify correlations between individual training outcomes, gaps in clinical performance, and patient outcomes.
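As a rough illustration of what such an interface might produce, the sketch below joins hypothetical learning management system records to hypothetical observation data derived from the clinical environment and quantifies the association. All identifiers, field names, and values are invented for illustration; a real implementation would sit behind HIPAA-compliant data governance, as noted below.

```python
import pandas as pd

# Hypothetical learning management system extract:
# simulation-based training outcomes per clinician.
training = pd.DataFrame({
    "clinician_id": [101, 102, 103, 104],
    "sim_checklist_score": [98, 85, 92, 76],      # percent correct on the simulator
})

# Hypothetical clinical performance extract (e.g., from EMR-linked audits):
# sterile-technique compliance observed on the units.
clinical = pd.DataFrame({
    "clinician_id": [101, 102, 103, 104],
    "sterile_technique_pct": [100, 88, 95, 72],
})

# Link training outcomes to clinical performance and quantify the relationship.
linked = training.merge(clinical, on="clinician_id")
correlation = linked["sim_checklist_score"].corr(linked["sterile_technique_pct"])
print(f"Correlation between simulator and bedside performance: {correlation:.2f}")
```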

An opportunity exists for such a platform to be created. This would require HIPAA-compliant data management, and, to drive positive behavior change, the data would need to be analyzed and disseminated within a culture of safety.