How to Evaluate Training Effectiveness (with the Right Metrics)

Let’s start with why you need to measure training: to make sure you’re using the best training methods and tools available, in the way that best fits your users. Or, in other words – to ensure you’re getting results.

 

Some training methods can only be measured after the fact, by checking performance and product adoption rates. This applies to training methods that are detached from actual work in the product: instructor-led classroom training, webinars and video training. There is simply no way to check whether the training was effective before users start applying what they’ve learned to hands-on work in the product.

 

In-app online training, however, can be measured and tested live. If you combine live metrics with performance metrics, you get the ultimate training effectiveness measurement that tells you what you need to know about your training plan.

 


In-Training Metrics: Guide Metrics

Online training platforms are much more transparent about the training process while it is happening, which makes it far easier to tell whether users are actually engaged in the training.

 

Main metrics to track for in-app, online training (a sketch of computing the first two from a raw event log follows the list):

Guides opened

How many and which guides were activated.

 

Guide completions

Opening rates don’t tell you whether users saw a guide through to the end. Completions do.

 

Drill-down user analytics

Tracking guide launches at the level of individual users or specific users of interest.

 

Specific events

For guides that hinge on a specific user action (clicking a button, signing up, etc.), track whether users actually performed that action.
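To make the first two metrics concrete, here is a minimal sketch in TypeScript of computing opening and completion rates from a flat event log. The record shape and the event names ("guide_opened", "guide_completed") are assumptions made for illustration, not any particular platform’s API:

```typescript
// Minimal sketch: opening and completion rates per guide from a flat event log.
// The event names and record shape are illustrative assumptions.
interface GuideEvent {
  userId: string;
  guideId: string;
  type: "guide_opened" | "guide_completed";
}

interface GuideStats {
  opens: number;
  completions: number;
  completionRate: number;
}

function guideRates(events: GuideEvent[]): Map<string, GuideStats> {
  const stats = new Map<string, GuideStats>();
  for (const e of events) {
    const s = stats.get(e.guideId) ?? { opens: 0, completions: 0, completionRate: 0 };
    if (e.type === "guide_opened") s.opens++;
    else s.completions++;
    stats.set(e.guideId, s);
  }
  for (const s of stats.values()) {
    s.completionRate = s.opens > 0 ? s.completions / s.opens : 0;
  }
  return stats;
}

// Two users open the hypothetical "create-report" guide; one finishes it.
const demo: GuideEvent[] = [
  { userId: "u1", guideId: "create-report", type: "guide_opened" },
  { userId: "u2", guideId: "create-report", type: "guide_opened" },
  { userId: "u1", guideId: "create-report", type: "guide_completed" },
];
console.log(guideRates(demo).get("create-report"));
// { opens: 2, completions: 1, completionRate: 0.5 }
```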

 

[Screenshot: Iridize user activity dashboard]

 

In-Training Metrics: User Feedback

In-app online training also allows you to gather feedback while the training is happening – ask users to:

Grade guides

Rate overall satisfaction

 

This is a great practice in more ways than one: getting feedback “while it’s hot” is always better. For one thing, users are more inclined to offer it – certainly more than in response to an email requesting feedback three days later. For another, the experience is still fresh in their minds, so you are more likely to get an earnest, accurate response.
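As a sketch of what you might do with those ratings once collected, here is a minimal aggregation example; the 1–5 scale and the record shape are assumptions for illustration, not a specific product’s schema:

```typescript
// Minimal sketch: average in-guide satisfaction rating per guide.
// The 1-5 scale and the record shape are illustrative assumptions.
interface GuideRating {
  guideId: string;
  rating: number; // 1 (poor) to 5 (great)
}

function averageRating(ratings: GuideRating[], guideId: string): number | null {
  const relevant = ratings.filter((r) => r.guideId === guideId);
  if (relevant.length === 0) return null; // no feedback yet
  return relevant.reduce((sum, r) => sum + r.rating, 0) / relevant.length;
}

console.log(
  averageRating(
    [
      { guideId: "onboarding", rating: 5 },
      { guideId: "onboarding", rating: 3 },
    ],
    "onboarding"
  )
); // 4
```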

 

[Screenshot: Iridize feedback dashboard – user feedback results and stats]

 

Post-Training Metrics: Support

What does support have to do with training metrics? Everything. Ineffective training results in barrages of support requests and users who lean heavily on support reps to navigate the software. These users *still* need guidance and help; they just look for it outside the training framework.

 

There are two support metrics that can help you determine the success of a training program:

 

Support requests

The increase or decline in support requests (on the topics the training covered) right after software implementation.
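As an illustration, here is a minimal sketch comparing ticket volume per topic before and after a go-live date. The field names are assumptions, and in practice you would compare equal-length time windows on either side of the date:

```typescript
// Minimal sketch: support ticket counts per topic, before vs. after go-live.
// Field names are illustrative; compare equal-length windows in practice.
interface Ticket {
  topic: string;
  openedAt: Date;
}

function ticketDeltaByTopic(
  tickets: Ticket[],
  goLive: Date
): Map<string, { before: number; after: number }> {
  const delta = new Map<string, { before: number; after: number }>();
  for (const t of tickets) {
    const d = delta.get(t.topic) ?? { before: 0, after: 0 };
    if (t.openedAt < goLive) d.before++;
    else d.after++;
    delta.set(t.topic, d);
  }
  return delta;
}

console.log(
  ticketDeltaByTopic(
    [
      { topic: "reports", openedAt: new Date("2024-02-20") },
      { topic: "reports", openedAt: new Date("2024-03-10") },
    ],
    new Date("2024-03-01")
  )
); // Map { "reports" => { before: 1, after: 1 } }
```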

 

Guide activation

If you are using in-app onboarding tools as part of an online training program, you will see guide activation decrease over time. This is normally a good thing: it means software adoption was successful and users are gaining confidence, no longer needing the support tools.

 

You may ask – wouldn’t I need to check the effectiveness of the online guides themselves? What if the usage decrease is really because users aren’t getting the help they need from the guides?

No, and here’s why:

 

  1. The churn would be much faster – users abandon unhelpful guides almost immediately, not gradually over time
  2. Guide metrics factor in – opening and completion rates tell you directly whether the guides themselves are helpful (see the guide metrics above)
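This cross-check can be codified. Here is a minimal sketch in which a decline in activations counts as healthy only if completion rates stayed high while it happened; the 0.6 threshold is an illustrative assumption you would tune yourself:

```typescript
// Minimal sketch of the cross-check above: is a decline in guide activations
// healthy (users graduated) or a warning sign (users gave up on the guide)?
// The 0.6 completion-rate threshold is an illustrative assumption.
interface GuideTrend {
  weeklyActivations: number[]; // oldest week first, assumed non-empty
  completionRate: number; // completions / opens over the same period
}

function interpretDecline(t: GuideTrend): string {
  const first = t.weeklyActivations[0];
  const last = t.weeklyActivations[t.weeklyActivations.length - 1];
  if (last >= first) return "activations steady or rising";
  return t.completionRate >= 0.6
    ? "healthy decline: users completed the guide and no longer need it"
    : "warning: users may be abandoning the guide, not outgrowing it";
}

console.log(interpretDecline({ weeklyActivations: [120, 80, 30], completionRate: 0.85 }));
// "healthy decline: users completed the guide and no longer need it"
```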

 


Post-Training Metrics: Product Usage

The other main way to measure training effectiveness is by how well and how often users interact with the software. Essentially, these are standard retention metrics, but they are also undervalued training metrics: applied at the stage where users graduate from training and are required to start using the product, they can deliver a lot of insight about training success.

 

Events

Events have become the most popular and thorough way to evaluate user proficiency in SaaS platforms, and most analytics tools use them as a base parameter. Events are normally a series of actions culminating in one specific, easy-to-track action: creating and saving items, generating and exporting reports, visiting certain areas of the platform.
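For example, here is a minimal sketch of checking whether a user’s event stream completed such a sequence, ending in the key trackable action. The event names are made up for illustration:

```typescript
// Minimal sketch: did a user's event stream complete a funnel, in order,
// ending in the key action? Event names are illustrative assumptions.
const REPORT_FUNNEL = ["report_created", "report_configured", "report_exported"];

function completedFunnel(userEvents: string[], funnel: string[]): boolean {
  let step = 0;
  for (const e of userEvents) {
    if (e === funnel[step]) step++; // advance only on the next expected event
    if (step === funnel.length) return true;
  }
  return false;
}

console.log(
  completedFunnel(
    ["report_created", "login", "report_configured", "report_exported"],
    REPORT_FUNNEL
  )
); // true
console.log(completedFunnel(["report_created"], REPORT_FUNNEL)); // false
```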

 

Frequency

How often does the user log in? When was the last time they logged in?
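A minimal sketch of both questions, assuming you have raw login timestamps per user (an assumed data shape, not a specific tool’s API):

```typescript
// Minimal sketch: logins per week and days since last login,
// from raw login timestamps (an assumed data shape).
const MS_PER_DAY = 86_400_000;

function loginStats(logins: Date[], now: Date): { perWeek: number; daysSinceLast: number | null } {
  if (logins.length === 0) return { perWeek: 0, daysSinceLast: null };
  const times = logins.map((d) => d.getTime()).sort((a, b) => a - b);
  const spanDays = Math.max(1, (now.getTime() - times[0]) / MS_PER_DAY);
  return {
    perWeek: (logins.length / spanDays) * 7,
    daysSinceLast: (now.getTime() - times[times.length - 1]) / MS_PER_DAY,
  };
}

console.log(loginStats([new Date("2024-03-01"), new Date("2024-03-05")], new Date("2024-03-08")));
// { perWeek: 2, daysSinceLast: 3 }
```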

 

Session duration

How long do active sessions last? Frustrated users won’t stick around: they opt out when the frustration of failing to get what they need from the software becomes too much.
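Here is a minimal sketch of deriving session durations from timestamped events, assuming the analytics tracker already assigns session IDs:

```typescript
// Minimal sketch: session length measured from the first to the last event
// in each session. Session IDs are assumed to come from the tracker.
interface SessionEvent {
  sessionId: string;
  timestamp: Date;
}

function sessionDurationsMinutes(events: SessionEvent[]): Map<string, number> {
  const bounds = new Map<string, { start: Date; end: Date }>();
  for (const e of events) {
    const b = bounds.get(e.sessionId);
    if (!b) {
      bounds.set(e.sessionId, { start: e.timestamp, end: e.timestamp });
    } else {
      if (e.timestamp < b.start) b.start = e.timestamp;
      if (e.timestamp > b.end) b.end = e.timestamp;
    }
  }
  const durations = new Map<string, number>();
  for (const [id, b] of bounds) {
    durations.set(id, (b.end.getTime() - b.start.getTime()) / 60_000);
  }
  return durations;
}

console.log(
  sessionDurationsMinutes([
    { sessionId: "s1", timestamp: new Date("2024-03-01T10:00:00Z") },
    { sessionId: "s1", timestamp: new Date("2024-03-01T10:25:00Z") },
  ])
); // Map { "s1" => 25 }
```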

 

Identifying usage patterns

For instance –

Novice users spend a lot of time in the app, but they are still insecure and trying to master features, so they spend much of that time in the same area (metric signature: longer sessions on fewer pages).

Advanced users spend more time in the analytics section and are more confident about moving around the platform to get things done (metric signature: shorter sessions across multiple pages).
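Here is a minimal sketch of that heuristic; the 20-minute and 3-page thresholds are illustrative assumptions you would tune to your own product:

```typescript
// Minimal sketch of the patterns above: long sessions over few pages suggest
// a novice; short sessions over many pages suggest an advanced user.
// The thresholds are illustrative assumptions.
interface SessionSummary {
  durationMinutes: number;
  distinctPages: number;
}

function classifyUsagePattern(s: SessionSummary): "novice-like" | "advanced-like" | "unclear" {
  if (s.durationMinutes > 20 && s.distinctPages <= 3) return "novice-like";
  if (s.durationMinutes <= 20 && s.distinctPages > 3) return "advanced-like";
  return "unclear";
}

console.log(classifyUsagePattern({ durationMinutes: 35, distinctPages: 2 })); // "novice-like"
console.log(classifyUsagePattern({ durationMinutes: 8, distinctPages: 7 })); // "advanced-like"
```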

 

Advocacy

Users normally invite their team or colleagues to collaborate on a tool once they feel confident with it – when they are in a position to answer questions and tutor their peers.

 

Post-Training Metrics: Performance

Performance measurement is most relevant when evaluating employee training; it’s one of the classic sales training metrics.

 

Most companies have their own performance metrics for this, gathered and optimized over years of growth, and industry standards also play an important role. Either way, performance is an indication of successful training.

 

Noa is Iridize's Content Manager. With a background in digital strategy planning and database management, Noa translates Iridize's vision, stories and data into words. Digital learning and user experience are a particular passion of hers.
