Training News

More from ATD 2016: Measuring Your Training ROI

In yesterday’s Advisor, we presented thoughts on bite-sized learning from a session at the annual Association for Talent Development (ATD) Conference and Exposition held recently in Denver, Colorado. Today, another ATD speaker has advice for determining the return on investment (ROI) of training programs.

Patti Phillips, president and CEO of the ROI Institute, began her session with a quick case study: at First Bank, the number of consumer loans goes up, and the CEO asks why. Of course, everybody tries to take credit, and it's safe to assume that if loans had gone down, the fingers would start pointing.

Multiple factors contribute to changes in performance, says Phillips, but to demonstrate the value of your program, you must build credibility with higher-ups and peers. For that to succeed, you must be able to tell the boss what actually caused the good performance.

Phillips’ evaluation framework for any new program or initiative has five levels:

  1. Reaction and planned action. How do participants react to the program, and what’s the planned action for change?
  2. Learning. This measures changes in knowledge and skills.
  3. Application. Be sure to measure the program’s implementation, actions, and changes in employee behavior while on the job.
  4. Business impact. How have variables related to business impact changed?
  5. ROI. Finally, determine the monetary benefits of the impact of the program.
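For teams that track evaluations programmatically, the five levels can be kept as simple data. A minimal sketch in Python; the example metrics are illustrative assumptions, not part of Phillips' framework:

```python
# Phillips' five evaluation levels, paired with an example metric for each
# (the metric descriptions are illustrative, not from the session).
EVALUATION_LEVELS = {
    1: ("Reaction and planned action", "post-session survey scores"),
    2: ("Learning", "pre- vs. post-assessment score changes"),
    3: ("Application", "observed on-the-job behavior changes"),
    4: ("Business impact", "movement in business variables, e.g., loan volume"),
    5: ("ROI", "net monetary benefits divided by program costs"),
}

for level, (name, example_metric) in EVALUATION_LEVELS.items():
    print(f"Level {level}: {name} ({example_metric})")
```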


Trainers can calculate the ROI of their programs by dividing the net project benefits by the project costs. So, if you implement a project that costs $230,000 and the benefits of that project in its first year add up to $430,000, your ROI would be 87%, like so:

ROI (%) = (430,000 - 230,000) / 230,000 x 100 ≈ 87%
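That arithmetic is easy to wrap in a small helper; a minimal sketch in Python (the function name is my own):

```python
def roi_percent(benefits, costs):
    """Net program benefits divided by program costs, as a percentage."""
    return (benefits - costs) / costs * 100

# The article's example: $430,000 in first-year benefits on $230,000 of costs.
print(round(roi_percent(430_000, 230_000)))  # -> 87
```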

Of course, in order to determine these numbers, you must isolate the effects of your program, and there are several ways to do this.

3 Techniques for Isolating a Program’s Effects

Control Groups

When demonstrating ROI, the big question is, “What is the difference in the change in performance?” “Whatever your answer is to that question,” says Phillips, “this is what you are going to take credit for.” Control groups can show this difference: a control group of employees does not receive the training program, while the experimental group does. If performance improves in the experimental group but not in the control group, that improvement suggests the program had an effect.

However, avoid tipping your hand and introducing outside influences into the test, says Phillips. If someone in the control group intuits the result you're after, he or she may change behavior independently, and you will no longer be able to isolate the effect of the program.
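The control-group comparison amounts to a difference-in-differences calculation: the trained group's change minus the untrained group's change. A sketch with hypothetical loan counts (the numbers and function name are mine, not Phillips'):

```python
def program_effect(control_before, control_after, exp_before, exp_after):
    """Difference-in-differences: the experimental group's change minus the
    control group's change is the improvement attributable to the program."""
    return (exp_after - exp_before) - (control_after - control_before)

# Hypothetical monthly loan counts: both branches start at 100 loans;
# the trained branch rises to 112, the untrained branch drifts up to 103.
print(program_effect(100, 103, 100, 112))  # -> 9 loans attributable to training
```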

Trend Line Analysis

This technique shows the change in performance in the data itself: the difference between the average projected from the preprogram trend (what would have happened anyway) and the actual postprogram average.

Phillips says that trend line analysis will only be effective in demonstrating program effectiveness if:

  1. Data are available.
  2. The data are stable.
  3. The preprogram trend can reasonably be forecast to continue.
  4. Nothing else major happens during the evaluation period.


So, ask yourself when considering a trend line analysis: Do you have the data? Are they stable? Is there enough history to forecast that the trend would likely have continued?
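When those conditions hold, the comparison can be sketched with a simple least-squares trend fit. The monthly figures below are hypothetical, used only to show the mechanics:

```python
def linear_trend(y):
    """Ordinary least-squares slope and intercept for evenly spaced periods 0..n-1."""
    n = len(y)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(y) / n
    slope = (sum((x - x_mean) * (v - y_mean) for x, v in zip(xs, y))
             / sum((x - x_mean) ** 2 for x in xs))
    return slope, y_mean - slope * x_mean

def trend_line_effect(pre, post):
    """Actual postprogram average minus the average projected from the preprogram trend."""
    slope, intercept = linear_trend(pre)
    n = len(pre)
    projected = [slope * (n + i) + intercept for i in range(len(post))]
    return sum(post) / len(post) - sum(projected) / len(projected)

# Hypothetical loan counts: four preprogram months trending up by 2/month,
# then two postprogram months. The trend projects 108 and 110 (average 109);
# the actual average is 117, so 8 loans/month are attributed to the program.
print(trend_line_effect([100, 102, 104, 106], [115, 119]))  # -> 8.0
```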


Estimates

Estimates are just that: estimates. So how confident are you in your assessment? It's an important question to ask. Confidence is a big factor, says Phillips, which is why you must use only the most credible sources of data. Run a focus group of trusted employees who have received the training, asking them both for their consensus on the impact of the training and for their confidence that the estimate is correct.
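One common way to use those two answers, a standard conservative practice rather than something this session summary spells out, is to discount each estimated impact by the estimator's confidence:

```python
def adjusted_impact(estimate, confidence):
    """Discount an estimated impact by the estimator's confidence (0.0-1.0),
    so a shaky estimate claims less credit."""
    return estimate * confidence

# A participant attributes $50,000 of improvement to training but is only
# 70% confident in that figure, so only $35,000 is claimed.
print(adjusted_impact(50_000, 0.70))  # -> 35000.0
```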

Data Credibility Factors

When calculating ROI and measuring program effectiveness, it all comes down to the credibility of your results: you can't take credit if no one believes you. Credibility of outcome data is influenced by the:

  • Reputation of the source of the data;
  • Reputation of the source of the study;
  • Motives of the researchers;
  • Personal bias of the audience;
  • Methodology of the study;
  • Assumptions made in the analysis;
  • Realism of the outcome data;
  • Type of data; and
  • Scope of the analysis.