Measuring Learning Impact: What Do Executives Actually Want to See? (Part Two)
Part One of this series looked at why it’s hard to talk to executives about measurement, and what the research tells us about where learning impact actually lives. The short version: executives aren’t asking for the impossible, the training event itself is rarely where impact is won or lost, and the gap between what L&D measures and what executives want to see is very much closeable.
Part Two is about how to close that gap without a huge amount of work. What follows draws on decades of peer-reviewed training research and the practical experience of L&D teams that navigate this conversation well. None of these moves requires a large budget or a research team. They require a shift in how we think about the work before the training begins.
Move 1: Define success before you design the program
The most common measurement failure happens before a single learning objective is written.
Most programs are designed first, and measurement questions are asked afterward — if at all. By the time someone asks, “How do we know this worked?”, the window for establishing a meaningful baseline has closed. All that’s left is whatever the LMS happened to capture.
The fix? Make the measurement question the first design question. What does success look like for the business? What metric is this program meant to move, and what is that metric today? If you can’t answer those questions before you build, it’s worth pausing to ask whether the program is the right solution at that moment.
This also changes the conversation with business partners in a productive way. When L&D comes to the table asking about outcomes rather than logistics, it invites the business to co-own the measurement. And when the business co-owns the measurement, it tends to co-own the result.
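One practical way to enforce this discipline is to treat the measurement plan as a required intake artifact, captured before any design work begins. Here is a minimal sketch in Python; the structure and every value are hypothetical, an illustration rather than a prescribed template:

```python
from dataclasses import dataclass

@dataclass
class MeasurementPlan:
    """Captured at program intake, before any design work begins."""
    business_metric: str   # the metric the program is meant to move
    baseline_value: float  # where that metric sits today
    target_value: float    # where the business wants it to go
    metric_owner: str      # the business partner who owns the number
    review_date: str       # when the pre/post comparison will happen

# Hypothetical example: a coaching program aimed at first-call resolution
plan = MeasurementPlan(
    business_metric="first-call resolution rate (%)",
    baseline_value=62.0,
    target_value=70.0,
    metric_owner="Contact Centre Operations",
    review_date="2026-03-31",
)
print(plan)
```

If any field can’t be filled in, that’s the signal to pause: the program may not be the right solution yet.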
Move 2: Measure the behaviour and the baseline, not just the training
Pre- and post-training measurement is the single most practical thing an L&D team can do to make learning impact visible.
A post-training satisfaction score tells you how people felt about the experience. A pre-training baseline compared to a post-training measure of actual job behaviour tells you what changed. Those are genuinely different conversations.
At the individual level, a structured assessment or 360-degree feedback before a leadership program, compared to the same instrument six months later, produces evidence that’s hard to argue with. At the organizational level, the baseline is often already sitting in business data (error rates, conversion rates, client satisfaction scores, employee relations incidents) if someone thinks to capture it before the program launches.
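For the individual-level case, the analysis itself is straightforward once the baseline exists. A minimal sketch, assuming the same instrument was administered to the same people before the program and six months after (all scores hypothetical):

```python
# Pre/post comparison on the same instrument, same participants.
# Requires scipy (pip install scipy). All scores are hypothetical.
from statistics import mean
from scipy import stats

pre  = [3.1, 2.8, 3.4, 3.0, 2.6, 3.2, 2.9, 3.3]  # baseline, 5-point scale
post = [3.6, 3.1, 3.9, 3.4, 3.0, 3.8, 3.2, 3.7]  # six months later

deltas = [b - a for a, b in zip(pre, post)]
result = stats.ttest_rel(post, pre)  # paired t-test: same people, two points in time

print(f"mean change: {mean(deltas):+.2f}")
print(f"paired t = {result.statistic:.2f}, p = {result.pvalue:.4f}")
```

The point isn’t statistical sophistication; it’s that the comparison is impossible if nobody captured the baseline.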
The research suggests that when L&D organizations actually measure at the behaviour and results level, they find meaningful effects. The problem, most of the time, is simply that nobody looked.
Move 3: Study the extremes, not the average
It’s worth acknowledging something here: there are well-established frameworks for measuring learning impact comprehensively. Kirkpatrick’s four levels — reaction, learning, behaviour, results — have been the field’s standard for decades, and Phillips’ ROI methodology extends that model to calculate financial return. Used rigorously, they can produce compelling evidence.
But full Kirkpatrick Level 4 measurement is time-intensive, and Phillips’ ROI methodology requires a level of statistical rigour and analytical capability that frankly exceeds what most L&D departments are resourced to deliver. This isn’t a criticism; it’s a practical reality. And it’s one reason why, despite frameworks that have existed for decades, most organizations still measure only at Level 1.
The good news is that there are highly effective, far less resource-intensive approaches. One of the most useful is Robert Brinkerhoff’s Success Case Method, which produces credible, compelling evidence of business impact without requiring a full evaluation infrastructure.
The method is built around a counterintuitive but powerful idea: the most useful measurement story you can tell is not about the middle of your distribution. It’s about the extremes.
In any training program, some participants take what they learned and produce genuinely significant results. Others, equally trained, apply nothing. Average completion rates, mean scores, and aggregate satisfaction make both groups invisible.
The method works in two simple steps:
A brief survey identifies participants who report using their training to achieve real results and those who make no use of it.
Then, in-depth interviews with both groups reveal what actually happened. In the success cases, what specific results were produced and what conditions made that possible? In the non-success cases, what got in the way?
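The screening step is deliberately lightweight. Here is a minimal sketch of how the two interview pools might be selected, assuming the survey asks participants to rate the business result they achieved with the training on a 0–10 scale (all data hypothetical):

```python
# Select the extremes for in-depth interviews.
# Scores are hypothetical self-reported impact ratings (0-10).
responses = {
    "p01": 9, "p02": 1, "p03": 5, "p04": 8, "p05": 0,
    "p06": 6, "p07": 10, "p08": 2, "p09": 7, "p10": 4,
}

ranked = sorted(responses.items(), key=lambda kv: kv[1], reverse=True)
n = max(1, len(ranked) // 5)  # roughly the top and bottom 20%

success_cases = ranked[:n]       # interview: what results, under what conditions?
non_success_cases = ranked[-n:]  # interview: what got in the way?

print("success cases:", success_cases)
print("non-success cases:", non_success_cases)
```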
The comparison almost always reveals the same thing: the difference between the two groups is rarely training quality. It’s the performance environment. Different managers. Different levels of post-training support. Different access to opportunities to actually practise the new skills.
Brinkerhoff’s own example is instructive. A technology company invested heavily in technical training for service technicians. The technicians who actually used the training succeeded with it, including cases in which a trained technician prevented a major customer outage worth millions of dollars. But 40% of trained technicians had never used the training at all, because their managers had enrolled them before the relevant equipment had even been purchased. The waiting list grew so long that technicians who genuinely needed the course sometimes had to wait months and re-enroll after their skills had lapsed.
The solution required no changes to the training. It required changing the enrollment policy. A success case analysis surfaced that. A completion rate never would have.
Move 4: Translate into the language of the business
L&D and executives often describe the same investment in different terms.
L&D speaks in hours completed, modules launched, satisfaction scores, and knowledge retention rates. Executives speak in terms of revenue, cost, risk, and productivity. Both are legitimate. Only one of them lands in the room where budget decisions are made.
The translation is ours to make, and it’s more accessible than it might seem. One simple example: the fully loaded labour cost of training delivery. When 200 employees spend an hour in training, that’s 200 labour hours — a real dollar figure. Framing it that way in a conversation with a business leader immediately repositions training as a capital-allocation decision rather than a support-function activity. That framing tends to raise the quality of the conversation on both sides.
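For readers who want to see the arithmetic, here is a back-of-envelope sketch; every input is invented for illustration, and “fully loaded” here means salary plus benefits, payroll taxes, and overhead:

```python
# Back-of-envelope labour cost of a single training session.
# All inputs are hypothetical.
participants = 200
session_hours = 1.0
avg_salary = 85_000        # hypothetical average annual salary
loading_factor = 1.35      # hypothetical fully-loaded multiplier
annual_work_hours = 2_000

hourly_cost = avg_salary * loading_factor / annual_work_hours
session_cost = participants * session_hours * hourly_cost

print(f"fully loaded cost of one session: ${session_cost:,.0f}")
# -> roughly $11,475 for one hour of training
```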
Similarly, translating a behaviour change into an approximate revenue or cost-avoidance number gives executives a denominator for the investment. A targeted shift in advisor behaviour can add measurable revenue within a month. A reduction in employee relations incidents across locations where managers attended a leadership program can be expressed in avoided legal costs and management time. These translations require partnership with the business, but the initiative to pursue them belongs with L&D.
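As one hedged illustration of that translation, borrowing the shape of Phillips’ ROI arithmetic (net benefits divided by program cost), with every number invented:

```python
# Translating a behaviour change into a cost-avoidance figure.
# All inputs are hypothetical and would come from the business.
incidents_before = 24      # employee relations incidents per year, baseline
incidents_after = 15       # same measure after the leadership program
cost_per_incident = 8_000  # hypothetical legal cost plus management time
program_cost = 45_000      # hypothetical all-in program cost

benefit = (incidents_before - incidents_after) * cost_per_incident
roi_pct = (benefit - program_cost) / program_cost * 100

print(f"estimated cost avoided: ${benefit:,.0f}")  # $72,000
print(f"ROI: {roi_pct:.0f}%")                      # 60%
```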
McKinsey’s research on high-performing L&D functions found that organizations with the strongest executive relationships reported on three kinds of metrics: business outcomes (revenue, productivity, quality), learning outcomes (skill acquisition, behaviour change), and operational efficiency (cost-per-learner, time-to-proficiency). Reporting against all three is what keeps L&D in the strategic conversation.
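In practice, that can be as simple as a three-tier scorecard. A sketch with hypothetical metric names and values; the substance is reporting all three tiers together rather than learning metrics alone:

```python
# A three-tier L&D scorecard. Metric names and values are hypothetical.
scorecard = {
    "business outcomes": {
        "revenue per advisor": "+4.2% vs. baseline",
        "client satisfaction": "+6 pts",
    },
    "learning outcomes": {
        "skill assessment gain": "+0.8 on a 5-point scale",
        "observed behaviour change": "71% of participants",
    },
    "operational efficiency": {
        "cost per learner": "$310",
        "time to proficiency": "9 weeks, down from 13",
    },
}

for tier, metrics in scorecard.items():
    print(tier.upper())
    for name, value in metrics.items():
        print(f"  {name}: {value}")
```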
Move 5: Build the transfer environment, not just the training program
This is the move that makes the biggest difference and gets the least attention.
Decades of research on training transfer converge on a consistent finding: the work environment is the strongest predictor of whether trained skills actually get used on the job. Not the quality of the training design. Not the skill of the facilitator. The environment people return to after the learning event.
The specific factors that drive transfer include manager support, peer support, genuine opportunity to practise, strategic alignment between the training and what the organization actually rewards, and accountability mechanisms that keep new skills visible after the program ends. The finding that peer support is at least as strong a predictor of transfer as manager support is particularly worth acting on, because most organizations design accountability structures for managers and leave peer reinforcement entirely to chance.
If a program design addresses only what happens in the learning event itself, it has covered one of the three inputs that determine whether training transfers. The other two — participant readiness and the work environment — are left to their own devices.
The good news is that this creates a real opportunity. When an L&D team can show that participants whose managers provided structured post-training support achieved meaningfully better outcomes than those whose managers didn’t, that’s not just an evaluation finding; it’s a business case for manager involvement in the learning process. It’s a conversation that belongs at the leadership table, positioning L&D as a strategic partner in organizational performance rather than a provider of training events.
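As a sketch of what that evaluation finding might look like, comparing an outcome metric across participants whose managers did and didn’t provide structured post-training support (groups and scores entirely hypothetical):

```python
# Compare outcomes across manager-support cohorts.
# Requires scipy (pip install scipy). All scores are hypothetical.
from statistics import mean
from scipy import stats

supported = [74, 81, 69, 77, 85, 72, 79]    # managers provided structured support
unsupported = [61, 66, 58, 70, 63, 59, 65]  # managers did not

result = stats.ttest_ind(supported, unsupported)

print(f"supported mean:   {mean(supported):.1f}")
print(f"unsupported mean: {mean(unsupported):.1f}")
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.4f}")
```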
Building this kind of measurement capability doesn’t happen overnight. But it starts with a single shift: treating measurement as part of program design, not something that happens after the fact. When we do that, we stop reporting on training and start contributing to a conversation about how the organization develops its people. That’s a conversation executives want to have. And it’s one we’re well-placed to lead.
Douglas Robertson is AVP, Business Development at Practica Learning, a Toronto-based corporate learning company specializing in deliberate practice, skills coaching, and measurable behaviour change. With 25 years in financial services and a decade on the learning provider side, he has sat on both sides of the L&D investment table.
References (Part Two)
Arthur, W., Bennett, W., Edens, P.S., & Bell, S.T. (2003). Effectiveness of training in organizations: A meta-analysis of design and evaluation features. Journal of Applied Psychology, 88(2), 234–245.
Baldwin, T.T., & Ford, J.K. (1988). Transfer of training: A review and directions for future research. Personnel Psychology, 41(1), 63–105.
Brinkerhoff, R.O. (2005). The success case method: A strategic evaluation approach to increasing the value and effect of training. Advances in Developing Human Resources, 7(1), 86–101.
Burke, L.A., & Hutchins, H.M. (2007). Training transfer: An integrative literature review. Human Resource Development Review, 6(3), 263–296.
Chiaburu, D.S., & Marinova, S.V. (2005). What predicts skill transfer? An exploratory study of goal orientation, training self-efficacy, and organizational supports. International Journal of Training and Development, 9(2), 110–123.
Hawley, J.D., & Barnard, J.K. (2005). Work environment characteristics and implications for training transfer: A case study of the nuclear power industry. Human Resource Development International, 8(1), 65–80.
Hughes, A.M., Zajac, S., Woods, A.L., & Salas, E. (2020). The role of work environment in training sustainment: A meta-analysis. Human Factors, 62(1), 166–183.
Kirkpatrick, D.L. (1959/1996). Evaluating Training Programs: The Four Levels. San Francisco: Berrett-Koehler.
McKinsey & Company (2019). The essential components of a successful L&D strategy. McKinsey.com.
Phillips, J.J. (1997). Handbook of Training Evaluation and Measurement Methods (2nd ed.). Houston: Gulf Publishing.
Salas, E., Tannenbaum, S.I., Kraiger, K., & Smith-Jentsch, K.A. (2012). The science of training and development in organizations: What matters in practice. Psychological Science in the Public Interest, 13(2), 74–101.