Measuring Learning Impact: What Do Executives Actually Want to See?

Estimated reading time: 4 mins

I was lucky enough to lead a working session recently with a group of L&D leaders from healthcare, financial services, retail, insurance, and technology.

We spent most of our time on a question that comes up in almost every conversation I have with people in this field: how do we show executives that what we’re doing is actually working?

It’s a question worth taking seriously, because the research behind it is both clarifying and, honestly, a little reassuring. The challenge most L&D teams face isn’t that executives are asking for something unreasonable. It’s that the measurement language L&D has traditionally used and the language executives naturally speak are two different things. Bridging that gap is very much within our reach.

What executives are really asking for

Across industries, the ask is remarkably consistent. Executives want to know whether the organization is performing better because people learned something. In healthcare, that might be a revenue metric tied to a specific clinical behaviour. In financial services, it might be client retention, conversion rates, or a reduction in employee relations incidents. In retail, it might be time-to-proficiency for new hires. The specific metric varies. The underlying question doesn’t: did this investment move something that matters to us?

McKinsey’s research across more than 1,400 executives found that capability building ranked among the top three strategic priorities for most organizations. And yet only 13% of those organizations attempted to quantify the financial return on their learning investments. That gap isn’t evidence that executives don’t value learning. It’s an opportunity for L&D to step into.


Where the gap comes from

Arthur, Bennett, Edens and Bell’s landmark 2003 meta-analysis, published in the Journal of Applied Psychology, reviewed 397 training evaluation studies and found that 78% of organizations measured learner reaction (essentially, whether people enjoyed the experience). Only 9% measured behaviour change. Only 7% measured business results.

That data is from 2002, but the pattern hasn’t changed much since.

This isn’t a failure of effort or intent. Reaction data is genuinely easy to collect: you gather it before people leave the room. Behavioural and results data are harder. They require follow-up 60 to 180 days after training. Arthur and colleagues found that organizations measuring behavioural outcomes did so on average 133 days after training, and those measuring results waited 159 days. They require someone to own the question across organizational boundaries. They require a baseline that most programs never think to establish before they launch. These are real constraints, and they’re worth naming honestly.

But they’re also solvable. The organizations that are having the best conversations with their executives about learning impact have simply built the processes and infrastructure to solve for the constraints. And that infrastructure doesn’t have to be elaborate.

Why the impact often doesn’t show up… and why that’s not the program’s fault

Here’s something the research makes very clear: training works. Arthur and colleagues’ 2003 meta-analysis found consistent effect sizes in the medium-to-large range (d = 0.60 to 0.63) across all four Kirkpatrick levels. The impact is there. The problem, as that same study showed, is that 78% of organizations measured only learner reaction, while just 9% measured behaviour change and 7% measured business results. Training produces results at every level. We simply stop measuring before we look for them.
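A quick note on reading that statistic, since it carries real weight in the argument. Cohen’s d expresses the difference between two group means in standard-deviation units; in rough form,

d = (M_trained − M_comparison) / SD_pooled

By the usual conventions, 0.2 is a small effect, 0.5 is medium, and 0.8 is large, so 0.60 to 0.63 is a substantial, practically meaningful effect, not statistical noise.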

Robert Brinkerhoff’s 2005 research in Advances in Developing Human Resources found that the typical training program achieves about a 10% success rate, where success is defined as a meaningful, sustained change in job performance. That number can feel discouraging at first. But his explanation for it is the important part: it’s not that the training is poor. Whether people actually use what they learned is determined almost entirely by the context surrounding the training, not by the training itself.

Does the manager reinforce the new behaviours? Does the person have a genuine opportunity to practise? Does the organizational environment reward the change, or quietly make it difficult?

Burke and Hutchins’ 2007 integrative review of 20 years of transfer research, published in Human Resource Development Review, found that peer support is at least as strong a predictor of whether learning transfers to the job as manager support, and in some studies stronger. In Chiaburu and Marinova’s path analysis (cited in Burke & Hutchins, 2007), peer support was the only variable with a significant direct relationship to skill transfer (B = .65, p < .05). Supervisor support, self-efficacy, and goal orientation all operated indirectly, through their influence on pre-training motivation. Most program designs simply don’t account for this.

Research by Salas, Tannenbaum, Kraiger and Smith-Jentsch, published in Psychological Science in the Public Interest in 2012, puts it plainly: most organizations invest nearly everything in the training event and almost nothing in what comes before it (readiness, climate, manager preparation) or after it (reinforcement, accountability, opportunity to apply). What surrounds the training is more predictive of organizational outcomes than the training itself.

This means that when training doesn’t produce visible results, it’s rarely because the design was wrong. It’s usually because the performance environment wasn’t ready to support it. That’s an important message for L&D teams to be able to take to their executives, not as a defence but as a roadmap for what to build together.


What it actually takes to answer the question executives are asking

The most effective L&D functions are doing four things that make their impact visible.

They establish a baseline before the program launches. What’s the current proficiency level? What does the relevant business metric look like today? Having that starting point makes the conversation after the program entirely different (a short sketch of the before-and-after arithmetic follows these four practices).

They document specific instances where trained skills produced real results. Not aggregate statistics, but verifiable cases: the manager whose team’s retention improved, the advisor whose conversion rate shifted, the team whose error rate dropped. Those stories, grounded in real data, are what executives engage with.

They translate into the language the business already speaks. Completion rates and NPS scores are meaningful to L&D. Business metrics, like client retention, revenue per producer, time-to-productivity, and incident rates, are meaningful to executives. The translation is ours to make.

And they’re honest about what worked, what didn’t, and what the organization would need to do differently to get more of the first and less of the second. That honesty is what builds credibility over time.
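To make the first of those practices concrete, here is a minimal sketch of the baseline-then-follow-up arithmetic in Python. Every name and number in it is illustrative, an assumption for the example rather than anything drawn from the studies above:

```python
# Minimal sketch: compare a metric captured before a program launches
# with the same metric ~90-120 days after training.
# All values here are invented for illustration.

from statistics import mean, stdev

# Baseline, captured BEFORE launch (e.g., weekly conversions per advisor)
baseline = [4.1, 3.8, 4.4, 3.9, 4.2, 4.0]

# Same metric for the same population at follow-up
follow_up = [4.9, 4.6, 5.1, 4.4, 5.0, 4.7]

def cohens_d(before, after):
    """Standardized mean difference using the pooled standard deviation."""
    n1, n2 = len(before), len(after)
    pooled_var = ((n1 - 1) * stdev(before) ** 2
                  + (n2 - 1) * stdev(after) ** 2) / (n1 + n2 - 2)
    return (mean(after) - mean(before)) / pooled_var ** 0.5

print(f"Baseline mean:  {mean(baseline):.2f}")
print(f"Follow-up mean: {mean(follow_up):.2f}")
print(f"Effect size d:  {cohens_d(baseline, follow_up):.2f}")
```

The code is trivial on purpose. The point is the first list: if the baseline was never captured before launch, there is nothing for the follow-up numbers to be compared against, and the conversation collapses back to reaction data.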


In Part 2: five specific moves that put these ideas into practice, drawn from the research and from what the L&D teams winning this conversation are actually doing.

Douglas Robertson is AVP, Business Development at Practica Learning, a Toronto-based corporate learning company specializing in deliberate practice, skills coaching, and measurable behaviour change. With 25 years in financial services and a decade on the learning provider side, he has sat on both sides of the L&D investment table.

References - Part One:

Baldwin, T. T., & Ford, J. K. (1988). Transfer of training: A review and directions for future research. Personnel Psychology, 41(1), 63–105.

Arthur, W., Bennett, W., Edens, P. S., & Bell, S. T. (2003). Effectiveness of training in organizations: A meta-analysis of design and evaluation features. Journal of Applied Psychology, 88(2), 234–245.

Brinkerhoff, R. O. (2005). The success case method: A strategic evaluation approach to increasing the value and effect of training. Advances in Developing Human Resources, 7(1), 86–101.

Burke, L. A., & Hutchins, H. M. (2007). Training transfer: An integrative literature review. Human Resource Development Review, 6(3), 263–296.

Salas, E., Tannenbaum, S. I., Kraiger, K., & Smith-Jentsch, K. A. (2012). The science of training and development in organizations: What matters in practice. Psychological Science in the Public Interest, 13(2), 74–101.

McKinsey & Company (2015). Do your training efforts drive performance? McKinsey.com.
