Practicing in the Age of AI: What Three Years’ Research Actually Tells Us
Over the past three years, practice-based learning has changed quickly. AI tools can now simulate customers, patients, and coaching conversations on demand. At the same time, organizations are still investing in live role-play, simulations, and practice with facilitators or professional actors.
Lately, I keep hearing the same practical question: Does AI-enabled practice work as well as human-led practice — and when does each make sense? Here’s what the research says.
Where AI-enabled practice is genuinely strong
First, it’s worth saying clearly: AI-enabled practice is not just a budget alternative. In the right conditions, it really works.
A 2025 study of 487 university students examined how generative AI tools affected engagement, motivation, and retention. The findings were straightforward: when used intentionally, AI tools were associated with deeper cognitive engagement, stronger emotional investment, and better retention over time.
In technical environments, the findings are even more tangible. In a randomized surgical training study, learners who received AI-generated feedback improved slightly more than those coached by expert instructors. Why? Because the AI delivered highly specific, metric-based feedback consistently — every time, for every learner.
Other research in medical education shows that AI-powered virtual patients and chatbots can improve clinical reasoning and certain communication skills — particularly when learners can practice repeatedly and reflect between attempts.
Taken together, the findings point to a simple pattern. AI-enabled practice tends to shine when:
The skill can be clearly defined and broken into observable steps
Success can be measured objectively
Learners would benefit from repetition and immediate feedback
Scale and scheduling make human practice hard to deliver consistently
In other words, AI works very well when clarity and repetition matter most.
Where human-led practice still has the edge
Human-led simulation has decades of research behind it, especially in health professions and communication training.
Systematic reviews consistently show that high-fidelity simulations — often involving trained actors or facilitators — lead to better performance and stronger retention than traditional classroom instruction. Learners don’t just understand more. They perform better in realistic situations.
When studies compare actor-based simulations with automated virtual patients, both improve skills. But learners typically report higher satisfaction and stronger perceived communication gains when practicing with real people.
And when communication skills demand more nuance, such as handling emotion, resistance, and ambiguity, human counterparts still tend to produce stronger outcomes. Based on my observations since 2005, I would also add that human-led practice scenarios strengthen retention and on-the-job application.
This matters because many of the durable skills organizations care most about today are deeply interpersonal:
Leadership presence
Coaching conversations
Navigating emotion
Building trust under pressure
These skills aren’t about being technically correct. They’re about the impact you have on another person. So where does that leave us?
AI-enabled practice excels when skills are structured, measurable, and repeatable. It offers scale, consistency, and data.
Human-led practice excels when the skill involves nuance, emotion, and confidence in high-stakes conversations.
And increasingly, the strongest designs blend both. Several researchers suggest using AI for foundational skill-building and repetition, then layering in human-led practice for advanced interpersonal work and critical scenarios.
What this means for your learning strategy
If you’re designing practice for procedural or knowledge-heavy skills, AI-enabled tools are often a strong starting point. But if you’re designing practice for leadership, sales, or client-facing conversations, human-led simulation still carries distinct advantages.
And if the initiative is strategically important — where both precision and presence matter — a blended design is often worth the extra thought.
If you’re curious what this could look like in your organization, the most useful next step is usually to see both approaches side by side.
If it’s helpful, we can offer you a free, immersive demo of both AI-enabled and human-led practice. Experiencing them is more useful than me describing them in the abstract!
Let's talk: https://calendly.com/dougrobertson/30-minute-zoom
Doug Robertson