Human vs. AI Practice: An Opinion.
Earlier this year, I wrote about what three years of research tells us about AI-enabled practice versus human-led practice. The short version: both work, but for different things. If you missed it, you can read it here.
Since then, we've gone deeper. More papers, more client conversations, and, honestly, more pressure from the market to pick a side. So here's what I think, stated plainly.
AI-enabled practice definitely earns its place in organizations’ learning stack. For knowledge building, reinforcement, repetition, and scale, it's genuinely powerful. But it's the support system, not the foundation.
Blended doesn't mean equal.
Most conversations in L&D right now are pushing organizations toward AI-enabled practice as the primary modality. It's cheaper. It scales. It's easier to deploy across a distributed workforce. All of that is true.
But cheaper and easier to scale is not the same thing as better for your people.
The skills that drive revenue, retain clients, develop leaders, and build teams are overwhelmingly interpersonal. They require nuance, emotional intelligence, and the kind of confidence that only comes from practicing with a real person.
For those skills, human-led practice is not just comparable to AI — it's measurably superior. Deeper engagement. Better transfer. Stronger outcomes.
As I said up top, AI-enabled practice is a powerful complement, not a replacement. Interestingly, it's an especially good fit for a few specific purposes: remedial learning for underperformers, soft-skills benchmarking, and workforces with high churn. But for the current moment, it remains the support system, not the foundation.
One finding that doesn't get talked about enough.
Not all learners respond to AI practice the same way. Research shows that familiarity with AI tools directly moderates the experience. Learners who are already comfortable with AI tend to do well. Those who aren't don't just underperform. They disengage.
There's also a generational dimension worth naming. Younger workers adopt and trust AI tools more readily, while workers over 45 are significantly more likely to approach them with skepticism. We hear from L&D departments that they think carefully about how to position solutions for this audience. What the research says, though, is that with coaching, training, and support, these workers respond much more positively.
And here's something the papers don't explicitly say, but I think is worth noting: a significant number of the studies behind AI's positive results were conducted with university students. Younger, digitally native, comfortable with technology. That's not your typical corporate workforce.
We're not dismissing the findings. But we are reading them with that in mind.
Where we land.
For most organizations, human-led practice should be the core of your learning architecture, with AI enabling the work around it. That's what the research supports. And after twenty-five years in this field, it's what we believe.
The design question isn't which modality to choose. It's knowing which one to use, and when.
If you'd like to see both approaches side by side, we'd be glad to show you. A live demo is usually worth more than anything I can say in a blog post!
Thanks for reading,
Doug
drobertson@practica-learning.com
For more details, start with the following papers:
1. Automated Virtual Patient Simulation Versus Actor-Based Simulation. JMIR Formative Research, 2025. The study behind the satisfaction scores, the communication skills improvement data, and the cost comparison between AI and human-led delivery. If you're making a business case for either modality, this one has the numbers. https://formative.jmir.org/2025/1/e71667
2. Human-Delivered Conversation Versus AI Chatbot Conversations for Health Education. NIH/PMC, 2025. The source of the four-times-more-words engagement finding. It examines what actually happens to the quality and depth of a learning conversation when you replace a human with an AI — and the gap is larger than most people expect. https://pmc.ncbi.nlm.nih.gov/articles/PMC12538107/
3. Harnessing Generative AI: Engagement, Retention, Reward Sensitivity, and Motivation. Yang, H. — Learning and Motivation, 2025. The most comprehensive single study on what AI-enabled learning actually does to motivation and engagement. This is the research behind the finding that AI has a particularly strong effect on motivation — stronger than any other dimension measured. https://www.sciencedirect.com/science/article/abs/pii/S0023969025000438
4. The Emotional Cost of AI Chatbots: Who Benefits and Who Is Left Behind. ScienceDirect, 2025. This one is worth reading carefully. As mentioned above, it examines how familiarity with AI tools moderates the learning experience — and finds that learners who are less comfortable with AI disengage emotionally. Directly relevant if you're thinking about rollout strategy and which populations you're designing for. https://www.sciencedirect.com/science/article/pii/S2949882125000659
5. Effect of AI Tutoring vs. Expert Instruction on Surgical Skill Acquisition. PMC, 2022. The study that found AI-guided feedback outperformed expert human instruction in a technical training context. It's a specific domain — surgical skills — but the implications for any technical or procedural training are worth considering. https://pmc.ncbi.nlm.nih.gov/articles/PMC8864513/