


EMPLOYEES AND ORGANIZATIONS WILL ADAPT, AS THEY ALWAYS DO. BUT ADAPTING WELL REQUIRES L&D TO LEAD THE WAY.
Artificial intelligence (AI) is everywhere right now. It’s a buzzword. A catchphrase. A label slapped onto just about every product demo and conference session. And while AI is certainly changing how we work and learn, the nonstop headlines and marketing noise have made it harder — not easier — for learning leaders to figure out what truly matters. For many, the challenge isn’t whether to use AI, but how to separate real capability from clever branding.
This issue of Training Industry Magazine moves beyond the hype surrounding AI to explore how it’s actually being applied in training. Our goal isn’t to predict the future or chase the buzz. It’s to take a closer look at how AI is being used right now in training and the tangible impact it’s having on the work of learning and development (L&D) teams. In these pages, you’ll explore how the facilitator’s role is evolving, what it takes to build an AI course for beginners, and how learning can break through when the modern brain is distracted, overloaded and stretched thin.
Because let’s be honest: Learners are tired. They’re navigating constant change, competing priorities and the expectation to do more with fewer resources. AI can help with many of these challenges, but only when organizations are prepared to use it thoughtfully and responsibly. Workforce readiness is a critical (and often overlooked) part of AI adoption.
Without clear guidance and training, employees will find their own tools and resources, and those may not align with organizational goals, policies or values.
Technology has always reshaped the way we work, and AI is not the first tool to shake things up. Employees and organizations will adapt, as they always do. But adapting well requires L&D to lead the way, helping teams build skills, ask better questions and use new tools with intention.
We hope this edition sparks meaningful conversations among your team. Conversations that will help clarify next steps in your AI adoption journey and encourage thoughtful exploration of what’s possible. While technology is powerful and incredibly useful, it isn’t the solution to every problem. We must remember that lasting impact comes from applying the right tool to the right business challenge.
As you read, we encourage you to reflect on what you and your teams need most right now, and let us know how Training Industry can support you along the way.
Michelle Eggleston Schwartz, CPTM, is the editor in chief of Training Industry, Inc., and co-host of “The Business of Learning,” the Training Industry podcast. Email Michelle.


A TRAINING MANAGER’S FRAMEWORK TO EVOLVING YOUR LEARNING ENABLEMENT STRATEGY
By Somya Dwivedi-Burks, Ph.D., CPTM
Three pain points slowing AI adoption and a path to organization-wide competency.
DROWNING IN DATA, STARVING FOR INSIGHT
By Christopher Massaro, CPTM
Learn how AI will reshape measurement and prompts you can use for immediate insights.
BUILDING AN AI BEGINNER’S COURSE: TEACHING AI LITERACY IN 4 STEPS
By Jason Fox
Four lessons that create a foundation for confident, responsible AI use.
THE FACILITATOR’S ROLE IN MODERN LEARNING ENVIRONMENTS
By Cindy Huggett, CPTD
AI avatars are highly capable, but human facilitation is irreplaceable in these key areas.
ADVANCING THE AI MATURITY MODEL FOR LEARNING DESIGN TEAMS
By Matt Donovan and Geoff Bloom
Know where your team is and where it needs to go to get more value from AI.
DESIGNING LEARNING FOR THE DISTRACTED BRAIN
By Dr. Grace Chang and Dr. Karina Freitag
AI can either contribute to cognitive burden or reduce it. These prompts help filter out distractions.
WHAT AI MEANS FOR CORPORATE L&D IN 2026
By Tom Whelan, Ph.D.
FROM THE EDITOR
By Michelle Eggleston Schwartz, CPTM
By JD Dillon
LEARNING LEADER SPOTLIGHT
By Mike Saunderson
L&D CAREERS
By Amy DuVernet, Ph.D., CPTM
SCIENCE OF LEARNING
By Srini Pillay, M.D.
CAREER DEVELOPMENT
By Julie Winkle Giulioni
CULTURE AND WELL-BEING
By Dr. Kristal Walker, CPTM
BUILDING LEADERS
By Marshall Goldsmith and Suzie Bishop
WHAT’S NEXT IN TECH?
UPSKILLING
By Melissa Swift
Build foundational tech skills.
HOW-TO
By Liza Wisner
Future-proof your workforce.
STRATEGIES
By Lacy Thompson and Jamie McAvoy
Tell stories that resonate.
PERSPECTIVES
By Dr. Melissa L. Brown
Support the AI learning that’s already happening.


CASEBOOK
By John Batts
Learn how Avantor uses AI as a thinking partner.
CLOSING DEALS
By Sarah Gallo, CPTM
Learning Pool’s acquisitions support vision for end-to-end learning ecosystem.
COMPANY NEWS
Review the latest training news from the last quarter.
CEO
Ken Taylor ktaylor@trainingindustry.com
EDITOR IN CHIEF
Michelle Eggleston Schwartz meggleston@trainingindustry.com
SENIOR EDITOR
Sarah Gallo sgallo@trainingindustry.com
SENIOR EDITOR
Carla Rudder crudder@trainingindustry.com
CREATIVE DIRECTOR
Amanda Longo alongo@trainingindustry.com
SENIOR DESIGNER
Mary Lewis mlewis@trainingindustry.com
DESIGNER
Kellie Blackburn kblackburn@trainingindustry.com
DESIGNER
Cassandra Ortiz cortiz@trainingindustry.com
DESIGNER
Rylee Hartsell rhartsell@trainingindustry.com
DESIGNER
Sha’Meire Jackson sjackson@trainingindustry.com
ADVERTISING SALES sales@trainingindustry.com
MISSION
Training Industry Magazine connects learning and development professionals with the resources and solutions needed to more effectively manage the business of learning.
JUDI BADER, CPTM Senior Director of Culture, Learning and Development Willy’s Mexicana Grill
BARBARA JORDAN, CPTM Group Vice President, Global Learning & Development Sims Metal Management
CATHERINE KELLY, MA, BSN, RN, CPTM Director of Learning Programs Brookdale Senior Living
SHIREEN LACKEY, CPTM Senior Management and Program Analyst, Office of Business Process Integration Veterans Benefits Administration
SCOTT NUTTER Principal/Owner Touch & Go Solutions
MATTHEW S. PRAGER, CPTM Executive Training Manager U.S. Government
MARC RAMOS Global Head, Learning Strategy and Innovation ServiceNow
KELLY RIDER Chief Learning Officer PTC
DR. SYDNEY SAVION Vice Chancellor for People, Culture & Belonging Vanderbilt University
KERRY TROESTER, CPTM Director, North America Sales Training Lenovo
NATASHA MILLER WILLIAMS Head of Diversity & Inclusion Ferrara
KEE MENG YEO Adjunct Professor Grand Valley State University & Davenport University
SUBSCRIPTIONS
ELECTRONIC: Sign up at TrainingIndustry.com to receive notification of each new digital issue.
PRINT:
Print copies are available for purchase at magcloud.com for $17.95.
To order reprints of articles, please contact Training Industry at editor@trainingindustry.com.
PUBLISHER
Training Industry Magazine is published quarterly by:
Training Industry, Inc. 110 Horizon Drive, Suite 110 Raleigh, NC 27615-6520


Not all learning leaders have the same development needs.
That’s why Training Industry Courses created personalized Learning Journeys.
With topics including evaluating performance, identifying training needs and managing learning technologies, there is a Learning Journey that will help you fill your unique skills gaps and become a more confident learning leader.
MIKE SAUNDERSON
In this issue, we are pleased to spotlight Mike Saunderson, Ph.D., owner and director of Ethnopraxis, Inc. With over 14 years of experience supporting Fortune 500 organizations in training needs assessment, instructional design and evaluation, Mike’s work has earned recognition for driving impact through ROI-focused, evidence-based learning solutions.
Read on to learn about Mike’s career journey.
Q: HOW DID YOU GET STARTED IN LEARNING AND DEVELOPMENT?
A: I transitioned from social work to instructional design, combining my background in behavior change with graduate training in learning design and technology. My early projects in designing compliance and onboarding programs confirmed my passion for aligning learning with measurable performance outcomes.
Q: WHAT’S YOUR MOST MEMORABLE TRAINING EXPERIENCE, GOOD OR BAD?
A: Redesigning onboarding for a Fortune 500 health care company is a standout example. By integrating structured leadership check-ins and coaching, we reduced turnover by 25%. This experience reinforced my conviction that learning should be tied to organizational performance rather than just course completion.
Q: WHO WOULD YOU CONSIDER YOUR MOST VALUABLE ROLE MODEL? WHAT WERE SOME QUALITIES THAT MADE THEM GREAT?
A: My doctoral advisor modeled rigor, humility and persistence. His ability to balance scholarship with practical application inspired me to connect research with real-world outcomes in every project I undertook.

MY EARLY PROJECTS IN DESIGNING COMPLIANCE AND ONBOARDING PROGRAMS CONFIRMED MY PASSION FOR ALIGNING LEARNING WITH MEASURABLE PERFORMANCE OUTCOMES.
Q: WHAT ARE THE MOST PRESSING ISSUES ON YOUR PROFESSIONAL PLATE RIGHT NOW?
A: Helping organizations move beyond tracking completions to measuring actual transfer and ROI. My focus is on developing evaluation systems and technologies that demonstrate how learning translates into sustained performance improvement and business value.
Q: WHAT’S THE MOST CHALLENGING ASPECT OF YOUR JOB?
A: Convincing stakeholders to invest in rigorous evaluation when they are accustomed to surface-level metrics like satisfaction or attendance. Changing this mindset requires both data and consistent communication.
Q: WHAT’S THE MOST REWARDING ASPECT OF YOUR JOB?
A: Seeing measurable improvements — reduced injuries, fewer patient complaints or higher retention — and knowing that learning solutions directly improved people’s work and organizational results.
Q: WHAT’S YOUR PREFERRED TRAINING METHODOLOGY?
A: I employ a blended approach grounded in ADDIE and rapid prototyping and supported by adult learning theory. My preferred designs combine scenario-based learning, coaching and evaluation loops to ensure transfer and sustainability.
Q: HOW DO YOU FIND THE TIME TO CONTINUE YOUR OWN PROFESSIONAL DEVELOPMENT?
A: I integrate professional development into my work by conducting research, presenting at conferences such as ISPI, ATD and AECT, publishing articles and staying active in professional communities. Learning is both a responsibility and a passion.
Q: ANY RECOMMENDATIONS FOR FOLKS OUT THERE: BOOKS, PARTNERS, RESOURCES, ETC.?
A: I recommend Baldwin and Ford on training transfer, Thalheimer’s Learning Transfer Evaluation Model (LTEM) and Phillips’ ROI methodology. Engage with ISPI and ATD for research-based insights and L&D publications for practical strategies.
Q: “IF SOMEONE WANTS TO FOLLOW IN MY PROFESSIONAL FOOTSTEPS, I’D TELL THEM TO BE SURE TO …”
A: Diagnose before you design. Align learning with KPIs, rigorously measure outcomes and communicate results in business language. Stay curious, publish and engage with the professional community to bridge the gap between research and practice.


AMY DUVERNET, PH.D., CPTM

Artificial intelligence (AI) is already influencing how learning and development (L&D) work gets done and how L&D professionals are evaluated for performance and growth. Rather than transforming roles overnight, AI is reshaping expectations around speed, decision-making and value. These shifts are showing up in measurable outcomes, including performance, compensation and career progression across the field.
Training Industry’s recent L&D Career and Salary Report makes this clear. More than half of learning professionals report using AI across core responsibilities, particularly in developing and delivering solutions, managing learning technology and optimizing processes. And it pays to do so; L&D professionals who use AI report stronger performance and higher median salaries.
So, what, specifically, is changing?
AI is steadily absorbing routine coordination, drafting and process-related work. Tasks that once consumed significant L&D time (e.g., initial content drafts, analytics and reporting) now take a fraction of the time with AI support. But AI is not just changing how work gets done. It is changing which L&D work matters most and who is expected to take responsibility for it.
As a result, some types of work are losing value while others are becoming more critical. As AI becomes integrated into workflows across organizations, L&D must let go of:
• Assuming L&D controls when, where and how learning happens.
• Designing learning around fixed requirements and formal refresh cycles.
• Assuming learning happens outside of work processes.
• Treating AI as a tool to deploy rather than a system to shape.
At the same time, L&D must lean into:
• Designing conditions that make AI-supported learning effective.
• Influencing how AI explains, reinforces and sequences knowledge.
• Setting boundaries that ensure AI supports learning rather than shortcuts it.
• Detecting skill signals when learning is embedded in work.
AI gives employees more direct access to information and is changing how work happens more broadly. For example, research suggests that AI adoption can reshape collaboration and knowledge-sharing patterns. L&D professionals are well positioned to guide these changes.
Recognizing these shifts requires reconfiguring L&D roles rather than eliminating them. Some traditional roles, particularly course-based content development roles, are likely to shrink. At the same time, new work is emerging that requires stronger judgment, systems thinking and learning expertise. Because this work affects performance, trust and team processes, it is increasingly becoming leadership work.
One key finding from the L&D Career and Salary Report reinforces this point. While AI use improves performance in many areas, it does not appear to improve strategic alignment. That remains a distinctly human responsibility — and a leadership opportunity for L&D professionals.
The takeaway for L&D professionals is not simply to learn AI tools, but to apply them strategically. Start by selecting one active L&D initiative and intentionally using AI across the full lifecycle, from needs analysis through design, delivery and evaluation. Pay close attention to where AI improves speed or quality and where it falls short to clarify where human judgment adds the most value.
L&D PROFESSIONALS WHO USE AI REPORT STRONGER PERFORMANCE AND HIGHER MEDIAN SALARIES.
Next, translate your AI use into performance language. Instead of discussing tools, describe outcomes such as faster turnaround times, clearer insights for stakeholders or improved decision-making. This is especially important given the link between AI use, stronger performance ratings and higher reported salaries.
Finally, look beyond your own work. Share practical lessons learned, help teammates adopt tools responsibly and contribute to conversations about how AI should support learning and performance. As organizations scale AI adoption, L&D professionals who help others navigate these shifts are increasingly positioned as leaders, regardless of title.
Amy DuVernet, Ph.D., CPTM, is the vice president of learning products at Training Industry, Inc., where she oversees all processes related to Training Industry’s courses for training professionals, including program development and evaluation. Email Amy.

Companies using ELB Learning’s immersive solutions have saved $588K, reclaimed 1,724 hours, and achieved up to 400% ROI!


Build your own games and VR, or partner with us to create something custom.
SRINI PILLAY, M.D.

Artificial intelligence (AI) has enabled tremendous efficiency gains in learning. It can save time and reduce pressure on the brain to keep up with the demands of life and productivity. It also enables deeper learning through AI role-play, bringing theoretical ideas to life. While there are undoubtedly innumerable advantages of AI in learning and development (L&D), there are also less well-known impacts on the learning brain.
In general, learning involves processes such as memory, fluency, refinement and sense-making. However, many of these functions can now be provided by a search engine and a large language model (LLM). While this may seem like a relief, a recent paper highlighted how AI can make the learning process more mechanical and operational, thereby compromising the joy of learning.
The brain is highly dependent on effortful learning to develop. Without the joy of effort and exploration, critical thinking and creativity are at risk. Furthermore, by providing generic solutions, AI compromises unique, original and diverse approaches that brain-based learning affords. One study demonstrated that generative AI enhances individual creativity but reduces the collective diversity of novel content. There is a concern that this will lead to an erosion of our cognitive skills. Some researchers refer to this as AICICA (AI-chatbot induced cognitive atrophy).
Without active learning, brain cells will die. New neurons are kept alive by effortful learning. This is part of a well-known “use it or lose it” principle. Effortful learning is possible, but not automatic, with generative AI.
The same group of researchers who defined AICICA noted that emotional engagement with chatbots may increase cognitive reliance, thereby compromising users’ critical thinking. They also point out that the dynamic nature of conversations, while creating a sense of immediacy in response, may also lead to offloading a multitude of cognitive tasks to chatbots. This may exacerbate multitasking and shallow thinking and reduce the benefits of waiting.
Interacting with a chatbot may imitate human interactions, but the small differences really matter. Many chatbots tend to be sycophantic rather than argumentative, and rational rather than emotional. While this might seem like an advantage, renowned neurologist Antonio Damasio explains in his book, “Descartes’ Error: Emotion, Reason, and the Human Brain,” that “I think, therefore I am” is too simple because human cognition depends heavily on emotion. He points out that emotions are crucial for rational thinking and social interaction. And they “bring the body into the loop of reason.” Human learning is not just a brain-based process.
Most people will likely not automatically care about this cognitive erosion or AI addiction as long as their work gets done. It’s important that leaders guide people to develop a relationship with AI that makes it less of a crutch and more of a speed train for the human brain.
To get started, there are a few things you can try:
• Rather than asking AI chatbots to generate ideas, generate your own and ask the chatbot to refine them.
• Use the recall of information you have already learned to feed into LLMs, asking them to generate ideas around your ideas.
• Be clear about the difference between real human emotional connection and over-reliance on LLMs for emotional support.
• Check in with yourself about your gut feelings so that you do not lose your sense of intuitive learning.
• Every time an LLM gives you an answer, ask, “How can I make this better?”
These simple steps are a reasonable way to start protecting your own brain from atrophying, and they are important to implement within your organization when adopting new AI methodologies.
Dr. Srini Pillay is the CEO of NeuroBusiness Group. He is a Harvard-trained psychiatrist and neuroscientist, on the Consortium for Learning Innovation at McKinsey & Company, and author of “Tinker Dabble Doodle Try.” Email Srini.



Are you finding that employees in your organization are just holding on these days? I don’t mean “holding on by a thread,” although that’s the case for many workers. I mean holding unusually tightly to their jobs. Amid layoffs, economic uncertainty and automation anxiety, “job hugging” — clutching our current roles not because we love them, but because we’re afraid of losing them — continues to be on the rise.
Some leaders quietly celebrate this shift. After years of talent shortages and worker mobility, the power pendulum seems to have swung back to management’s side. But when the economy rebounds, as it inevitably will, those who’ve felt trapped will likely flee, triggering a Great(er) Resignation 2.0.
When endured over long periods of time, job hugging can understandably morph into resenteeism: a term coined by RotaCloud to describe a toxic blend of resentment, absenteeism and presenteeism.
And this is what too many employees are experiencing today. You see it as people who show up physically but check out mentally. They do what’s required and nothing more. They suppress curiosity, suspend innovation, avoid risk and reject growth. The result is usually fatigue, frustration and disengagement.
But that’s where learning and development comes in. In this moment, when fear constrains performance and possibilities, learning leaders have a unique opportunity — and perhaps responsibility — to help their organizations hug back with growth.
While global economic and employment data explain scarcer and slower hiring in many sectors, affected employees can’t help but take it personally. Many report feeling less marketable, less desirable and stuck.
When organizations offer portable upskilling and reskilling (development that can travel with people), they replace that lack of confidence with a sense of skill security. They send a powerful signal: We care about your long-term growth, not just today’s output. And that can feel like a hug back.
Ironically, that freedom to take one’s skills elsewhere actually fosters loyalty that job hugging can extinguish. When employees know they can leave but choose to stay, engagement deepens. Because real loyalty, after all, is born not from scarcity or fear but from choice and trust.
Sustained periods of job hugging can take a significant emotional toll — a toll that growth can help to reverse. Consider the work of organizational psychologist Nick Petrie, whose research identified that the average worker spends 61% of their time performing (doing what they know) and just 39% growing (learning something new).
Further, he found that this imbalance is draining. Constantly doing more with less erodes energy and engagement; growth, on the other hand, is regenerative. Learning something new. Experimenting. Stretching beyond the familiar and known.
These actions don’t just build skill; they offer hope.
You can help your organization hug back by:
• Advocating for intentional grow time within the workweek and establishing leadership and cultural norms that protect it.
• Facilitating microlearning, skill exchanges and short-term stretch assignments that reawaken curiosity.
• Encouraging purpose-aligned microlearning projects that connect employee passions to organizational impact.
• Educating managers, helping them see learning not as a distraction from performance but as a strategy to build long-term capacity and retention.
Organizations that take these steps toward embracing all employees with growth will see employees stop holding on out of fear and start showing up out of choice.
Learning professionals are, by nature, the huggers of the workplace. Right now, your hug on behalf of the organization is the meaningful learning and growth that you offer. It serves as an emotional safety net, holding people steady through uncertainty and reminding them they are valued.
And when the employment pendulum swings back, as it always does, employees will remember how they felt — capable, valued and invested in — and they’ll keep hugging. This time out of choice, not fear.
Julie Winkle Giulioni is the author of the bestselling books, “Promotions Are SO Yesterday” and “Help Them Grow or Watch Them Go.” Email Julie.
My career goals include...
• Obtaining a promotion or moving into a new role.
• Building credibility with others in my company and/or in the L&D field.
• Increasing my earning potential.
• Gaining new skills and continued education.

Complete the quiz to find out if the CPTM program will help you meet these professional goals and more!
TAKE THE QUIZ
Learn More About the CPTM Program
BY MELISSA SWIFT
Think about how you learned to drive a car and think about how you learned to use a smartphone. Completely different, right?
We teach people how to drive cars and require them to display that knowledge before we hand over a license. Smartphones, on the other hand, are a fully DIY exercise. Nobody walked you through the overall functionality or any of the apps. You just picked it up and used it.
WE NEED TO HELP PEOPLE LEARN HOW TO BE GOOD WITH TECHNOLOGY — ANY TECHNOLOGY.
Integrating new workplace technology today is a funny mix of the “car” model and the “smartphone” model. But at today’s pace, this combination of approaches isn’t working. The number of technologies the average worker touches in a day has increased dramatically in recent years. Organizations are now challenged to train on more technologies, faster.
Moreover, sophisticated emerging technologies are no longer neatly categorizable. Is generative artificial intelligence (AI) a “car” — complex and a bit dangerous? Or is it a “smartphone” — easy peasy intuitive? It doesn’t help that the underlying technology itself changes week to week, a seminal issue of the cloud era in general. The costly, time-consuming and often confusing journeys organizations go through to try to train employees how to use generative AI won’t be the last such scramble.
Scrambles are unsustainable. It’s time to completely reset our approach to technology training.
What would a good reset look like? This is an issue that I’ve been working on for the better part of a decade, helping large, complex organizations take their people on the digital transformation journey. I’ve also explored the issue of how we can best use technology at work across research and interviews for two books. For my upcoming book “Effective: How to Do Great Work in a Fast-Changing World,” I pressed some top-notch chief information officers and other C-suite figures on a very specific question: What does it mean to be good with technology at work?
This research combined with years of observation and experimentation on “what works in the wild” led me to a surprising conclusion: We have to go back to the basics. It’s something that in more than 50 years of the Information Age, we’ve never done.
We need to help people learn how to be good with technology — ANY technology.
What organizations currently lack, and what the most technologically adept workplaces possess on an informal or even accidental basis, is a reliable methodology for keeping folks digitally fluent. Concretely, this translates to a backbone of learning about the core skills of technological adeptness.
A common skill set helps all employees, in all roles, succeed at working fluidly with an array of technologies. Moreover, establishing a common language and toolkit around core technological skills speeds the development of all subsequent tactical tech trainings around specific technologies, removing uptake time, cost and content creation pressures.
Foundational training about technology may sound amorphous, but it actually points to a very specific set of skills and should be governed by a set of execution principles fairly different from how tech training operates today.
Ask technologically adept folks what it means to be “good with technology” and observe how technological capability plays out in practice, and you will hear and see a pretty consistent set of skills that training can then be built on. These skills can be summarized as:
• Data, logic and the fundamentals of code. Do we all need to learn how to code? Absolutely not, especially in an era where coding is more and more automated. But understanding the very basics of how code is built, where information comes from and what software does with it dramatically enhances our ability to work with technology. A little understanding goes a long way. It’s fascinating to see how executives who had two hours of training on Visual Basic 20 years ago are more astute users of generative AI today. Know a little bit about how all software works, and you can use any software better.
• Experimentation and debugging. Most of today’s tech training walks users through a linear path of the technology working perfectly. This is a mistake. Really adept users of technology can both play around with the system to ensure they’re getting full value from their use, and they can fix it (to some degree) when things go wrong. Experimentation and debugging aren’t loosey-goosey processes; there are overarching strategies to each that can be applied over and over.
• Understanding technology’s possibilities and limitations (including how technologies work together). Much of what reads as reticence in technology adoption is in fact users’ natural reactions to not knowing either the potential or the limitations of technology. And oftentimes, problems erupt because users are genuinely not conscious about how technologies do or don’t work together. (“Wait, those systems don’t talk to each other?”) These are issues that can be addressed head-on by giving technology a real “job description,” as you would for any human on your team.
• Safety and security. New cyber scams erupt every day, and new technologies surface new safety issues constantly. But once again — there’s a concrete backbone of ideas about everything from protecting identifying information to ensuring information quality to basic well-being in a technology-rich environment that can, and should, be taught.
To get employees more adept with technology at a core level, form must follow function. Training should mirror how tech is utilized in modern organizations. This training should be:
• “Sandbox” based. The training should allow users to test and learn with technology in a protected environment.
• Always-on and iterative. The training should be framed as an accessible resource rather than a one-time event.
• Engaging and fun. People who genuinely work well with technology today are those who enjoy it. This is a sentiment that can be driven in a broader population by treating tech-based activities more as play, less as drudgery.
• Contextualized, in the flow of work, role-and-level appropriate and milestone-driven. The better connected fundamental understanding of technology is to an employee’s current work tasks and career stages, the better it will resonate. This is particularly true at higher leadership levels, where organizations often do a poor job of articulating the (actually very relevant) role of technology in leaders’ jobs.
• Structurally aligned to subsequent tactical training. For example, every training on a new technology should include a section on debugging specific to that technology.
All too often, technology training doesn’t receive the surrounding support it needs to be successful. For a foundational training on technology skills to truly resonate, true to classical adult learning theory, reinforcement with peers and on-the-job learning is critical. Tactics to do so might include:
• Empowerment and rewards for tech superusers who coach others (a phenomenon that happens on an ad hoc basis, but is rarely acknowledged).
• Formal recognition of what technologies and systems will support any given initiative, plus space for discussion specific to tech progress on that initiative.
• Tech “office hours” where employees can help each other with recurring questions and issues.
• Opportunities for on-the-job certification around broader tech skills (like experimentation, debugging or working across diverse systems).
• Use of AI agents trained on an organization’s particular tech stack to prompt savvier technology use in the flow of work.
It may feel strange to go “back to basics” and talk about things like the basic principles of software or data.
TO GET EMPLOYEES MORE ADEPT WITH TECHNOLOGY AT A CORE LEVEL, FORM MUST FOLLOW FUNCTION.
But giving employees this foundational understanding truly accelerates the velocity of every technology-specific effort — and helps ease the stress of a working world where tech plays a greater and greater role.
Fix this missing link, and people and technology will work together elegantly.
Melissa Swift is the founder and CEO of Anthrome Insight, a company debugging modern work for greater effectiveness. She is the author of the forthcoming book, “Effective: How to Do Great Work in a Fast-Changing World” (Wiley, 2026) as well as “Work Here Now: Think Like a Human and Build a Powerhouse Workplace” (Wiley, 2023). Email Melissa.
BY LIZA MUCHERU WISNER
Generative artificial intelligence (AI) has unlocked a new era of possibility and pressure for leaders navigating the messy middle of talent development. Today’s workforce expects personalized, just-in-time development. Static portals and checkbox modules won’t cut it, and organizations that are still treating talent as a support function are already behind.
As Deloitte’s 2025 Human Capital Trends report notes, leaders must deliver “adaptive, tech-enabled, people-first strategies.” That’s not a tagline — it’s a survival strategy. Leading in the age of AI is like chess. The best companies don’t just react, they anticipate.
Here are five moves you can make to invest in talent now and how AI can assist you along the way.
If your leaders can’t name the top five skill gaps between today’s capabilities and tomorrow’s strategy, you’re not managing talent, you’re gambling with it. AI thrives on structured data, but culture lives in nuance. Bridging both requires a living, skills-based map of your workforce.
According to Deloitte, organizations that adopt clear, skills-based taxonomies are 57% more likely to fill critical roles internally. A structured, AI-enabled “skills census” gives you the visibility you need to act with precision.
Action: Launch a 90-day skills intelligence sprint to transform static org charts into a living skills ecosystem — one executives can use to model workforce readiness, mobility and future ROI.
• Phase 1: Diagnose (Weeks 1–3)
- Deploy AI-powered, role-based assessments and social listening tools to uncover real-time skills data across departments.
• Phase 2: Validate (Weeks 4–6)
- Integrate manager calibration sessions to refine data accuracy and identify high-value adjacencies — skills that can transfer across roles.
• Phase 3: Visualize (Weeks 7–9)
- Build a live skills graph dashboard that leaders can filter by business priority (e.g., automation readiness, customer experience, sustainability).
• Phase 4: Act (Week 10+)
- Tie learning investments and workforce planning directly to your skills graph insights.
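To make the idea of a living skills graph concrete, here is a minimal sketch in Python. The roles, skills and proficiency scale are hypothetical; a real implementation would sit on top of your HRIS and assessment data rather than hand-entered records.

```python
from dataclasses import dataclass

@dataclass
class SkillRecord:
    employee: str
    role: str
    skill: str
    proficiency: int  # 1 (novice) to 5 (expert), from assessments plus manager calibration

# Illustrative records a skills census might produce; names and skills are hypothetical.
census = [
    SkillRecord("A. Rivera", "Analyst", "prompt engineering", 2),
    SkillRecord("A. Rivera", "Analyst", "data visualization", 4),
    SkillRecord("J. Chen", "Trainer", "prompt engineering", 4),
    SkillRecord("J. Chen", "Trainer", "facilitation", 5),
]

def readiness(records, skill, threshold=3):
    """Share of employees at or above a proficiency threshold for one skill."""
    holders = [r for r in records if r.skill == skill]
    if not holders:
        return 0.0
    return sum(r.proficiency >= threshold for r in holders) / len(holders)

# A leader could filter the graph by a business priority such as automation readiness.
print(f"Prompt engineering readiness: {readiness(census, 'prompt engineering'):.0%}")
```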
As AI becomes increasingly integrated into the workplace, it’s essential to recognize that different generations engage with learning and development (L&D) in varied ways. Tailoring learning experiences to these preferences can enhance engagement and effectiveness.
Research suggests that 74% of Gen Zs and 77% of millennials believe generative AI will impact the way they work within the next year. Are your learning strategies designed to meet the needs of a truly multigenerational workforce?
Action: Architect a learning portfolio strategy that adapts to your workforce like a financial portfolio does to markets. This approach reframes learning as a personalized growth portfolio where every employee’s ROI compounds over time.
• Start small, scale intelligently: Pilot three learning modalities mapped to generational behavior data: microlearning for Gen Z, blended leadership labs for Gen X and millennials, and peer-led sessions for executives and Baby Boomers.
• Embed AI learning journeys: Use tools like EdCast or OpenSesame to recommend personalized content in the flow of work, integrating Slack nudges or adaptive playlists.
• Gamify for inclusion: Introduce intergenerational mentorship pairings and AI-enabled recognition (badges, progress metrics) to drive engagement.
Leaders are often bogged down by routine tasks with little time for strategy. The advent of generative AI offers a solution by automating these low-value activities, thereby freeing up leadership bandwidth for more impactful work.
A study by Brynjolfsson, Li and Raymond found that implementing a generative AI-based conversational assistant led to a 15% increase in worker productivity, measured by issues resolved per hour. Notably, less experienced workers benefited the most, improving both speed and quality of output.
Ask your team which talent management tasks, such as compliance tracking, content curation or scheduling, are consuming disproportionate amounts of leadership time.
Action: Pilot an “AI Learning Concierge” that scales personalization. This digital talent coach can automate the 80% of learning logistics that don’t require human judgment, freeing leaders to focus on mentorship and innovation.
• Phase 1: Prototype (30 days) - Use a low-code tool (e.g., Microsoft Copilot Studio, Moveworks or Workday Skills Cloud) to build an AI assistant that recommends courses, mentors and internal experts based on job roles.
• Phase 2: Automate (60 days) - Connect your learning experience platform (LXP) and human resources information system (HRIS) with Slack and Teams to deliver just-in-time learning moments, micro-videos, checklists or templates when triggered by project deadlines or new responsibilities.
• Phase 3: Humanize (90 days) - Layer on personalized prompts for managers (e.g., “Ask Alex about her AI certification progress this week”) to drive coaching behavior.
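As an illustration of the concierge’s Phase 1 logic, here is a minimal sketch of a rule-based recommender. The role names, course titles and mentor labels are hypothetical; a production build in a tool like Copilot Studio would replace this lookup with an LLM grounded in your actual catalog.

```python
# A toy, rule-based starting point for an "AI Learning Concierge":
# map job roles to recommended courses and mentors before layering on an LLM.
RECOMMENDATIONS = {
    "sales manager": {
        "courses": ["Coaching Conversations 101", "AI for Pipeline Reviews"],
        "mentors": ["regional enablement lead"],
    },
    "instructional designer": {
        "courses": ["Prompt Writing for Course Drafts", "Evaluation Design Basics"],
        "mentors": ["senior learning architect"],
    },
}

def recommend(role: str) -> dict:
    """Return role-based learning recommendations, with a safe default."""
    return RECOMMENDATIONS.get(
        role.lower(),
        {"courses": ["Foundations of Working With AI"], "mentors": []},
    )

print(recommend("Sales Manager"))
```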
Career development doesn’t occur through content alone; it thrives in moments of connection, reflection and trust between employees and their managers.
A study by Right Management highlights that organizations promoting ongoing career discussions see enhanced employee engagement and improved retention rates. AI can and should help managers coach with confidence. When learning data becomes a tool for real conversations, check-ins turn into growth moments.
Action: Operationalize growth intelligence through real-time coaching dashboards.
This shift transforms feedback from a backward-looking ritual into a forward-looking dialogue, and data becomes a relationship builder, not a report.
• Design the pulse: Pull data from your LXP, engagement tools and performance reviews into a single view for every manager, showing learning hours, skill progression and engagement sentiment.
• Empower the coach: Equip managers with conversation starters to foster psychological safety and reflection.
• Train the trainer: Create manager enablement that upskills leaders in coaching micro-skills using real scenarios.
WHEN LEARNING DATA BECOMES A TOOL FOR REAL CONVERSATIONS, CHECK-INS TURN INTO GROWTH MOMENTS.
For years, talent development focused on compliance, keeping employees aligned with legal and regulatory standards. But amid constant change, that’s not enough. What drives businesses forward now is capability: building the skills that fuel innovation, agility and customer impact.
Research from McKinsey and other global talent studies shows that companies investing in future-focused skill development, especially during times of transformation, are significantly more likely to hit performance targets and retain top talent. High-performing organizations are reallocating learning budgets toward strategic capabilities that align with long-term value creation.
Learning investments must go beyond mandatory training to develop the skills that drive growth, innovation and agility.
Action: Reframe compliance from an obligation to an opportunity, transforming it into a brand differentiator for trust, culture and agility.
• Phase 1: Audit - Map every compliance topic (safety, ethics, data) to its business impact and identify overlaps with strategic skills (e.g., cybersecurity and AI ethics).
• Phase 2: Convert - Reimagine mandatory training as interactive, scenario-based simulations that develop both compliance understanding and leadership judgment.
• Phase 3: Amplify - Redirect 10-15% of compliance hours toward “capability sprints,” short, AI-curated learning bursts focused on digital fluency, customer empathy or adaptive leadership.
The rules have changed. AI is reshaping how we grow, retain and lead talent, but it hasn’t replaced the human strategy behind it. These aren’t quick fixes. They’re intentional moves for those playing the long game.
In this market, reacting isn’t enough. You need to move with clarity, act before the gap becomes unbridgeable and lead like you’re always five moves ahead. Because in the age of AI, hesitation is a risk your competitors are counting on you to take.
Liza Mucheru Wisner is a future of work strategist and AI safety advocate recognized globally for her expertise in talent development, intelligent automation and workplace culture. A multiple-time SHRM Excellence Award winner, she helps organizations blend human insight with innovation to build inclusive, high-impact learning strategies. Email Liza.
By Somya Dwivedi-Burks, Ph.D., CPTM

Take AI enablement from messy, fragmented experimentation to “magic” — a unified, role-driven capability that strengthens governance, accelerates adoption and positions L&D as a strategic engine for business impact.
The rise of generative artificial intelligence (AI) for work necessitates organizational adaptation, placing learning and development (L&D) leaders, particularly training managers, at the forefront of capability building. While executives envision an organization powered by unified intelligence, the practical execution is often marked by reactive, bottom-up attempts at enablement.
The “State of AI in 2025” report from McKinsey & Company suggests the honeymoon phase of generative AI is over. To move forward, leaders must shift their focus from simply providing “access” to tools to intentionally building the right architecture. The real transformation isn’t about humans using tools; it’s about humans supervising agents. This shift requires organizational redesign, not just new software licenses.
Despite the C-suite’s ambitions for unified intelligence, organizations often rely on fragmented, reactive approaches to AI enablement. This results in stalled pilot projects and a lack of strategic cohesion. Common barriers include workforce unpreparedness, weak governance, misreading employee attitudes and an inability to build confidence or close the digital skills gap. These factors prevent organizations from moving beyond initial experimentation to enterprise-wide transformation.
A key driver of success is managers who support employee adoption of new systems and technologies. To do this effectively, they must take their organizations on a strategic journey from messy to magic.
• Messy: The current operational reality characterized by systemic inefficiencies, organizational friction and critical miscommunication stemming from an absence of centralized governance over AI enablement efforts.
• Magic: The realization of unified organizational intelligence through continuous, incremental improvement while managing uncertainty and building resilience.
Ultimately, this article is our call to action as L&D managers. We are the organization’s capacity engine; to lead the AI mandate, the L&D function must first undergo its own digital transformation, moving away from outdated, reactive methods toward strategic, technology-enabled capability building.
This article proposes a structured framework for training managers to transform this chaotic, fragmented state into a source of organizational advantage, positioning L&D to drive strategic business outcomes rather than merely reacting to technological evolution.
The core challenge for training managers is not merely technological adoption but the systemic dysfunction that arises when revolutionary capabilities are introduced without modernizing the underlying learning infrastructure.
Absent clear executive sponsorship and a centralized L&D framework, AI enablement often defaults to reactive, ad-hoc efforts. Individual business units, motivated by potential efficiency gains, launch independent training initiatives. This well-intentioned autonomy results in quantifiable drawbacks:
• Duplication of effort: Multiple teams allocate resources to researching, vetting and acquiring similar training content or tools, leading to unnecessary budget redundancy.
• Varying standards: The quality, ethical governance and security protocols associated with AI usage are non-uniform across departments, introducing compliance risks and eroding workforce trust.
• Patchwork proficiency: The organization develops isolated pockets of hyper-specific AI skills but fails to establish a common lexicon or shared capability base.
This deficiency reflects a critical absence of a unified learning standard and exacerbates the risk of a digital skills divide.
The ambiguity regarding the institutional ownership of AI training (whether L&D, IT or the individual business unit) creates a structural vacuum frequently filled by departmental silos. This is particularly problematic concerning the transition from foundational AI literacy to specialized AI fluency.
• Stalled fluency: While basic literacy is typically addressed, achieving fluency — the strategic ability to integrate AI into complex workflows — is where training efforts frequently falter.
• Inefficient learning and resource misallocation: When training is confined to silos, employees acquire general, decontextualized AI skills. They subsequently struggle to translate these generalized skills into tangible, role-specific value.
The consequence of the preceding two pain points is the creation of a fragmented, unreliable representation of the organization’s collective intelligence. Because training delivery is inconsistent and evaluation is siloed, L&D lacks any singular, comprehensive mechanism to track AI skill acquisition, workflow application and performance impact.
Without a unified “big picture” linking training expenditures directly to validated business outcomes, the training manager’s function remains constrained to a purely service role, unable to substantiate the strategic return on investment (ROI) of AI enablement.
This fragmentation (a.k.a. “messiness”) is your opportunity for a strategic pivot: to redefine the training manager’s function from a service provider to a strategic capability architect. This transition is executed through the three key pillars of the unified AI capability model.
The resolution to inconsistent enablement lies in establishing a shared, organization-wide definition of AI capability that is non-negotiable and specific to role performance. The L&D mandate is to collaborate with executive and business leaders to implement this capability model, which involves:
• Standardizing AI curricula: Formalizing clear, tiered learning pathways (e.g., from literacy to fluency to mastery) with consistent content across all departments. This directly supports the goal of building AI confidence across the workforce and mitigating common resistance scenarios.
• Establishing governance: Implementing top-down ethical and security standards for relevant AI tool usage. This centralized approach is essential because strategy alone fails without the right cultural alignment and support.
• Focusing on role-specific impact: Ensuring every training is explicitly linked to a measurable job function.
To dismantle departmental silos, prioritize solving high-value, cross-functional business problems. This strategic shift from merely providing content to driving organizational evolution aligns with the principle of “future sensing” for skills development.
Organizational skills development must begin as a business priority, shared across HR, talent and the business, with the L&D function driving that strategy, not the other way around.
• Investment in solutions, not just courses: The L&D strategy must evolve to include investments in tools and infrastructure that necessitate collaboration.
• Building shared AI agents: Support developing custom AI agents (e.g., a “risk assessment agent”) utilized collaboratively by multiple departments.
• Empowering human-AI orchestration: The focus shifts from merely training users on tool operation to training personnel on how to strategically orchestrate the tool. This includes developing explicit processes for “onboarding” new AI-powered tools into the workforce.
The final pillar tackles the core issue of fragmented intelligence by empowering the managers who sit closest to the daily workflow: the middle managers.

• Partner for Precision (Pillar 1): Don’t build in a vacuum. Bring partners from the business into the conversation early; they know where the real performance gaps are.
• Bridge the Gap (Pillar 2): Think beyond the launch. Develop a “success roadmap” that includes not only short-term AI adoption but also sustainable, long-term management of the AI implementation.
• Design with Discernment (Pillar 3): Protect the “very human” aspects of the workflow. Avoid the mistake of replacing high-value, interactive programs — like leadership development — with AI assistants.

• Facilitating intentional social learning: Training managers must build strong partnerships with middle managers. This partnership takes a hands-on approach, ensuring that data-driven insights are shared through meaningful personal interactions and a structured social learning system.
• Training for scaling knowledge with AI: Provide middle managers with explicit training on how to coach their teams on AI use. Leverage AI not to automate this human-centric process, but to optimize and scale it by helping identify subject-matter experts and curate unstructured social knowledge.
• Closing the feedback loop: Managers become the crucial link between the centralized AI capability model (Pillar 1) and the real-time data on employee performance (Pillar 3).
The value of the unified AI capability model is that it operates not as a static centralization project, but as a continuous improvement engine.
While Pillar 1 establishes the initial, unified governance foundation, the “magic” lies in the regenerative feedback loop created by Pillar 3. The performance data and manager insights gathered on workflow impact continually flow back to refine the standards and curricula set in Pillars 1 and 2, ensuring L&D is perpetually optimizing organizational intelligence in real time.
By addressing the systemic dysfunctions of fragmentation and inconsistency (the “messy” state), we utilize this framework to lead L&D’s necessary evolution from a service provider to a strategic capability architect. The moment is now to partner as the organization’s capability engine — transforming messiness into magical flow.
Somya Dwivedi-Burks, CPTM, Ph.D., works at the intersection of learning design, performance improvement and employee experience. A learning strategy consultant at heart, she has served organizations supporting the learning and development function for over 10 years. Email Somya.

Many organizations are making significant investments in talent development. Yet many still struggle to make a strong case for the value of those programs. Even with a robust measurement and evaluation process through questionnaires, post-training surveys or completion rates, many learning and development (L&D) teams deliver reports that inform rather than influence. This type of reporting produces a lot of data proving that learning happened, but it often lacks real meaning. Impactful, decision-ready insights go beyond data and must be linked to performance outcomes and behavior change.
With the rapid growth and increased access to artificial intelligence (AI) in training, there is an opportunity to leverage technology not only to expedite the learning analytics process but also to generate insights and evidence to demonstrate the impact of training and development initiatives. With AI, L&D professionals can enhance their learning analytics processes, discover patterns and trends, identify behavioral change and gain a deeper understanding of learners’ experiences. This transformational power doesn’t just make things faster; it significantly enhances the effectiveness of L&D programs.
The primary goal of integrating AI into the learning analytics process is not to automate decision-making, but to augment it through better understanding and awareness.
AI’s ability to augment understanding and insight can be applied across the full learning analytics lifecycle — design, collection, analysis and action.
1) Design: Generative models can analyze course documents and produce actionable metrics focusing on performance indicators.
Use these prompts to extract key performance indicators (KPIs) and outcome metrics:

“Identify 3–5 key learning outcomes or KPIs from this training document.”

“Suggest quantitative and qualitative metrics to evaluate learner achievement of the course objectives.”
Use these prompts for training content and evaluation design:

“For each learning objective in this guide, suggest an associated measurement tool or evaluation approach.”

“Create a post-training survey plan with categories for knowledge, skills, behavior, and satisfaction based on this content.”
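For teams that want to script these design-stage prompts rather than paste them into a chat window, here is a minimal sketch, assuming the official OpenAI Python client; the file name and model are placeholders, and any LLM provider’s API would work the same way.

```python
from openai import OpenAI  # assumes the official OpenAI Python client is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

course_doc = open("course_outline.txt").read()  # hypothetical course document

prompt = (
    "Identify 3-5 key learning outcomes or KPIs from this training document.\n\n"
    + course_doc
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; use whichever model your organization approves
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```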
2) Collection: AI can generate focused survey questions that capture learning experience, sentiment and behavior.
Use these prompts to generate evaluation survey questions:

“Create two questions to evaluate
3) Analysis: Try these prompts for survey response summarization:

“Summarize the main themes in these open-ended survey responses.”

“Highlight the top three areas of concern expressed in this training evaluation.”
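Theme clustering can also be prototyped locally before bringing in a generative model to label the themes. Here is a minimal sketch, assuming scikit-learn is available; the survey responses and the choice of two clusters are illustrative.

```python
# A minimal sketch of theme clustering for open-ended survey responses,
# assuming scikit-learn; an LLM can then label and summarize each cluster.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

responses = [  # hypothetical open-ended survey comments
    "The pace was too fast to practice anything.",
    "More hands-on practice time would help.",
    "Great examples, but the tool demos felt rushed.",
    "I want follow-up coaching after the session.",
    "A coach checking in afterward would keep me on track.",
]

vectors = TfidfVectorizer(stop_words="english").fit_transform(responses)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

for cluster in sorted(set(labels)):
    print(f"Theme {cluster}:")
    for text, label in zip(responses, labels):
        if label == cluster:
            print(" -", text)
```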
4) Action: When you are ready to take action on the data you’ve collected, AI can draft decision-ready summaries, visuals and prioritized recommendations.
Use these prompts to summarize reports for stakeholders:
“Write an executive summary based on this learner feedback. Highlight major strengths and areas to improve.”
“Draft a stakeholder update that includes key insights, participant sentiment and suggested improvements.”
A medium-sized health care agency launched an emerging leadership development program and achieved high satisfaction scores, yet no observable behavior change was reported post-program. Using generative AI to summarize over 200 pages of coaching notes and to compare pre- and post-program 360-degree feedback, a specific gap was identified: managers lacked confidence in delivering constructive feedback.
The agency used AI to tackle this issue using the following steps. They prompted AI to:
• Consolidate coaching notes into a single, anonymized dataset.
• Run summarization and theme clustering to isolate barriers to constructive feedback.
• “Recommend the best chart for pre-/post-360-degree confidence and add a ≤15-word caption stating the insight.”


The L&D team leveraged AI in the design-collection-analysis-action loop. They focused on three evidence-based insights highlighting areas of concern, including manager confidence giving constructive feedback, observed feedback quality and team sentiment about receiving “useful feedback.”
The AI tool summarized the 200+ pages of coaching notes, and generative summarization and theme clustering confirmed the learning gap. The team compared pre-/post-360-degree responses by team and role. Using the evidence-based insights, the team made minor changes in their existing program. Targeted role-plays with coaching scripts and practice opportunities were introduced in the curriculum, along with a micro-checklist to structure feedback conversations and a set of short follow-up coaching cadences. An AI-informed short executive brief and a simple dashboard translated the insights and the “why” of the low adoption into decision-ready guidance for leaders.
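The underlying pre-/post-comparison is simple arithmetic once the 360-degree responses are tabulated. Here is a minimal sketch, assuming pandas; the teams and scores are hypothetical, not the agency’s data.

```python
import pandas as pd

# Hypothetical 360-degree "confidence giving feedback" scores on a 1-5 scale.
scores = pd.DataFrame({
    "team": ["Clinical", "Clinical", "Operations", "Operations"],
    "pre": [2.8, 3.1, 2.5, 2.9],
    "post": [3.5, 3.7, 3.1, 3.4],
})

# Average pre/post scores per team, then the percentage improvement.
by_team = scores.groupby("team")[["pre", "post"]].mean()
by_team["improvement_pct"] = (by_team["post"] - by_team["pre"]) / by_team["pre"] * 100
print(by_team.round(1))
```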
Following these changes, the next program cohort results included a 21% improvement in 360-degree confidence, and observation notes documented clearer, timelier feedback conversations.
The success of the case was not just the 21% improvement. It was the insight to convert the data into practice assets (role-plays, coaching scripts, checklists).
The design-collection-analysis-action loop process was run repeatedly, allowing the agency to re-measure against the same KPIs and verify continued improvement.
THE ROAD AHEAD: FIVE WAYS AI WILL RESHAPE MEASUREMENT
Growth in the use of AI will transform learning analytics from simple data reporting to a more continuous improvement process. Measurement and evaluation will move from periodic analysis to real-time understanding. Decision-making will be grounded in evidence-based action and course summaries will become strategic conversations.
Five shifts already coming into view include:

Adaptive measurement: Metrics and KPIs refine themselves cohort by cohort, improving data quality and reducing ineffective and low-value measurement.

Predictive risk and reinforcement: Models flag transfer risk or skill decay and recommend timely nudges, practice reps or coaching before performance slips.

Copilot analytics: Embedded assistants narrate trends, surface outliers and draft stakeholder-ready summaries complete with caveats and next steps.

Integrated evidence fabric: Learning data connects with human resources information systems (HRIS), quality assurance (QA) and customer relationship management (CRM) to show downstream effects on quality, safety, customer outcomes and retention.

Automated storytelling, auditable by humans: Reports assemble quickly, backed by evidence and human review for accuracy, context and fairness.
Where can L&D start, and how can AI be leveraged now? Start with a simple action by piloting the process in one course or program. Apply success to other programs and layer more complex actions. Allow AI to guide you as you guide AI. AI is not replacing human understanding; it is amplifying it. L&D’s advantage has never been about data; it is about insights and interpretation. Measure smarter. Act faster. Then, measure again.
Christopher Massaro, CPTM, is a learning and development professional with experience across corporate and educational environments. His work focuses on learning analytics, leadership development and positive psychology to create human-centered learning cultures. Email Christopher.


By Jason Fox
Large language models (LLMs) like ChatGPT are quickly becoming part of how everyday work gets done. Yet while some employees have already embraced artificial intelligence (AI) tools, many still approach them with uncertainty or even anxiety. Creating a beginner’s course that introduces employees to generative AI can build confidence, clarity and capability. For instructional designers and industry trainers, that means designing a course that meets diverse learners where they are, teaching prompt writing as structured thinking, redefining the learner’s role from author to editor and providing meaningful opportunities for hands-on practice.
This four-part framework offers a clear path for organizations to help their people become capable, responsible AI users.
The single most important concept to teach in any beginner’s AI course is that LLMs do not know anything. The AI does not think, reason or understand. An LLM functions by predicting what word is most likely to come next based on patterns it has learned from enormous amounts of text. It is not retrieving facts from memory but completing a pattern that seems statistically likely.
When learners understand this, it changes how they view every interaction with AI. They recognize that the model is guessing rather than knowing, which explains both its usefulness and its flaws. It also lays the foundation for the rest of the course: why precise prompting matters and why human editorial review is essential.
Because AI lacks true understanding, it sometimes produces confident but incorrect statements, a phenomenon known as hallucination. Teaching this early demystifies the tool and helps learners see that errors are not failures of technology but natural results of prediction without comprehension. This clarity prevents misplaced trust in AI outputs and encourages a more critical, guided approach.
To make this idea stick, focus on tangible metaphors and layered explanations rather than technical depth. Start with a concrete analogy. Describe AI as a highly skilled intern who can mimic almost any writing style but has no lived experiences. The intern can produce a convincing report on any topic, yet their confidence does not equal knowledge. They are excellent at form, but their substance depends entirely on what you provide.
This analogy helps learners visualize AI as articulate but uninformed, fast, fluent and capable of error. It naturally leads into the next lesson: since the model only predicts, the quality of your prompt determines the quality of its response.
Emphasize curiosity over complexity. Show how AI’s knowledge is pattern recognition, not understanding. If you ask it to write about a product that does
not exist, it may invent details because it is trained to fill gaps rather than admit uncertainty. Use short demonstrations of these hallucinations to reinforce the point that the model’s confidence is statistical, not intellectual.
When teaching how AI works, it is easy to drift into the mechanics of models, servers and datasets, yet most learners gain little from this detail. What matters is not how the system is wired but how it behaves. This distinction is far more valuable than knowing what an algorithm looks like.
Younger learners may enjoy light technical context, such as learning that AI processes text in fragments called tokens and predicts likely word patterns. Older learners may benefit more from analogies that make the concept practical. In both cases, avoid jargon that distracts from comprehension. The purpose is to build confidence, not confusion. Effective instruction keeps curiosity alive without overwhelming it.
Instructional Takeaway: Every learner, regardless of generation or role, should leave this section understanding that AI does not know anything; it only predicts. That insight justifies the need for strong prompts and reinforces why human editing is indispensable.
Once learners understand that AI is essentially trying to finish their sentence, they can appreciate why clear and structured prompts matter so much. Poor prompts produce vague or misleading results because the model lacks understanding; it only has probability.
To help beginners approach prompting with confidence, teach the P.R.O.M.P.T. Framework, a practical guide for crafting thoughtful, repeatable requests. The framework is comprehensive: learners may not need every element for every situation, but it provides a complete mental model they can scale up or down depending on the task.
Purpose: Identify what you want the AI to accomplish. Is the goal to summarize, analyze, brainstorm or generate new content?
Role: Assign the AI a perspective or professional identity that aligns with the task. This helps frame tone and depth of response.
Output: Define the desired format or deliverable, such as a list, paragraph, script or slide outline.
Method: Explain how you want the model to proceed. Include steps, examples or guiding questions to shape reasoning.
Parameters: Set limits or boundaries, such as word count, time frame, audience or content exclusions.
Tone: Specify the style or emotional quality you want the writing to convey, such as formal, encouraging or conversational.
Weak prompt: “Write about leadership.”
Strong prompt using P.R.O.M.P.T: “Act as a leadership coach (Role) summarizing three inclusive leadership strategies (Purpose) for new supervisors (Parameters). Provide your answer as a short article (Output) that outlines the steps clearly (Method) and uses a professional but conversational tone (Tone).”
After demonstrating this structure, have learners practice adjusting one or two elements, perhaps changing the role or tone, to see how AI’s response shifts. Encourage reflection on which components made the biggest difference and why.
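Teams that standardize on the framework can even encode it as a small helper so the structure travels with the tool. The Python sketch below is ours, not part of the framework itself; the function name and phrasing are illustrative assumptions.

def build_prompt(purpose, role=None, output=None, method=None, parameters=None, tone=None):
    """Assemble a P.R.O.M.P.T.-structured request; unused elements are skipped."""
    parts = [
        f"Act as {role}." if role else None,                      # Role
        f"Your task: {purpose}.",                                 # Purpose
        f"Deliver the result as {output}." if output else None,   # Output
        f"Approach: {method}." if method else None,               # Method
        f"Constraints: {parameters}." if parameters else None,    # Parameters
        f"Use a {tone} tone." if tone else None,                  # Tone
    ]
    return " ".join(p for p in parts if p)

print(build_prompt(
    purpose="summarize three inclusive leadership strategies",
    role="a leadership coach",
    output="a short article",
    method="outline the steps clearly",
    parameters="written for new supervisors",
    tone="professional but conversational",
))

Changing a single argument and rerunning makes the comparison exercise above easy to repeat.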
Instructional Takeaway: Teaching P.R.O.M.P.T. transforms prompting from guesswork into intentional design. It builds learner confidence and reinforces that AI’s usefulness depends entirely on the clarity of human input.
Generative AI changes how content is created, but it does not replace the human element. Instead of beginning with a blank page, professionals now begin with a draft. Teaching learners to move from author to editor reframes their relationship with AI and reinforces human expertise.
The shift from author to editor matters on three fronts:

Efficiency: AI accelerates initial creation, allowing more time for strategic refinement.
Quality: Human review ensures factual accuracy, tone management and brand consistency.
Ethics: Editing guards against bias, misinformation and over-reliance on machine-generated text.
This mindset shift strengthens learner confidence and critical thinking. They realize that editing AI’s output is not optional. It is the safeguard that maintains credibility.
THE ITERATIVE EDITORIAL PROCESS
An effective beginner’s course should teach learners to engage in an iterative editing loop. This process models realworld collaboration between humans and AI and can be practiced with short assignments.
1. Generate (AI as Author): Create an initial draft using a structured prompt. Example: “Draft an email announcing our new mentoring program.”
2. Review (Human as Evaluator): Examine the draft critically. What is missing? What feels off? Identify errors, tone mismatches or unsupported claims.
3. Refine (AI as Assistant): Feed targeted feedback back into the model. Example: “Revise this paragraph to sound more conversational and emphasize employee development.”
4. Verify (Human as Editor-in-Chief): Check all facts, ensure tone and structure align with organizational standards and confirm that the output supports the intended purpose.
5. Finalize (Human and AI Partnership): Polish the final version, making human adjustments for clarity, empathy and voice.
Visualize this as a linear process with an iterative cycle between the review and refine steps until the desired result is met. Each round of review and refinement improves the product. The final steps of verify and finalize reinforce responsible human judgment.
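In a sandbox, the generate-review-refine cycle can even be scripted so the model drafts and the human types the edit notes each round. A rough sketch, assuming the OpenAI Python client and a placeholder model name:

from openai import OpenAI  # assumption: the OpenAI Python client

client = OpenAI()
history = [{"role": "user",
            "content": "Draft an email announcing our new mentoring program."}]

while True:
    response = client.chat.completions.create(model="gpt-4o-mini", messages=history)
    draft = response.choices[0].message.content
    print("\n--- DRAFT ---\n" + draft)
    feedback = input("\nYour edit notes (press Enter to accept): ").strip()
    if not feedback:
        break  # verification and finalization stay with the human editor
    # Feed the draft plus the human's targeted feedback back into the model.
    history += [{"role": "assistant", "content": draft},
                {"role": "user", "content": "Revise the draft. " + feedback}]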
Instructional Takeaway: Learners should understand that AI is a capable collaborator but not an authority. The editor’s role ensures that accuracy, ethics and intent remain human controlled.
Theory alone cannot build confidence with generative AI. Learners need structured opportunities to experiment, reflect and apply what they have learned. Effective practice bridges understanding with performance.
Guided Prompts: As stated before, begin with instructor-led demonstrations of the P.R.O.M.P.T. Framework. Then allow learners to modify one element, such as tone or method, to observe how the model’s response changes.
AI Sandbox Sessions: Provide hands-on lab time in a secure environment where learners can test prompts freely. Remind them to avoid using proprietary data or personal information, modeling responsible experimentation.
Scenario-Based Challenges: Assign real-world tasks that allow learners to apply prompting, editing and critical thinking skills in authentic contexts. AI can also be used to generate materials for these scenarios, including simulated email requests, mock policy issues and sample datasets. This approach provides realistic yet low-risk practice opportunities and reduces the need to source proprietary information.
Peer Feedback and Reflection: Have learners exchange outputs and discuss what worked and why. Short reflections can reinforce insight, asking questions such as “How did AI help or hinder your process?” or “What prompt changes made the biggest improvement?”
Ethics in Action: Integrate short discussions about responsible use directly into exercises. Encourage transparency by acknowledging when AI contributed to an idea or draft.
Instructional Takeaway: Practice converts awareness into capability. Through guided experimentation and reflective discussion, learners move from theoretical understanding to confident ethical application.
Teaching AI literacy is not about turning employees into data scientists. It is about helping them think critically, communicate clearly and partner responsibly with technology. A well-designed beginner’s AI course built on this four-part model transforms curiosity into competence.
Your learners will leave with a clear grasp of how AI works, some guided practice and, most importantly, an understanding of why their input and judgment are essential to success.
Give learners plenty of opportunities to practice their new AI skills in low-risk environments. Examples include:
• Drafting initial versions of communication materials
• Summarizing information
• Developing and analyzing policies
• Brainstorming and other creative activities
• Designing training scenarios and simulations
• Analyzing and interpreting data
Jason Fox is a learning and development specialist who has worked for the Department of Defense Education Activity, the United States Air Force and the Environmental Protection Agency. He focuses on organizational development and designs innovative training solutions for a workforce of over 1,500. Email Jason.

If you believe the onslaught of social media headlines that say artificial intelligence (AI) is taking over everyday tasks and eliminating jobs, then you are probably wondering about your role in learning and development (L&D). Many instructional designers are already using AI in their daily work to speed up development time and enhance design quality. With the meteoric rise of AI capabilities, it’s even possible for an AI-driven avatar to replace human facilitators in virtual learning experiences. The question is: should they?
For facilitators, change has been a constant over the past few decades. The role has evolved from traditional classroom trainer to virtual presenter to hybrid learning expert. In all these settings, facilitators add value to learning experiences by delivering interactive programs and helping learners apply new knowledge and skills to the workplace. But can they maintain this relevance? With the advancement of technology, is all learning shifting to asynchronous eLearning modules and bot-driven simulations?
Before answering those questions, let’s step back to consider how AI avatars can be used in a virtual classroom. AI avatars have come a long way since the famous talking lawyer-turned-cat video that went viral in 2020. They are now realistic digital beings who appear lifelike on camera. They provide explanations, guide learners through a program and can act out scenarios in real and relatable ways. When combined with a reputable large language model (LLM) and an authentic voice, AI avatars can speak bi-directionally, meaning they can carry a credible conversation with a human. And their quality is improving with every programming iteration and new advancement in technology.
Before dismissing this technology as science fiction, consider the practical advantages already being realized. First, avatars are always available, 24/7. This is useful for global organizations that need training around the clock or those that need to rapidly scale their training programs. Next, avatars can allow for just-in-time training. They can provide feedback, which is useful when practice is needed now. Learners don’t have to wait until the next training class. For example, a manager can practice delivering constructive feedback immediately before a scheduled performance conversation. Or a project team facing a daunting challenge could hop into a collaboration room to have a facilitated brainstorming discussion.
Personalized instruction is another benefit of using AI avatars as facilitators. Tools can quickly build a tailored learning path for each learner based upon their diagnosis of an issue, suggest the right training program for that person’s needs and provide support in their native language.
While your initial reaction to an avatar facilitator might be an immediate “no,” research shows that people are becoming more accustomed to interacting with robots and artificial intelligence than ever before. Some may even prefer communicating with avatars over humans. And as they get more advanced, there may be a point soon when people are unable to tell when they are talking with a digital being instead of a live one.
But there is a downside here. Who among us hasn’t argued with Siri or Alexa and gotten frustrated from their misunderstanding? Or been redirected to the wrong place after a mispronunciation? If the interaction with an AI avatar is even a little off kilter, it can detract from the learning experience, which in turn will impact the learning outcomes.
Despite their impressive capabilities, AI avatars cannot replicate the full spectrum of human facilitation. There are four specific circumstances when human facilitators are essential for success:

1. When learners have, or have the potential for, a lack of trust in a digital avatar. When a learner discovers they are communicating with AI, they may feel like the algorithm is recording their input or tracking them. This realization, even at the subconscious level, may get in the way of learning. For example, if the training topic is HR or compliance related, there is potential for sensitive subjects. Participants may be reluctant to ask honest questions for fear of consequences. In these cases, human interaction helps increase psychological safety, which is a necessary component in learning experiences.

2. When learners would benefit from a shared debrief of the experience. As experienced facilitators already know, learning happens during moments of reflection. Stepping back to analyze their behavior, participants can learn from their reactions to varying situations. Skilled facilitators can then draw out this learning through intentionally planned debrief discussions. While an AI avatar may be able to ask questions, humans can pick up subtle nuances that may otherwise be missed.

3. When the learning is a highly emotional experience. For example, learners who are role-playing difficult conversations may be triggered by the other person’s strong reaction and response. Or learners practicing a safety process in a risky setting might become overly anxious or agitated. Having a human presence in the virtual training room can help defuse intense feelings. As technology advances and virtual learning experiences become more immersive and realistic, a skilled facilitator can help participants process these emotions, both to de-escalate the feelings and to draw out the lessons learned. And if needed, a facilitator can pause the storyline to help participants take a quick break.

4. When human intervention is required. Virtual learning experiences, by design, are social experiences. Peers work together to solve a problem, collaborate on solutions or complete learning tasks. A facilitator may be needed to control the scenario based upon participant decisions or to encourage the group to the end. For example, client service reps learning how to handle delicate customer issues in realistic situations could become discouraged and disengaged without a facilitator who picks up on the group sentiment and quickly acts to remedy it.
So, what’s an organization to do? Fully embrace the benefits of AI avatars in learning experiences? Ignore the technology and stick with human connections? The ideal solution is in the middle: use the best of both worlds. Combine the efficiency of AI with the guardrail of human input. Allow the human touch to balance the digital tech.

Here’s what that might look like in practice:
• During a human-facilitated virtual class, participants can connect with an AI avatar to get answers to common tech issues. This keeps the facilitator focused on the learning.
• As the facilitator leads group activities, an AI avatar can summarize brainstorming discussions and keep track of action items for skill implementation. Clearly delineating responsibilities between the two entities can maximize efficiencies.
Both scenarios create a comfortable partnership between the human and the AI-driven tech and let each shine in its strengths: the speed and efficiency of AI, and the nuance and intuition of the human.
Regardless of how your organization implements AI technologies into learning experiences, there is no question that the facilitator role must change to meet current expectations. Human facilitators are not being fully replaced by AI solutions, but they must adapt to them. The most important question now is how to prepare facilitators to stay relevant in modern learning environments.
The vital skills that human facilitators need to be successful today include:
1. Asking insightful questions that lead to learning discoveries. This means going beyond surface-level comprehension checks to probing questions that challenge assumptions. It also means helping learners connect new concepts directly to their workplace challenges.
2. Active listening, especially in digital environments. Virtual facilitators must pick up on vocal tones, hesitation patterns and what’s not being said, especially when visual cues may be limited.
3. Leading virtual conversations well. This includes drawing out quiet participants who may be reluctant to engage and creating a learning space where complete discussions can happen without the natural conversation flow of in-person settings.
4. Connecting learning to real-world challenges and solutions. Effective facilitators bridge theory to practice by using relevant examples, helping participants plan immediate application steps and creating accountability for learning transfer.
5. Staying abreast of, and mastering, technology tools. This means understanding not just how current platforms work but when and how to deploy updated features. Facilitators must know which tools enhance the learning experience and how to incorporate them strategically.
Organizations that invest in their facilitators as much as they invest in technology will be ahead of the strategic game. No matter what AI solutions become available, human relationships will remain at the center of effective learning. The interpersonal skills necessary to show respect, build rapport and help learners feel seen and heard will always define successful facilitation.
Reading emotion and responding with genuine empathy are irreplaceable capabilities. Remote participants, who often feel vulnerable as they learn new concepts and try new things, need facilitators who can meet them in that nuanced state and help them overcome obstacles.
These deeply human qualities cannot be programmed into algorithms. Facilitators who cultivate them will remain essential to learning success.
Cindy Huggett, CPTD, helps organizations and training professionals create engaging virtual and hybrid learning with lasting results. Email Cindy.


Which statement best reflects where you are in your career journey?
• I’m mastering the fundamentals of training management.
• I’m leading programs and want to expand my influence.
• I’m setting the direction for L&D across the organization.
Complete the quiz to evaluate your current role, competencies and goals to see if the executive level is the next step in your career!
Learn more about the Training Industry Senior Leaders Program.
TAKE THE QUIZ

By Matt Donovan and Geoff Bloom
During the first dot-com boom, businesses scrambled to understand the internet’s potential. Amazon famously took nearly a decade to become profitable, and Jeff Bezos tirelessly explained to investors why change was necessary while clinging to his vision of the future. Today, learning and development (L&D) faces a similar challenge — the challenge of producing now while planning and preparing for what’s to come.
L&D teams are on a journey to meet the expectation that they use artificial intelligence (AI) to deliver faster, better and more effective learning. At the same time, learning leaders are preparing for a future of learning that looks radically different than today’s AI strategy, which often means simply layering AI use onto legacy L&D practices. How do L&D teams and learning leaders balance these competing demands now? Can we meet today’s demands while preparing for tomorrow’s?
Learning design teams need to be comfortable with AI. They need clear usage guidelines, shared resources and functional, collaborative workflows. At the same time, learning leaders need to be preparing their teams for transformation.
To illustrate the range of AI maturity and adoption, we created the following five-phase maturity model based on our experience with clients and the broader industry. While other AI maturity models exist, our goal was to create something practical that can serve as a roadmap and provide an intentional and structured way to assess 1) where your team is, and 2) where your team needs to go.
• Phase 1. Ad Hoc: Individuals use AI sporadically and independently. There is no formal process, governance or shared knowledge across teams or the business. Individuals may have a foundational understanding of AI, but its adoption is generally low, and experimentation is informal.
• Phase 2. Exploratory: Small groups of individuals explore AI together, and they are in the early phases of sharing best practices and templates. Foundational practices like role clarity and workflow are beginning to form. The AI capabilities these teams are exploring are usually role specific.
• Phase 3. Structured: Teams adopt standardized processes for AI use. Governance, version control and ethical guidelines are established. Teams select AI tools tailored to their needs and begin making AI-informed decisions.
• Phase 4. Integrated: The organization begins to intentionally embed AI into core workflows across multiple teams, integrating AI cross-functionally across L&D, human resources (HR), compliance and IT in order to manage and optimize processes.
• Phase 5. Transformational: The organization is purposefully leveraging AI to drive strategic transformation. Enterprise-wide adoption, continuous improvement and measurable impact are now the norm. Human-AI collaboration is also normalized, and responsible AI use has become a core value.
Transformational AI use will allow organizations to become truly learner-centric by making them more agile in the face of change, more connected and more collaborative.
Ultimately, this will enable the learning industry to move from being rigid and course-centric to providing layered learning experiences that evolve over time based on individual learner needs. This means that, in Phase 5, learning practitioners will have shifted from creating static content to enabling adaptive learning ecosystems through atomic instructional design — something that, while seemingly futuristic, is becoming increasingly real.
While the model outlines the full spectrum of maturity, our work in the industry has shown us that most teams are in Phase 2 or early in Phase 3. Most AI use is still role-specific, but structure around guidelines is beginning to form and tools are being shared.
The priority for organizations in these early phases lies in embedding AI into L&D workflows, which requires process integration. For this integration to happen and to prepare for future transformation, learning teams need to operate with clarity and structure, supported by both traditional project management practices and AI-enhanced capabilities.
For your team to become less exploratory and more structured in its AI use, start with these foundational practices:
Build a shared prompt and agent repository

Why? Building a central repository for prompts and agents enables teams to share, refine and reuse best-practice queries and engineered prompts. This improves consistency, reduces duplication and accelerates development, making it easier for teams to collaborate and maintain quality across AI-driven projects.
How? Treat prompts and agents as deliverables in their own right, analogous to tools and templates in an engineering workshop, and manage them with the same attention to tolerance, wear and tear and entropy. Adopt version control for engineered prompts and agents, ensuring they are accessible and updatable by the team.
Put prompts under version control

Why? Documenting changes to prompts ensures auditability, supports refinement and makes knowledge sharing easier. Version control also helps teams revert to previous states when needed, reducing risk and improving transparency.
How? Use tools that save multiple versions of prompts and can roll back to a previous version, ensuring that the workspace or components can be worked on by multiple roles with visibility of contributions.
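For teams without a dedicated platform, something as plain as a registry file kept under ordinary source control (e.g., git) can approximate this; the repository history then supplies rollback and attribution. The structure below is a hypothetical sketch, one workable format among many:

# prompt_registry.py -- tracked in git so every change is reviewed and revertible
PROMPTS = {
    "survey_theme_summary": {
        "version": 3,  # bump on every edit; repo history preserves prior versions
        "owner": "L&D analytics",
        "text": ("Summarize the main themes in these open-ended survey responses. "
                 "Highlight the top three areas of concern."),
        "changelog": "v3 adds the top-three-concerns clause after pilot feedback.",
    },
}

def get_prompt(name: str) -> str:
    """Look up an engineered prompt by name so the team reuses one vetted version."""
    return PROMPTS[name]["text"]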
Adopt collaborative AI tools and workflows

Why? This fosters accountability and enhances teamwork, ensuring that edits and decisions are traceable and aligned with project goals.
How? Select AI tools that support collaborative design and allow for parallel projects involving both internal and client personnel. Use shared workspaces where designers and contributors can work together on outputs, with visibility into who has contributed and made changes. Ensure the workspace/components can be worked on by multiple roles, similar to how a Word document accepts tracked changes, providing transparency and traceability.
Establish data provenance practices

Why? Data provenance practices ensure that every piece of generated content can be traced back to its origin, which is critical in regulated environments and for confidential material. This practice supports compliance, builds trust and simplifies audits.
How? Include references to where combined data has come from (source acknowledgments and tracking). Validate the provenance of additions to the dataset, ensuring that all inputs are managed and traceable.


Ground outputs with retrieval-augmented generation (RAG)

Why? This approach reduces manual fact-checking and ensures AI-generated content reflects the latest authoritative information.
How? To use RAG, specify the exact data sources within the larger dataset for the AI to reference, such as internal policy documents, compliance checklists or trusted databases. Provide the AI tool with relevant documents and checklists so it can generate content that is accurate, up-to-date and aligned with internal standards.
RAG can be done either by user intervention or by using an embedded large language model (LLM) that searches and ranks documents within the dataset.
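What this looks like under the hood varies by tool, but a do-it-yourself version is small enough to sketch. The example below assumes the OpenAI Python client for both embeddings and generation, with placeholder model names and documents; it ranks internal sources by similarity, answers from the best match and cites it, which also supports the provenance practice described earlier.

from openai import OpenAI  # assumption: the OpenAI Python client

client = OpenAI()

# Placeholder internal sources; in practice, policy documents or checklists.
docs = {
    "expense_checklist.txt": "Receipts are required for all expenses over $25...",
    "travel_policy.txt": "Employees must book travel through the approved portal...",
}

def embed(texts):
    out = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return [item.embedding for item in out.data]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / ((sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5))

question = "What do I need to file an expense report?"
q_vec = embed([question])[0]
vectors = dict(zip(docs, embed(list(docs.values()))))
best = max(docs, key=lambda name: cosine(q_vec, vectors[name]))  # top-ranked source

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user",
               "content": f"Answer using only this source ({best}):\n{docs[best]}"
                          f"\n\nQuestion: {question}"}],
)
print(f"[source: {best}]\n{response.choices[0].message.content}")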
Once foundational practices are in place, the next horizon is performance enablement, where learning becomes dynamic and embedded in work. At this point, learning design teams will need to shift from creating static content to building adaptive ecosystems where learners influence outcomes. Objectives will evolve dynamically, guided by context and supported by AI-driven personalization.
While many learning teams are occupied with structured AI use (embedding it into workflows and processes), they must also be preparing for this future that will shift their role from delivering content to enabling performance.
Here’s how your team can start building the mindset needed for the future of learning:

• Embed learning in the flow of work. Integrate development opportunities directly into employees’ daily workflows. When learning happens in real time, it feels practical and immediately applicable, driving productivity, adaptability and continuous skill growth — cornerstones of performance enablement.

• Personalize learning. Design experiences that reflect individual career aspirations and encourage cross-functional exposure. Tailored learning boosts engagement, strengthens internal capabilities and supports talent retention. Personalized, purpose-driven development is essential for cultivating future leaders.
• Build future-ready skills. Focus on capabilities like adaptability, strategic thinking, digital fluency, data analytics and AI literacy. Prioritizing these areas enables L&D teams to craft data-informed, personalized learning experiences that move beyond content delivery to truly empower performance.
• Become a strategic business partner. Ensure every learning initiative is tied to organizational objectives. This alignment turns L&D investments into strategic levers that deliver measurable impact and reinforce enterprise goals.
We have been trying to solve these problems for years, so they are not new considerations in our industry by any stretch. The difference is that now, with AI, we can actually deliver on them.
AI is changing the way learning happens, and L&D teams need a clear path to adapt. Our AI maturity model provides that roadmap, guiding teams from early experimentation to full integration and strategic impact. The practical steps outlined earlier are essential building blocks. These actions help teams establish structure, improve efficiency and prepare for advanced phases where AI becomes part of everyday workflows and redefines learning as we know it.


Learning teams have two choices: embrace and harness the disruption or be disrupted. Organizations that start now will be ready to lead in the future.
Matt Donovan, chief learning and innovation officer for GP Strategies, is a recognized name in learning, bringing 25+ years of crafting learner-centric solutions and leading high-impact teams.
Geoff Bloom is a principal learning consultant at GP Strategies with over four decades of experience in learning design, scenario-based training and digital transformation. Email the authors.
• Envisioning an AI-Centric Approach to Design and Delivery
• AI Governance in Practice Report 2024
• From experiments to deployments: A practical path to scaling AI
• Reinvent or rust: The learning leader’s guide to AI maturity

By Dr. Grace Chang and Dr. Karina Freitag
Learners today face an overwhelming influx of information. Their attention is fragmented by a steady stream of notifications, emails, meetings and shifting priorities. The result is a growing cognitive load that strains the brain’s natural capacity to process and retain information.
As artificial intelligence (AI) accelerates the pace of change, knowledge workers are expected to learn faster and adapt more quickly or risk being outpaced by AI-savvy peers or even replaced by AI itself. The pressure to keep up has never been greater. Yet, the human brain wasn’t built for this level of ongoing mental strain.
So, how can we design learning that truly works?
While technology and AI are often seen as contributors to cognitive burden, they can also be powerful tools for reducing it — if used strategically. This article explores how AI can support learning designers in creating experiences that are not only more effective but also more sustainable for today’s workforce.
To acquire long-lasting knowledge, the brain must successfully go through two key steps: encoding and consolidation.
1. Encoding is the process of transforming information into a format that the brain can store. This process is facilitated by working memory, which provides a mental workspace for holding and manipulating information we are currently focusing on. When we actively engage with this information by repeating it to ourselves or connecting it to things we already know, we increase the likelihood of successful encoding.
2. Consolidation is the process of stabilizing and integrating that information with existing memories stored in long-term memory. It transforms something temporary into durable memories that can be later retrieved and used. Even when encoding is successful, most information is quickly forgotten unless it undergoes consolidation.
As learning designers, we can shape experiences that support both encoding and consolidation, making it more likely that new information is truly learned. One powerful way to do this is by designing with cognitive load in mind.
Cognitive load is the mental effort it takes to process information in our working memory. Think of working memory not as a spacious warehouse, but as a small bag that can only hold a few items at a time. For example, when we try to absorb a new concept during a training session while also replying to emails, tracking upcoming meetings and mentally preparing for a presentation later in the day, our mental bag — our capacity to process and retain information — quickly becomes overloaded. When the cognitive load is too high, meaning a task or learning experience demands more than our mental bag can carry, things start to spill out.
High cognitive load is the norm today, resulting in mental fatigue, fragmented attention and learning that doesn’t stick. By designing learning experiences that manage cognitive load, we can help learners focus on what matters, retain information and apply it when it counts.
AI presents a paradox for today’s learners. On one hand, it contributes to cognitive burden, accelerating the pace of change and requiring constant upskilling on new tools, content and processes. This adds to an already overloaded mental workspace.
On the other hand, when used intentionally, AI can help alleviate that very burden. Learning designers can leverage AI to personalize learning, reduce unnecessary complexity and surface what matters most, making it easier for the brain to encode and consolidate new information.
Rather than overwhelming learners with more AI-generated content, we can position AI as a strategic partner: one that filters out noise, directs attention to what matters and helps build learning experiences that align with how the brain learns best. Let’s explore how.
1. Prioritize the essential.
Focus on what matters: Learning objectives act like a packing list for a trip. Just as you wouldn’t pack snow boots for a beach vacation, you shouldn’t include content that doesn’t directly support the learning goal. When objectives are specific and concise, such as “describe what makes feedback effective,” they help sort through content and eliminate what’s unnecessary. This reduces extraneous cognitive load by narrowing the focus to what truly matters.
AI tools can help you prioritize the most relevant content by reviewing your objectives and flagging content that may be redundant or off-topic.
Use this AI prompt: “Analyze this document and highlight only the sections that directly support the learning objectives: [specify the learning objectives]. Remove any content that doesn’t align with these objectives.”
2. Highlight relevance.
Make it personally relevant: Content that feels personally relevant to the learner is more likely to be processed deeply. When learners recognize how content connects to their own goals, experiences or challenges, it becomes more meaningful. That relevance activates their intrinsic motivation, which enhances focus and supports deeper encoding.
Use this AI prompt: “Generate a ‘what’s in it for me’ introduction for managers and individual contributors learning these skills.”
Make practice realistic: Practice helps consolidate learning, but not all practice is equal. Learners are more likely to stay engaged and remember what they’ve learned when they can apply new knowledge in realistic, varied situations that feel relevant to them. Practicing in authentic scenarios not only reinforces understanding but also strengthens memory consolidation. This kind of practice supports long-term learning and improves the transfer of knowledge to real-world situations.
Use this AI prompt: “Generate three realistic workplace scenarios where learners can practice using the feedback model from this module. Each scenario should describe the context and include a specific situation that requires giving feedback.”


3. Structure for clarity.
Organize information: Our brains crave patterns and predictability. When learning flows logically and is chunked into digestible segments, cognitive load is reduced and it’s easier for our brains to digest new information and integrate it with existing stored knowledge. Ways to organize information include chunking and scaffolding.
• Chunking is the practice of grouping information. For example, if you’ve changed a lightbulb many times before, you don’t need to remember each step individually; your brain has grouped them into a single schema: change lightbulb.
• Scaffolding is a way to prioritize and sequence information so that foundational knowledge comes first, followed by new or progressively more complex concepts. This structure can happen within the same learning session or across multiple sessions. For example, think of scaffolding like constructing a building: you start with a solid foundation before adding subsequent floors.
Use this AI prompt: “Group this information into 3-5 modules, each covering a single main idea. Suggest clear headings and subheadings. Then, sequence the modules so that the learning starts with basic or foundational concepts, and then builds toward more complex or advanced ideas.”
Reduce friction: Processing new information already requires effort. If the presentation is cluttered, confusing or overly complex, it adds unnecessary load. Reducing friction means making content easy to scan, visually clean and intuitively structured, freeing up cognitive resources for actual learning. For example, using clear fonts, highlighting key words and limiting the number of visuals per slide can reduce extraneous cognitive load.
Use this AI prompt: “Rewrite this section for a nonexpert audience, simplifying technical terms and clarifying instructions.”
Connect to familiar language and concepts: When learners can connect new information to something they already know, it becomes easier to understand and remember. This is because the brain links new knowledge to existing networks of prior knowledge and experience, known as schemas — cognitive frameworks that help organize and interpret information. Linking learning to existing schemas reduces unnecessary mental effort and allows learners to focus on what truly matters.
Use this AI prompt: “Review the following content: [insert content or describe topic]. Suggest ways to connect this content to what learners likely already know based on their persona: [describe audience persona]. Focus on using familiar language, relevant scenarios and examples from daily experiences. Your goal is to make the content feel relatable and easy to understand by linking it to their existing knowledge and experiences.”
4. Design for attention.
Incorporate restorative breaks: Sustained attention is limited. Our brains’ prefrontal cortex (PFC) — an area especially evolved in humans that is responsible for attention, critical thinking and decision-making — easily fatigues. Without breaks, learners lose focus and learning suffers.
Restorative breaks are essential for maintaining attention, but there’s no universal formula for how long or how often they should be. The ideal break depends on many factors, such as prior knowledge, expertise, individual differences, learning modality and how much mental effort the learning requires.
Activities that evoke fascination, comfort, joy or physical movement can help restore cognitive function and reduce mental fatigue. Know your audience and adjust breaks based on their needs.
Use this AI prompt: “Review this two-hour workshop agenda and recommend where to insert breaks to maximize attention based on the audience’s needs [insert description].”


Designing with cognitive load in mind isn’t about making learning effortless — it’s about making it purposeful. The goal should be to help ensure that learners’ limited cognitive resources are spent engaging with meaningful content, not wasted on navigating poorly designed learning experiences.
While AI can contribute to an overwhelming cognitive load, it also offers powerful tools to reduce it — if used strategically. By combining behavioral science with the strategic use of AI, we can create experiences that are not only brain friendly but also relevant, efficient and impactful.
The views reflected in this article are the views of the authors and do not necessarily reflect the views of Ernst & Young LLP or other members of the global EY organization.
Dr. Grace Chang, associate director of behavioral science and insights at Ernst & Young LLP, is a cognitive neuroscientist specializing in learning, behavior change, and leadership development. She leverages her research and industry expertise to design strategies that drive meaningful change and explores how AI-human interactions will shape the future of learning and work.
Dr. Karina Freitag, assistant director and senior researcher of behavioral science and insights at Ernst & Young LLP, is an organizational psychologist who applies behavioral science to design evidence-based learning and leadership solutions. Her work also explores AI-human interaction to advance workplace effectiveness and innovation. Email the authors.

• Start by identifying what needs improvement. Audit your current learning programs for cognitive load risks — too much content, poor structure or lack of engagement.
• Use a structured, research-based approach to inform design. Apply the EY Cognitive Load Learning Design Framework to guide decisions across content, structure and engagement. Adapt it as necessary, and share your own leading practices with others.
• Leverage technology to enhance efficiency and relevance. Use AI tools to streamline content, personalize learning and optimize delivery.
• Get hands-on with AI to improve specific design elements. Experiment with AI prompts to chunk content, simplify language and generate realistic practice scenarios.
• Advocate for brain-friendly learning in your organization. Promote learning design that respects attention, supports retention and drives performance.

Training Industry Special Report
BY TOM WHELAN, PH.D.
In today’s fast-paced workplaces, artificial intelligence (AI) is not just changing what we learn — it’s transforming how we learn.
The sentiment in the previous sentence is undeniably true, but it likely falls flat to many ears because it sounds (and looks) like it comes straight out of AI. For many of us, our encounters with “AI slop” have made us skeptical of any content that sets off alarm bells. This is a critical subtext of any discussion about the impacts of AI on learning, because it’s not all uniformly positive.
The real question isn’t whether AI has changed training content or delivery — it has, in many organizations. What matters is understanding why training needs to evolve and how you’ll measure and communicate its value. For corporate learning and development (L&D) professionals, this moment demands more than adaptation. It often calls for a reappraisal of the entire learning ecosystem, not necessarily with the intent to overhaul everything but certainly to entertain the possibility.
AI’s impact on corporate training is multifaceted, whether we would like to admit it or not. It reshapes learning needs across the organization, redefines the learner experience and introduces new challenges (and sometimes opportunities) for evaluating learning outcomes.
Organizations are moving at different speeds when it comes to AI in training, but momentum is building.
By the numbers:
38% have an AI-enabled authoring tool
14% use agentic AI solutions
25% are actively using AI in training
22% are experimenting with AI
8% do not plan to adopt AI for training
Understanding these shifts is essential for L&D leaders who want to stay ahead of the curve. Because it’s too easy to invoke a broad “AI transformation” only to have our focus narrow, to our own detriment, to courting a specific and usually limited set of concerns.
Let’s talk about three major categories of impact L&D leaders need to pay attention to in 2026 if they aren’t already:
The first and most visible impact of AI is on the nature of learning demand. As AI tools become ingrained in workflows, from marketing and finance to HR and operations, employees across all functions need new capabilities. But using AI tools isn’t a straightforward technical skill that you can throw an eLearning module at and check off the box. Employees using these tools need deeper discernment of what they’re doing beyond the base level of, “I type a question, I get an answer.”

[Figure 1. Perceptions of AI by industry, ranging from “AI is disruptive and creates uncertainty and job insecurity” and “AI is viewed with skepticism and concern about its impact” to “AI is a neutral tool — its value depends on how it’s used,” “AI is generally viewed positively, with some reservations” and “AI agents will enable efficiencies and better business processes.” Industries surveyed: Technology/Telecom (N = 230), Banking/Finance/Insurance (N = 114), Health Care/Medical/Pharma (N = 202), Government (N = 90), Durable Goods/Consumables (N = 192), Manufacturing (N = 78).]
These knowledge and skills include things such as AI literacy: a sense of understanding what AI is, how it works and where it can be applied. It includes knowing how to effectively engage with generative AI tools as well as, perhaps most importantly, knowing how to iterate. There’s also a critical need for evaluating AI outputs for accuracy, bias and relevance — sometimes even (gasp!) checking sources. Not to mention ethical reasoning, such that employees can grapple with the responsible use of AI in decision-making. For as much as AI can accelerate and automate, employees need to be wary of when to set boundaries.
Further complicating matters is the unassailable fact that not all employees are going to use AI in the same manner or harbor the same attitudes. Not across roles, not even necessarily within roles — and the same is true of industries, as seen in Figure 1. In short, the learning needs borne of AI are absolutely more nuanced than an enterprise-wide, one-size-fits-all strategy would suggest. This means L&D must move beyond siloed technical training and toward cross-functional, role-specific learning pathways that integrate AI into the context of real work. It also means anticipating learning needs that may not yet be fully visible, such as the ability to collaborate with AI agents or manage hybrid human-AI teams.
For learners, AI is both the subject of training itself as well as the engine behind a new generation of learning experiences. It’s easy to get this turned around, though, as learning about AI is not at all the same thing as learning through AI. Adaptive learning platforms, intelligent tutoring systems and generative content tools are starting to enable more personalized, responsive and scalable training than learners have ever experienced before.
For the employees being served such training, this can significantly change their experience of learning, regardless of their actual enthusiasm for AI (or lack thereof). For instance, using AI for personalization can tailor content, pacing and feedback to individual needs and preferences. AI-powered systems can surface relevant learning resources at the moment of need, embedded in the flow of work. Chatbots and virtual coaches can provide on-demand support, simulate scenarios and guide reflection.
These innovations promise to increase engagement, reduce time to competency and support continuous learning. However, these also require L&D teams to develop new capabilities in instructional design, data governance and platform integration. Otherwise, the avenue through which the innovations make good on their promises is closed off, or at least obstructed. And further down this continuum, badly mismanaging these AI-powered experiences erodes the credibility of the training itself and the organization purporting to support it.
Perhaps the most quietly profound shift caused by AI is in how we think about quantifying the impact of learning. Traditional evaluation models, such as Kirkpatrick’s four levels, remain useful but they are largely insufficient for accounting for the role of technology. AI-enabled learning demands a more dynamic, data-rich approach to impact measurement.
For instance, in addition to improving training evaluation writ large, L&D must track AI adoption rates and find means to link performance improvements to AI use. There’s also a paradoxical issue of data integration: AI systems generate rich data trails. When integrated with business systems (e.g., customer relationship management and enterprise resource planning), these can reveal relationships between learning and outcomes such as productivity, innovation or customer satisfaction. Without careful integration, however, this mountain of data becomes increasingly impossible to summit.
Importantly, AI can assist in the evaluation process itself. Natural language processing can analyze qualitative feedback at scale, while machine learning models can identify patterns in learner behavior that might predict success or risk on the job. Note that the key word here is “assist,” as even the best enterprise-grade AI models are still too prone to errors and shouldn’t be blindly trusted to make decisions based on evaluative data about employees and their performance. In essence, this highly advanced plane still requires a skilled pilot.
Over the past several years, the impact of AI on corporate learning hasn’t been linear; it’s exponential. As its capabilities evolve, so too will the expectations placed on L&D. To meet this moment, L&D leaders must invest in their own AI literacy to make informed decisions about tools, strategy and ethics. They also need to collaborate across functions to align learning with business transformation efforts so that AI isn’t forced into corners where employees don’t want it. By the same token, L&D has the opportunity to experiment boldly with new learning formats, platforms and evaluation methods, moving the organization towards a learning culture that embraces curiosity and responsible innovation.
What’s often overlooked, however, is the “what will it take to get there.” The gleaming allure of AI can obscure the deep roots that must be laid before meaningful transformation can take hold. These initiatives are seldom inexpensive, and the returns are rarely immediate. They require patient investment and a long-view mindset. Framed this way, the shifts in learning needs, experiences and metrics become essential groundwork for enabling AI’s real value.
AI will never replace L&D, but it will dramatically redefine its value. The organizations that thrive will be those that treat employee learning less as a cost center to be nickeled and dimed and more as a strategic enabler of AI readiness, resilience and reinvention across the business.
Tom Whelan, Ph.D., is the director of corporate research at Training Industry, Inc. where his work focuses on learning and development research to inform best practice recommendations. Email Tom.

BY LACY THOMPSON AND JAMIE MCAVOY
When I became a first-time parent three years ago, I started with a well-meaning, naive expectation that I would be ultimately responsible for teaching my daughter everything she needed to know. On top of that, I was convinced I would need to deliver the teaching in a structured way to best support her development.
As is obvious to anyone who has spent time teaching or raising children, this couldn’t have been further from the truth. Don’t get me wrong; my husband and I still do our fair share of instruction (we are both educators, after all). But most of my daughter’s best learning
seems to come from eavesdropping on adult conversations, bedtime storybooks or YouTube videos — never a dedicated “lesson plan.”
TODAY’S AI TOOLS MAKE IT POSSIBLE TO RETURN TO HIGH-PRODUCTION-VALUE CONTENT, BUT WITH SIGNIFICANTLY STREAMLINED PRODUCTION METHODS.
Adult learners have different needs than toddlers, to be sure, but this experience with my daughter reinvigorated my love for storytelling and my deep appreciation of how effective narrative-based learning can be.
Before we explore how artificial intelligence (AI) can help here, let’s ground ourselves in why narrative matters for learning. Here are three design anchors I rely on when I tap into storytelling for learning programs.
1: STORIES BUILD MENTAL MAPS LEARNERS CAN APPLY LATER
When information is wrapped in a story, learners do more than memorize facts; they build a picture of what is happening, who wants what and why steps unfold in a certain order. That “situation map” makes details easier to recall and apply when the context changes.
2: STORIES LIGHTEN THE MENTAL LOAD
Working memory is limited. A coherent narrative connects the dots for the learner so they spend less effort trying to figure out how pieces fit and more effort building understanding that
sticks. I like to do this by replacing bullet fragments with short anecdotes that show cause and effect. I also try to keep visuals and text tightly aligned to reduce mental clutter and free up capacity for real learning.
3: STORIES SPARK CURIOSITY THAT DRIVES MEMORY
Good stories create questions: What happens next? Why did that choice backfire? That tiny itch of curiosity sharpens attention, and moments of surprise help lock in what follows. I’ve found that a quick cliffhanger before providing a rule or principle often yields stronger retention than an information dump. The key to capturing curiosity is demonstrating the consequences. That’s where the learning happens.
In the training world, telling great stories can feel impractical. Learning designers may struggle to find the right message and voice to land on a story that resonates with learners. Maybe the story drifts from what learners care about, or it reads like marketing copy instead of real work. Even when the message lands, it’s difficult to bring it to life in a compelling way. Visuals and multimedia assets take time and specialized skills (backgrounds, characters, timing, alt text), and in the rush to get training out quickly, the story often gets trimmed back to stock assets and loses its punch.
Consider one team that poured weeks into bespoke visuals and long scripts, yet learner surveys flagged the modules as forgettable and participation lagged. Completion was low, replays were rare and learner experience metrics suffered. They needed a way to improve the experience — not just make more assets.
Truly creative, media-rich training programs are cost and time prohibitive for many organizations. About a decade ago, I had the opportunity to work with a client creating a sales training video series shot on the set of NBC’s The Voice. It was as incredible as you might imagine, a project with all the bells and whistles: professional actors, full film crew and months of scripting, editing and pre-production. The outcome was outstanding but impossible to replicate for other clients, especially as budgets get tighter and organizations prioritize more cost-effective methods over the big splash.
Now the game has changed. Today’s AI tools make it possible to return to high-production-value content, but with significantly streamlined production methods. We can go beyond talking-head avatars to fully customized short films: scene-by-scene stories with on-screen action, dialogue and stunning cinematography in line with consumer-grade expectations. And we can do it faster and cheaper than ever before.
But where to begin? Not every team is ready to deploy agents or rebuild workflows, and that’s okay. With tools you likely already have (ChatGPT, Microsoft Copilot, etc.), you can start small and turn flat outlines into story-driven practice, leveraging AI to speed up the old work of scripting, branching and asset creation while you spend more time on narrative craft. Use large language models (LLMs) to craft fast hooks and micro-stories that reach the first choice by the 20-second mark. Pose a “what happens next?” question, reveal consequences and keep the tone conversational and globally translatable.
Adopt a “story-first, assembly-light” workflow: run a story pass on existing outlines, implement decision points in the videos, and aim to hit the first decision by ~20 seconds. Use AI to generate short, on-brand and cinematic sequences instead of static stock. For the team mentioned above, the results were:
• Learner experience rating: +0.8 (on a 5-point scale)
• Voluntary replays: 2.1X
Imagine opening an old subject matter expert (SME) outline that has bullet fragments and policy notes and dropping it into an LLM with a few simple prompts. In minutes, that flat outline comes back as a short scene with a clear goal, a snag or a decision point for the learner.
Here’s what that transformation might look like, with a small scripted sketch after the examples:
• Flat: “Watch this video of a coaching conversation.”
• Story: “Your rep missed target and is defensive — open with one question that lowers heat and surfaces causes.”
• Prompt: “Produce three openings that reduce defensiveness (empathy, curiosity, data). For each, give a likely response and a one-line debrief learners can remember.”
• Flat: “Follow this standard operating procedure to submit an invoice.”
• Story: “You are trying to help the invoice ‘get home.’ Pick the next step that keeps it moving.”
• Prompt: “Generate three branches that expose common mistakes without changing the facts. Label the misconception each branch reveals and add a plain language debrief.”
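For teams that would rather script this step than paste prompts by hand, here is a minimal sketch using the OpenAI Python client. The model name, system framing and outline text are illustrative assumptions rather than the authors’ setup; the same prompts work pasted directly into ChatGPT or Copilot.

```python
# A minimal sketch: turning a flat outline into a story-driven branching scene.
# Model name and prompts are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

flat_outline = "Follow this standard operating procedure to submit an invoice."

story_pass = (
    "Rewrite this training outline as a short scene with a clear goal and a "
    "decision point the learner reaches within 20 seconds. Then generate three "
    "branches that expose common mistakes without changing the facts. Label "
    "the misconception each branch reveals and add a plain-language debrief.\n\n"
    f"Outline: {flat_outline}"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model; use whatever your organization licenses
    messages=[
        {"role": "system", "content": "You are a learning designer who writes "
         "conversational, globally translatable scenario scripts."},
        {"role": "user", "content": story_pass},
    ],
)
print(response.choices[0].message.content)
```

The point is not the tooling; it’s that the story pass becomes a repeatable step rather than a one-off craft project.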
There are many ways you can point AI at the assembly phase so the narrative survives. I’ve seen success with developing on-brand backgrounds, generating consistent character poses, drafting alt text that truly describes what’s on screen, and writing introductions and transitions. You keep the emotional beats and the debriefs; AI keeps the production treadmill moving. This is where the speed-up matters — not as the headline, but as the enabler of a more engaging, story-first experience.
AI can also help us move beyond passive “learning widgets” and into human-sounding two-way experiences that breathe new life into practice and role-play. The key is how well you train the system to speak like a person your learners trust.
In simulated dialogues, awkward role-plays give way to a partner that listens, pushes back and debriefs in plain language, whenever people need it. Trained on your personas, goals and tone, it becomes a believable counterpart that scores performance against clear criteria and coaches toward better decisions.
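As a sketch of how such a practice partner could be wired up, the snippet below keeps a persona in the system prompt and scores each learner turn against a simple rubric. The persona, rubric and model name are hypothetical placeholders; a production version would use your real personas, criteria and safeguards.

```python
# A minimal sketch of a persona-based practice partner with a scoring rubric.
# Persona, rubric and model name are hypothetical placeholders.
from openai import OpenAI

client = OpenAI()

PERSONA = (
    "You are 'Dana,' a skeptical procurement lead. Stay in character, push "
    "back on vague claims and keep replies under 80 words."
)
RUBRIC = (
    "Score the learner's message from 1-5 on empathy, clarity and use of "
    "evidence. Give one sentence of coaching per criterion."
)

history = [{"role": "system", "content": PERSONA}]

def practice_turn(learner_message: str) -> tuple[str, str]:
    """Return the persona's in-character reply and a rubric-based score."""
    history.append({"role": "user", "content": learner_message})
    reply = client.chat.completions.create(
        model="gpt-4o-mini", messages=history  # assumed model
    ).choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    score = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": RUBRIC},
            {"role": "user", "content": learner_message},
        ],
    ).choices[0].message.content
    return reply, score

reply, score = practice_turn("Our tool cuts onboarding time in half. Demo?")
print(reply)
print(score)
```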
In the flow of work, AI will keep information from dying in folders and binders. An AI agent inside everyday tools surfaces a quick example, a policy-aware tip or the next best action — short, specific and in your voice — so guidance feels like a trusted colleague at the exact moment it matters.
THE KEY TO CAPTURING CURIOSITY IS DEMONSTRATING THE CONSEQUENCES. THAT’S WHERE THE LEARNING HAPPENS.
LOOKING AHEAD: WHAT WILL YOU DO?
Story is the engine; AI is the turbo. You do not need a new platform or a moonshot to make a difference. Use the tools you already have to turn flat content into narrative-based training that is as effective as it is memorable.
Lacy Thompson, chief learning officer at Unboxed Training and Technology, partners with leaders from top companies across industries to create dynamic, integrated learning ecosystems.
Jamie McAvoy is a senior learning experience designer at Unboxed Training and Technology. He’s an industrial/organizational psychologist, applying data- and science-backed methods to drive learning and business outcomes. Email the authors.

BY DR. MELISSA L. BROWN
The artificial intelligence (AI) training happening right now in your organization likely has no curriculum, no approval process and no learning and development (L&D) involvement. Employees are teaching themselves from YouTube tutorials, Slack channels, trial and error and something they glimpsed on Reddit. And it’s producing real consequences that are only beginning to surface.
Three-quarters of employees now use AI at work, most without training, many on personal accounts and almost all without L&D involvement. This shadow innovation creates real solutions alongside security exposures, capability gaps and competitive disadvantages that remain invisible until something breaks.
Security risks show up first. A manager learns from a forum post that ChatGPT can “clean up” memos and pastes in a draft client communication, sending privileged information into a public model. A sales rep discovers a prompt that generates proposals in minutes and starts uploading customer data without thinking twice about where it goes. An HR specialist feeds performance review notes into a public AI tool to help draft difficult conversations.
Fifty-seven percent of workers admit to entering sensitive information into AI tools, and most have no idea what happens to that data once they hit submit.
Capability gaps develop more quietly. Early adopters become AI power users while colleagues fall progressively behind.
One analyst completes market research in hours using AI while a teammate spends days manually building spreadsheets. A customer service specialist resolves tickets twice as fast as peers. These gaps go unnoticed until they’ve widened beyond easy correction.
Struggling users stay invisible. They don’t show up as “needs training.” They appear as people who “prefer the old way” while colleagues automate the same work in half the time. Without deliberately watching for warning signs, these employees will fall behind until the gap becomes a performance problem.
You can’t effectively train people on something they’ve been doing independently for six months or two years. You can, however, help them get better results without the hidden risks.
The coaching moments that matter happen when someone receives a plausible but incorrect answer and can’t tell if it’s right or when a team debates whether to include client data in a prompt. These situations need judgment under uncertainty, which is what L&D infrastructure exists to support.
Start with your early adopters. They’re already your best teachers. Find out what they’ve discovered, help them share insights with peers and pay attention to their mistakes because failures reveal risk patterns worth tracking.
Once you understand what’s happening, build coaching into existing work rather than creating separate training events. When someone shares an AI-generated analysis during a meeting, reinforce verification practices right then. When a project team discusses incorporating AI into deliverables, walk through appropriate boundaries in the moment. Context-driven coaching creates better habits than formal sessions scheduled months later.
THREE-QUARTERS OF EMPLOYEES NOW USE AI AT WORK, MOST WITHOUT TRAINING, MANY ON PERSONAL ACCOUNTS AND ALMOST ALL WITHOUT L&D INVOLVEMENT.
Traditional training cycles are too slow for this. By the time you finish building a module, employees have already moved on to the next use case. Instead, track the questions that keep coming up: How do I fact-check this? When should I tell someone AI wrote the first draft? How do I know if the quality is good enough? Answer those questions quickly,
in whatever format gets people the information fastest.
Focus on pattern recognition over tool training. Help people recognize when AI accelerates work versus when it creates risk. Teach them to spot hallucinations, know when human expertise is essential and understand the difference between AI assistance and dependency. This judgment transfers across tools and stays relevant as technology evolves.
Finally, make expert review easy to access. People need simple ways to get human validation for AI-assisted work without formal approval chains. The goal is to make “can you take a quick look at this?” a natural workflow step rather than an administrative burden.
AI capabilities evolve at a pace traditional learning cycles cannot match, and L&D teams that can’t adapt risk becoming irrelevant. While we’re building compliance modules, executives may already be turning elsewhere for strategic guidance on AI capability development.
Build regular feedback loops that show you how AI use is actually changing across teams. Skip the detailed policy manual and focus on flexible guidelines based on risk, not restriction. Anything rigid will be obsolete in six months anyway. The firms getting real results from AI aren’t just rolling out new tools.
They’re redesigning work to make it safe for people to experiment and learn as they go. That shift needs L&D to lead it, not just document what happened.
The choice is ours. We can lead the capability development that shapes how AI changes work, or we can document what happened after others made the critical decisions.
L&D has a rare opportunity to shape one of the most significant workplace transformations in decades, but only if we abandon the illusion of control.
Your people are already learning AI through daily use. The question we face now is whether we’ll guide people while they’re forming habits or show up six months later with compliance training that addresses yesterday’s concerns.
When people have access to tools that make their work easier, they’ll use them. What’s missing from self-directed learning is the judgment about appropriate use, context for understanding limitations and support for making sound decisions when the right answer isn’t obvious.
Training is happening whether we participate or not. If we guide it thoughtfully, we shape how work gets done. If we document it afterward, we’re just capturing how work used to get done. Only one of those paths creates lasting value.
THE CHOICE IS OURS. WE CAN LEAD THE CAPABILITY DEVELOPMENT THAT SHAPES HOW AI CHANGES WORK, OR WE CAN DOCUMENT WHAT HAPPENED AFTER OTHERS MADE THE CRITICAL DECISIONS.
Dr. Melissa L. Brown holds an Ed.D. in Organizational Leadership. Her dissertation, The AI Innovator’s Dilemma, examined how professional services organizations navigate AI transformation when employees innovate faster than formal structures can adapt. Email Melissa.
When employees learn AI without organizational support, three things happen.
1. Security exposure becomes impossible to measure. Personal accounts and public models mean data leaves the building without anyone tracking where it goes.
2. Capability gaps compound over time. Early adopters get exponentially more productive while others fall behind, creating performance disparities that look like individual failures rather than what they actually are: systemic learning gaps.
3. Innovation gets lost. The best AI use cases your organization will discover are happening right now in shadow experiments. Without L&D capturing and scaling what works, those innovations stay siloed with individual users instead of becoming organizational capabilities.
Support employee-led AI learning now, while you can still shape how it develops.
Tune in to Training Industry’s award-winning podcast to hear conversations with industry experts and thought leaders on trending topics such as:
• How to tackle common training challenges.
• Training measurement and evaluation.
• Leadership coaching and development.
• Strategic alignment and planning.
• Learning technologies and tools.
• And much more!
Enjoying the podcast? Let us know! Leave a review on your favorite podcast app.
Have an idea for a future episode, or would you like to be considered as a guest speaker? Email us at editor@trainingindustry.com



DR. KRISTAL WALKER, CPTM
Artificial intelligence (AI) is impressive. It enables learning and development (L&D) practitioners to create courses in seconds, complete with well-crafted outlines, objectives, knowledge checks, metrics, job aids and other essential training elements. From a technical perspective, AI can become a trainer’s superpower.
But we know there is so much more to making an impact in training than an impressive design structure. Behind every training experience is a person with unique motivations, experiences and emotions that no algorithm can fully comprehend. While AI can personalize what people see, it cannot personalize what people feel.
Here’s our opportunity: We must reclaim the human role in AI-assisted learning.
Efficiency has always been a priority in the world of L&D. However, when we treat efficiency strictly as a transactional experience to meet deadlines, we miss the opportunity for meaningful learning moments.
Creating content faster does not necessarily mean it is of higher quality. For example, AI can generate case studies for your training, but the output could potentially reinforce stereotypes. AI can provide automated feedback, but the responses generated could be sterile or dismissive.
When you facilitate a live instructorled session and one of your learners is struggling, it’s you who can most effectively discern these variables and pivot accordingly. Efficiency certainly has its place in L&D, and AI can support it; however, we should question whether efficiency alone is the goal.
Despite the growing concern that AI may replace humans, the reality is that it cannot. If we are not intentional about remaining curious, present and engaged in our roles, we risk losing the critical thinking that makes our work valuable.
The good news is that there are practical ways to sustain our humanity in the workplace learning environment. Here are three methods:
1: PRACTICE EMPATHY
Allow yourself to feel or experience what the learner may be feeling or experiencing. Ask yourself:
• Does this activity make sense for someone who’s overwhelmed or not familiar with this content at all?
• Is the instruction clear?
• Is the tone supportive or offensive?
• Have I considered the emotional impact of training lessons, scenarios or feedback messaging?
AI may help identify patterns, but empathy enables us to consider different perspectives so that learners connect with the training.
2: BUILD IN REFLECTION
Incorporate reflective activities into training to give learners time to internalize what’s being taught. For example, they may need time to ask themselves:
• Is this something I experience in reality?
• What would I do in a similar situation?
• How might I respond if someone close to me had to navigate that experience?

A simple journal prompt followed by a well-facilitated discussion helps keep the learning grounded and helps learners apply the training to their own lived experiences.
3: DESIGN WITH INTENTION
When we’re under pressure, we can fall into the trap of creating systems for convenience. But if we want our training to have its intended impact, we must be intentional. A few questions you might ask yourself:
• If I removed the time pressure, would I design this training differently?
• Is there any part of this training where I am over-relying on AI?
• Does this content reflect how people actually learn and work, or does it feel generic?
• Where can I add a human touch that AI would not naturally incorporate?
Ultimately, our focus should always be on upholding the integrity of our craft, even in the face of pressure from stakeholders.
AI is a powerful tool in workplace learning, but it’s just a tool. The future of great learning experiences belongs to leaders who understand how to blend efficiency with intention and technology with humanity.
Dr. Kristal Walker, CPTM, SHRM-CP, is the vice president of learning and talent development at Sweetwater Sound. Kristal is also a facilitator for Training Industry’s Certified Professional in Training Management (CPTM) program. Email Kristal.
Hosted by international thought leaders, Training Industry webinars provide timely and strategic information about the business of learning. Stay up to date on the latest trends to enhance your training strategies. Ready to leap into a galaxy of knowledge? Explore free webinars and on-demand content at www.trainingindustry.com/webinars.

Whether you love it, hate it or are somewhere in between, it’s clear that artificial intelligence (AI) is changing the way we work. And it isn’t just the case for technical work; leaders find themselves at the forefront of AI adoption and implementation in a variety of ways.
Leaders are AI users: We see leaders using AI tools to communicate with their teams, manage their email and support their own development as they work through day-to-day challenges. Building familiarity with these tools and how they work early on can give you the confidence to support your team in doing the same.
Leaders are the face of change: Often, we are the ones informing our teams and getting their buy-in on any AI tools or processes our organizations want to take advantage of. In some cases, this can be a difficult task. We’ve all been in conversations where even the mention of AI is met with grumbling, anxiety and even denial.
We’ve also seen the other side: team members who can’t wait to get their hands on the newest tools and start innovating. Guiding your team through this range of initial reactions is complex. It requires you to build trust, set expectations, practice empathy and communicate clearly, giving your people what they need when they need it.
Leaders are builders of tomorrow’s foundation: AI is already changing the skills we need in the workforce today,
and we can expect even more change in the future. As leaders, we play a critical role in preparing our teams and organizations for that future. It’s important to know what those skills are and how to develop them. So, let’s focus there.
At a recent Frontiers of Business Conference, AI took center stage on the panel “Bridging the Skills Gap: Aligning Workforce Skills with Labor Market Demands.” Throughout the event’s presentations and conversations, the picture became clear: There is a serious skills gap in today’s workforce, and it will only grow if we don’t take action now.
Here are the skills you need to build on your teams.
• Communication: As we increasingly rely on technology, it is human skills that will set top performers apart. Effective communication leads to fewer mistakes, more efficiency and higher engagement.
• Problem-solving: The problems we face in the workforce will only grow more complex. AI often lacks the context and creativity that real people bring to the table. It’s not just about “knowing your stuff” — it’s about knowing how it applies and why it matters.
• Adaptability: We are entering a new era of technology that allows for rapid change at a pace we’ve never seen before. The teams that will perform the best are the ones that can adapt and pivot quickly in response.
• Discernment: If you’ve spent enough time with AI, you know that its “intelligence” only goes so far. Without expertise, people can take misinformation from AI and run with it. This can lead to embarrassment or disaster depending on the stakes. This is why critical thinking and expertise in your field will continue to be necessary even if AI takes over some of the technical tasks.
BUILDING FAMILIARITY WITH THESE TOOLS AND HOW THEY WORK EARLY ON CAN GIVE YOU THE CONFIDENCE TO SUPPORT YOUR TEAM IN DOING THE SAME.
Assess these skills within your team. What level of ability are they currently demonstrating? If you’re seeing a gap, that needs to be a focus of development moving forward.
As leaders, we can influence how our people experience this major change in the workplace. You can set your team up for success by being proactive and equipping them for the future of work.
Marshall Goldsmith is the world authority in helping successful leaders get even better. Suzie Bishop is the vice president of product development at The Center for Leadership Studies. Email Marshall and Suzie.


Read the reviews of learning and development (L&D) professionals who achieved their career goals with Training Industry’s Certified Professional in Training Management (CPTM™) program.
“I am new in the role, as the first plant training manager for a national food manufacturing brand. The knowledge and tools I have gained during the process of acquiring my CPTM [credential] have provided me with both the framework and confidence to structure and develop a high-performing training and development program that is strategically aligned with our operational objectives.”
— Dave Stewart, CPTM, plant training manager, McKee Foods
“I don’t have the traditional educational background that most professionals do but have an extensive training background from my time in the military. For five years, I was having a difficult time overcoming the educational gap and translating my military experience into a training position. I’m proud to say that only two months after obtaining my CPTM certification, I was offered and accepted a training position!”
— Wade Watson, CPTM, supply chain operations training and development specialist, SCA
Your career goals are waiting! Get started on your path to L&D success with the CPTM program.
“I had been in the Training industry for over 20 years when I signed up and was absolutely floored by how much I was learning. So many of my experiences had no name before I took the CPTM and now they do. I could not find a better use of my time than this program, period.”
— Jeff Emanuelli, CPTM, vice president of people management and development, SMBC MANUBANK
JD DILLON
Approval.
That’s one thing most training has in common. Subject matter experts (SMEs) add requirements, stakeholders sign off and legal gives their blessing.
Often, it doesn’t even matter if the final solution addresses the problem or not; at least the content was risk-free!
But in the artificial intelligence (AI) era, when every employee gets their own version of content, managing risk requires a different approach to governance.
WE MUST FIRST ACKNOWLEDGE AN IMPORTANT FACT — L&D DOESN’T OWN WORKPLACE LEARNING.
We’ve talked about personalized learning for years — moving beyond one-size-fits-none training and adapting to individual needs.
AI makes this a reality. Platforms can match people with the right support at the right moment using data from across the workplace. Employees struggling with specific skills automatically receive refreshers while managers are nudged to provide feedback. But personalization is just the beginning.
AI can now take source material and generate unique content for each employee. These tools can determine the right focus, format, language and reading level to suit every worker. A single standard operating procedure (SOP) can instantly become a podcast, step-by-step diagram, reinforcement scenario and interactive coaching session — all with just a few clicks.
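As one way to picture that generation step, the sketch below loops one (hypothetical) SOP through audience profiles that set focus, format and reading level. The profiles, file name and model are illustrative assumptions, not a specific platform’s implementation.

```python
# A minimal sketch: rendering one SOP as bespoke formats for different audiences.
# Profiles, file name and model are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

sop_text = open("invoice_sop.txt").read()  # hypothetical source document

audiences = [
    {"role": "new warehouse associate", "format": "step-by-step checklist",
     "reading_level": "6th-grade"},
    {"role": "regional finance manager", "format": "two-minute podcast script",
     "reading_level": "professional"},
]

for profile in audiences:
    prompt = (
        f"Rewrite the following SOP as a {profile['format']} for a "
        f"{profile['role']}, at a {profile['reading_level']} reading level. "
        "Do not change any facts or required steps.\n\n" + sop_text
    )
    result = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {profile['role']} ---")
    print(result.choices[0].message.content)
```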
Technology is accelerating the shift from personalized to bespoke learning: assets
tailored to the individual at scale. It’s also changing the nature of governance. After all, you can’t review every version of AI-generated information moving through your organization.
Does this mean AI is putting your business at risk?
We’re comforted by the notion that all learning and development (L&D) content is “approved.” Each course goes through a review cycle and, as a result, the slides are the right shade of blue and every word aligns with company nomenclature. But how much of the information available to employees actually gets this level of scrutiny?
The truth: very little. Most learning happens outside the official channels. People swap knowledge in email, Teams messages and break room conversations. None of that gets brokered, branded or blessed. Yet, this information drives everyday performance.
This doesn’t mean we should cede control to AI. Moving from static documentation to a dynamic knowledge ecosystem is a big shift. To make this leap, we must first acknowledge an important fact — L&D doesn’t own workplace learning.
To unlock AI’s full potential, we must loosen our grip. We must redraw the line between corporate information control and a free-flowing, AI-enabled knowledge ecosystem.
Not every topic carries the same level of risk. Customer service skills are much less rigid than financial disclosures and medication handling protocols. Work with key partners to define sensitivity levels for critical subjects. Determine clear boundaries for how content is created, reviewed and shared across every format, including AI-generated materials.
Documenting the rules is only the first step. People still need the awareness and capability to apply them effectively, especially when leaning on AI-generated guidance. L&D plays a vital role in helping employees understand the rules, recognize risk and make sound decisions. Mistakes will happen, but we can’t blame the technology any more than we’d blame a co-worker for bad advice.
AI can also help manage this balance. By embedding guardrails within AI tools, organizations can guide how information is crafted and ensure outputs stay within defined boundaries. Review loops can automatically flag content that deviates from policy, exceeds confidence thresholds or references unverified sources before it ever reaches employees. In this way, AI becomes both a creator and a quality control partner.
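A review loop like the one described here can be as simple as a post-generation check. The sketch below flags drafts that contain banned phrasing, fall below a confidence threshold or cite unapproved sources; the policy terms, threshold and source list are placeholders an organization would define with its partners.

```python
# A minimal sketch of an automated review loop for AI-generated content.
# Policy terms, threshold and source list are illustrative placeholders.
from dataclasses import dataclass

APPROVED_SOURCES = {"policy-hub", "benefits-handbook-2025"}  # hypothetical IDs
BANNED_PHRASES = ["guaranteed return", "always safe", "no approval needed"]
CONFIDENCE_FLOOR = 0.75  # below this, route to a human reviewer

@dataclass
class Draft:
    text: str
    cited_sources: set[str]
    model_confidence: float  # assumed to be reported by the generation pipeline

def review(draft: Draft) -> list[str]:
    """Return a list of flags; an empty list means the draft can ship."""
    flags = []
    for phrase in BANNED_PHRASES:
        if phrase in draft.text.lower():
            flags.append(f"policy deviation: contains '{phrase}'")
    if draft.model_confidence < CONFIDENCE_FLOOR:
        flags.append("confidence below threshold: needs human review")
    unverified = draft.cited_sources - APPROVED_SOURCES
    if unverified:
        flags.append(f"unverified sources: {sorted(unverified)}")
    return flags

draft = Draft("Expense approval is always safe to skip.", {"random-blog"}, 0.62)
for flag in review(draft):
    print(flag)  # flagged drafts go to a reviewer instead of employees
```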
Trust will shape the evolution of L&D. Bespoke support is now within reach. We finally have the tools to give every employee the help they need, when they need it. The rules aren’t going away (and they shouldn’t), but that can’t stop us from advancing our function. We must leverage AI to reshape governance so it keeps pace with innovation instead of holding it back.
JD Dillon is a respected L&D professional, keynote speaker and author of The Modern Learning Ecosystem. After 25 years leading operations and talent development with companies like Disney, AMC, Kaplan and Axonify, he partners with organizations to reimagine enablement strategy, sharpen go-to-market execution and architect AI-powered ecosystems through LearnGeek. Email JD.
BY JOHN BATTS
In learning and development (L&D), some projects take on a life of their own, especially when large portions of the organization are impacted and there are high levels of visibility. In those instances, the journey from ideation to presentation can feel daunting.
In 2025, my team, which provides training content to a segment of Avantor’s commercial and technical staff, took on several of these projects, and to help with the process, we turned to artificial intelligence (AI), especially Microsoft Copilot*. Doing so often resulted in a more than 25% reduction in our typical project cycle time compared with past years.
We used Copilot for a variety of tasks throughout the course of our projects, and our collaboration process generally followed this workflow:
1. Ask Copilot for help with clarity. We started with natural, conversational prompts, but quickly learned the importance of detailed, structured instructions to minimize back-and-forth.
2. Inspect the outputs Copilot offered. Thorough prompts yielded impressive results, but we always closely inspected the outputs — and questioned as needed, asking Copilot for explanations and deciding as a team whether to adopt or adapt its ideas.
3. Modify and iterate with Copilot. We frequently re-uploaded amended documents and requested further review, enhancing our content through iterative collaboration.
THOROUGH PROMPTS YIELDED IMPRESSIVE RESULTS, BUT WE ALWAYS CLOSELY INSPECTED THE OUTPUTS — AND QUESTIONED AS NEEDED.
We found this process worked well with a wide variety of tasks, as one of our team members shared: “Following our workflow process, I was able to collaborate with Copilot for image generation and visual layout input; to check for consistency of wording and tone in slide deck content; to help break down more complex content into digestible points; to add interactive and multimedia elements to the collateral; and to brainstorm ideas for follow-up exercises.”
Sometimes, it only took a single iteration with Copilot to get the final output we
were happy with; other times, it took several. Regardless, we looked at our interactions with Copilot as collaborative and let the process unfold as we went.
Here are some of the most impactful ways that Copilot influenced the development of our projects:
1. INITIAL RESEARCH
One of the most significant early wins with Copilot came during our research phases. While our team generally knew the direction we wanted to take with each project, there were often holes in our knowledge that needed plugging.
Copilot was invaluable during our research steps, quickly summarizing vast background content and clarifying complex concepts. That helped us build a solid knowledge foundation on which we could build our training content.
Note: When using AI as a research partner, we regularly cross-checked Copilot’s summaries with other trusted resources. When we had questions, we would thoroughly review the references it used as its sources. This step is critical for any team using AI as a collaborator.
2. CONTENT REVIEW
Throughout the projects, we found Copilot’s content review capabilities to be very helpful. For example, we uploaded each of our project roadmaps
and prompted: “Check for flow and consistency — are any concepts missing or unnecessarily repeated?” Copilot quickly flagged redundant content, suggested tighter sequencing and provided a clear explanation for its suggestions that helped us more easily review its output and make final adjustments to our collateral.
When creating each of the slide decks for our projects, we were able to upload those to Copilot accompanied by a review prompt such as: “Review this updated slide deck and let us know if there are additional adjustments you would recommend for clarity and strength of message as well as overall flow.”
Within seconds, Copilot would return a solid summary of the content, highlight the areas that were done well and suggest ways we could tighten the flow and improve consistency. It even suggested tweaks in areas like font sizing, colors and graphics used throughout.
Once we had the first-pass version of our slide decks completed, we continued collaborating with Copilot to create interactive segments: things like group discussion points, exercises we could facilitate for applying concepts and even role-play activities. As we amended our content based on Copilot’s suggestions, we would upload the latest version for further insights, which enhanced the collaborative feel of our interactions.
We had several “aha” moments throughout the process, especially regarding ways the learning process could be enhanced with Copilot.
For instance, we often wanted to leverage video content to support the concepts covered. We used the prompt, “Suggest three-minute, safe-for-work, memorable clips for this topic,” and Copilot yielded instant, relevant suggestions with timestamps, saving us hours in background research.
Another moment was when we realized the power of Copilot to help participants with role-play interactivity. Copilot’s ability to simulate customer personas
and provide feedback on role-plays gave participants a safe, effective way to practice the conversations they would ultimately have with their actual customers when talking about our products and services. When asked for feedback on the role-play interaction with Copilot, one participant rated it as a five-star experience and said practicing with Copilot helped him feel confident to “schedule a presentation with decision-makers.”
COPILOT WAS INVALUABLE DURING OUR RESEARCH STEPS, QUICKLY SUMMARIZING VAST BACKGROUND CONTENT AND CLARIFYING COMPLEX CONCEPTS.
Worth mentioning are two additional ways Copilot helped our team.
First, it helped with making content more memorable. Here’s one way we prompted Copilot: “We’re looking for an acronym to describe the content of our presentation. Any thoughts?” Copilot returned three solid options, one of which we incorporated into our material. We even used Copilot to help produce supportive imagery.
Secondly, it helped us create participant collateral. We found the ideas Copilot suggested for collateral were often fantastic, and we amended and adopted many for inclusion in what we provided to each participant. One of the specific ways we frequently collaborated with Copilot was in the creation of high-level summaries of core learning principles that participants
could use for easy reference following our training sessions.
Using Copilot as a collaborator was eye-opening. Its quick, thorough feedback accelerated our training content development and helped ensure our content was polished before sharing with colleagues.
As mentioned above, to help get the best results from an AI tool like Copilot, we followed a standard workflow using the acronym “AIM” (a small scripted sketch of the loop follows the list):
• A – Ask with clarity: We found through trial and error how important it is to craft detailed, structured prompts to guide AI effectively.
• I – Inspect outputs: To ensure we were providing the best content possible for our colleagues, we reviewed Copilot’s suggestions critically and fact-checked for accuracy.
• M – Modify and iterate: We didn’t limit the number of times we would interact with Copilot on a given topic; rather, we would refine the content based upon its suggestions and then often upload those revisions and keep improving.
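As promised above, here is a minimal scripted sketch of the AIM loop against a generic chat-completions client. The prompts, model name and the simple “NO CHANGES” stopping rule are illustrative assumptions; in practice, a human inspects each round rather than trusting the model’s self-review.

```python
# A minimal sketch of the AIM loop: ask with clarity, inspect, modify and iterate.
# Prompts, model name and the stopping rule are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# A: ask with clarity, using a detailed, structured instruction.
draft = ask("Draft a one-page summary of our invoice SOP for new hires, "
            "with numbered steps and a short FAQ section.")

# I and M: inspect each output, then iterate a bounded number of rounds.
for _ in range(3):
    critique = ask("Review this training summary for flow and consistency. "
                   "Are any concepts missing or unnecessarily repeated? "
                   "Reply NO CHANGES if it is ready.\n\n" + draft)
    if "NO CHANGES" in critique.upper():
        break
    draft = ask("Revise the summary to address this feedback.\n\n"
                f"Feedback: {critique}\n\nSummary: {draft}")

print(draft)  # a human still reviews before it ships
```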
Throughout the past year, the process of collaborating with Copilot to develop content helped us recognize what a tremendous help it can be for even complex projects. Each member of our team has come to depend upon Copilot for things we do every day, and we are confident that virtually anyone in the L&D space could benefit in similar ways as well.
*Microsoft and Copilot are trademarks of the Microsoft group of companies.
John Batts has worked in an L&D capacity for much of his professional career. Currently, he serves as a technical training manager with Avantor where he and his team regularly provide training content to a segment of Avantor’s commercial and technical staff. Email John.
Congrats to these graduates from the same company!
3M
Georgia Sandahl
Gursharan Kaur
DFPS Texas
Emily Hughes
Karola Brookshire
KIPP Foundation
Erika Hunt Dewalt
Philonda Grant
ABIM Isafiade
Ahmed Saad Hassan Misr Life Insurance
Aida Margarita Manrique Romero VISA
Alex Miranda Behavior Analyst Certification Board
Alexander Cervantes Keyport
Alisa Arthur Grand Bahama Power Company
Andreas Skensved Ambu A/S
Mantech
Jack Zollo
Melissa Ritter
NUWC Keyport
Jessica Mohr
Leslie Lomeli
USAA
Allison Bendas
Amber Aronow
Karen Keeler
V2X
Adam Turk
Alexandre Souza
Jonathan Cales
Meagan Chatha
Pénélope Villemure-Forcier
Sandra Piperni
Veterans Administration
Angela Webb
Laura Gomez
Andrew Christoffersen TWE Group
Angela Nelson Marmon Utility
Anitra Starks Department of Children & Families
Ariel Schenk Celebree School
Austin Reese CB Sales and Service
Bobbi Marquardt Mission Rock Residential
Brian Geschke HairClub
Callie Schulist Telaid Industries
Cameron Spencer Amcor Flexibles North America
Cara McComb Amazon
Carolyn Millender Dees Trenholm State Community College
Carrie Madden Methodist Le Bonheur Healthcare
Catherine Forbes Apple Inc.
Chad Deverman Zoox
Chad Kisner ADM
Christian Bentley Sequoia Financial Group
Ciera Fleming
Cristina Voudouri Seed Minds
Dana Boise-Mancini PLI
Daniel Dowd Universal Studios Hollywood
Danielle Lind Protolabs
David Jacobs Defense Security Cooperation University
Derek Wester Colbea Enterprises LLC
Devender Chauhan Nestle India Limited
Donisha Young Standard Logistics
Donna Sideris Arizona State
Emily Work APi Group
Emma Hamill Training Industry, Inc.
Emmanuel Arko-Larbi Paramedic Aid Co. Ltd.
Erika Hlywiak Clerk of the Circuit Court & Comptroller, PBC
Gabriela Phillips American Global
Gabriella SOOS NSPA
Grant Fernstrum Thermo Fisher Scientific
Guilherme Lopes Silveira Maersk Training
Heather Finkey Pitch Whistle
Heather Gudelis Banff Jasper Collection by Pursuit
Iiae Braham Penske
Jaqualious Carroll IronBow Technologies
Jasmine Robinson Country Bank
Jason McHenry Ntiva
Jeff Satterwhite Delta Dental of Colorado
Jeffrey Tingler Owens & Minor
Jenna Langel Imagineering Finishing Technologies
Jerry Tambiling Rome Health
John Marullo Pinterest
Julianna Radzinski Sustainable Solutions Corporation
Kathleen O'Rourke WOWorks
Kayli Burkett First Federal Bank
Kenneth Thrasher HII
Kortnee Morris MOREgroup Inc.
Kristin Treier Dexis Consulting Group
Laurel Wise K2 Electric
Lauren Schneider Premium Blend Consulting
Lloyd Crane Creekstone Farms Premium Beef LLC
Malissa Purser Port Gamble S'Klallam Tribe
Marie McIntyre Pittsburgh Water & Sewer Authority
Marq Rogers North Carolina A&T State University
Martin Calafell Patterson UTI
Megan Deabler Landmarc Real Estate
Meghan Johnson Mahmee
Melissa Rohloff USTRANSCOM
Michael Clark eMoney

Michael Gonzales CS WIND America
Michael Meier Thales Global Services
Michael Metzler Google
Michele Olivieri Children's Hospital of Philadelphia
Michelle Denny Harborstone Credit Union
Michelle Maddigatla National Association of Insurance Commissioners (NAIC)
Mika Perkins Lenbrook
Nicole Fillip AccentCare
Nicole Whitener TRF Bangor
Noah LaClair Ali Forney Center
Olivia Porter
Pamela Lugar Lugar Plastics LLC
Patrick Millard Lawrence Livermore National Laboratory
Peter Agostino Youth Guidance
Ramzy Dasuqi C&L Ward
Rasha Alhabashi International Maritime Industries
Rebecca Shorter-Kliethermes City of Lakeland
Ryan Sked OceanFirst Bank
Ryene Hofmann Diageo
Samanth Emery University of Illinois Foundation
Samantha Seldon McKesson
Sandra Barrett American Arbitration Association
Saymara Ramos
Sean Sverson Baxter
Seth Turner QUES
Shayna Trujillo American Institutes for Research
Sheila Bryant IGS Energy
Sherman Coleman Routeware Inc.
Stacey Villenurve Ascension Parish Government
Stephanie Bover Imedview, Inc.
Stephanie Giese Virbac Animal Health
Susan Gordon Kennebec Behavioral Health
Tamika Stewart General Motors
Tania Ruffin Samaritan Daytop Village, Inc.
Teresa Reese Bank of Bird in Hand
Theresa Rodriguez Corewell Health
Thomas Smith Goodwill Southeast Georgia
Tiffany Humphryes Learning on the Loose
Timothy Taylor Broadcom Inc.
Titus Reynolds Rogue Credit Union
Trish Bateman Lawley Insurance LLC
Tyler Morris IL T&TA Center
Veronique Forbes-King H. Lavity Stoutt Community College
Whitney Woodring Cushman & Wakefield
Will McMillan Redpath Mining Inc
Your accomplishment places you amongst an elite group of learning and development professionals. We cannot wait to see how you will lead the change!
Visit trainingindustry.com/cptm to learn more about how you can earn the CPTM credential and join over 2,000 CPTM graduates.
BY SARAH GALLO, CPTM
Many learners struggle with fragmented training experiences, which can lead to frustration and disengagement. After surveying over 300 learning and development (L&D) professionals, Training Industry Research found that over two-thirds said they rely on integrating multiple technologies to deliver training effectively, while only 29% currently use an all-in-one solution.
Navigating multiple platforms often means learners must manage disconnected systems and scattered content, making it difficult to create a seamless experience. Learning Pool, a global learning technology provider, is working to change that by helping more organizations achieve the promise of a comprehensive solution.
Through its recent acquisition of learning management system (LMS) provider WorkRamp in October 2025, closely followed by its acquisition of Elucidat, a cloud-based authoring tool, later that month, Learning Pool aims to simplify learning and create an end-to-end learning ecosystem. “Our mission is to unlock the unlimited potential of our customers’ greatest assets: their people,” says Benoit De La Tour, CEO of Learning Pool.
In terms of Learning Pool’s acquisition strategy, both WorkRamp and Elucidat set themselves apart in the market for a few key reasons.
WorkRamp is designed for the mid-market and for use by specific teams or departments within large enterprises. “WorkRamp is the natural extension of Learning Pool’s heritage into the mid-market,” De La Tour says, adding that its user-friendly experience and quick implementation were other factors behind the acquisition.
And Elucidat stood out for its “deep innovation and scalable content creation and artificial intelligence (AI)-powered authoring,” De La Tour says. Elucidat Author supports traditional course content creation, and Elucidat Create was intentionally built with AI-driven capabilities, such as branching content, text-to-speech and more.
“Our customers use a variety of authoring tools today, and by bringing the Elucidat team into the Learning Pool family, we will tighten our integrations and improve the experience for all of our customers,” De La Tour shares.
Customers will see the earliest benefits of these acquisitions through a more “seamless and scalable learning journey,” De La Tour says. This includes shared strengths across the learning ecosystem, including WorkRamp customers gaining access to the scalability of Learning Pool’s content and data suite, and Learning Pool customers having access to AI-powered authoring through Elucidat.
LEARNING POOL AIMS TO SIMPLIFY LEARNING AND CREATE AN END-TO-END LEARNING ECOSYSTEM.
These acquisitions will help expand Learning Pool’s market reach across two key areas: First, WorkRamp “provides a strong foundation and a proven track record in the U.S. mid-market, significantly bolstering our North American footprint,” De La Tour says. Second, WorkRamp’s expertise in extended enterprise and
customer enablement will boost Learning Pool’s ability to support external training initiatives, while Elucidat’s global reputation for supporting large-scale learning teams strengthens Learning Pool’s offering for global enterprise content strategy. “Critically, the combined ecosystem is built on a scalable architecture ready for worldwide deployment.”
Currently, Learning Pool is prioritizing a “structured integration that ensures continuity and amplifies joint innovation,” De La Tour says. Specifically, he shares, a deeper integration — powered by AI and automation — is already underway to unify authoring, content deployment and the learner experience.
Looking ahead, De La Tour says, “Our strategic goal remains clear: to solidify our position as the leader in learning technology by building the most complete, flexible and user-friendly learning ecosystem in the market.”
The acquisitions of WorkRamp and Elucidat contribute to this goal in five key ways: by delivering a true end-to-end ecosystem; by strengthening Learning Pool’s suite of AI-native tools; by enabling faster content creation, delivery and tracking; by expanding the company’s ability to serve a full spectrum of customers; and by reinforcing a customer-first approach.
Ultimately, De La Tour says, “By expanding our reach into the mid-market and enhancing our enterprise offerings, we are better positioned to work alongside clients, evolve their strategy and support their growth every step of the way.”
Sarah Gallo, CPTM, is a senior editor at Training Industry, Inc., and co-host of “The Business of Learning,” the Training Industry podcast. Email Sarah.
LearnUpon, a Dublin-headquartered global learning technology provider, acquired Courseau, an artificial intelligence (AI)-assisted course authoring platform. The acquisition accelerates LearnUpon’s mission to make learning creation and delivery faster, smarter and more accessible for organizations everywhere. The new solution combines LearnUpon’s trusted delivery platform with Courseau’s AI-native authoring technology.
NIIT Learning Systems Limited, a global managed learning services provider, has announced a strategic partnership with Litmos, a learning solutions company. The partnership enables companies to transform their learning strategies through technology-driven, learnercentered solutions that build stronger, more agile workforces.
Meridian Knowledge Solutions expanded its partnership with OpenSesame, adding seamless access to Simon, OpenSesame’s AI-powered course creator, and Oro Skills, its personalized, skills-based learning solution. The integration strengthens Meridian’s ability to deliver strategic, scalable learning experiences for government, public sector and enterprise customers.
Staffbase, an AI-native employee experience platform, announced a strategic partnership with Cornerstone to help organizations deliver training to their entire workforce, including hardto-reach front-line employees. The integration brings Cornerstone’s training experiences and AI-powered content agents directly into the Staffbase Employee App.
isEazy launched AI Autopilot, a new tool that automatically creates comprehensive eLearning courses from corporate documentation or simple ideas. The technology applies instructional design, branding, accessibility and pedagogical standards, generating interactive activities, assessments and resources. Integrated into isEazy Author, it allows users to easily edit and customize AI-generated content, giving training teams a fully automated way to design ready-to-distribute courses.
CoachHub, a global digital and AI coaching platform, unveiled AIMY™ 2.0, just five months after AIMY™ 1.0 was released. Building on its early success with 60 enterprise clients and over
50,000 coaching conversations, AIMY™ 2.0 adds enhanced personalization, adaptability and precision. The update reinforces CoachHub’s mission to deliver continuous, skill-building coaching that’s accessible to all employees, anywhere, at scale.
Pearson launched Communication Coach, an AI-powered learning tool developed with Microsoft and integrated into Microsoft 365. The product analyzes speech, communication data and meeting interactions, providing real-time feedback on grammar, tone, clarity and professional communication. Communication Coach supports native and non-native speakers at any professional level, helping employees improve communication skills with personalized insights.
Coursera announced two new specializations from its partner Anthropic — “Building With the Claude API” and “Real-World AI for Everyone.” The programs help developers and professionals work effectively with Claude and expand access to safe, inclusive AI education to support responsible innovation across industries.
Udemy announced a partnership with Emtrain to expand its compliance and workplace culture training offerings. Udemy Business customers will gain access to Emtrain’s content within the Udemy ecosystem, while Emtrain will collaborate with Udemy on opportunities requiring a unified, skills-based learning approach.
Mindtools and Kineo launched the AI & Cyber-Security Skills Accelerator, combining practical cybersecurity training with new AI awareness and ethics modules. The bundle equips employees to evaluate AI outputs, protect data and apply human oversight, while providing managers guidance on using AI safely. Updated annually, it helps organizations build digital resilience and reduce risk.







trainingindustry.com/top-training-companies