When AI Learns the Script: Rethinking “Interactive” Roleplays in L&D

"“MCP, when delivered well, delivers the determinism AI needs for business apps… it can use AI in clever ways rather than being a slop machine.”

In this post from the Figma team, “What Does MCP Mean for Agentic AI?”, Jenny Xie explores how the emerging Model Context Protocol (MCP) is changing the way AI assistants interact with tools, apps, and the world around them.

Think: Agentic AI (models that take initiative and perform tasks on your behalf) that doesn’t just respond to your prompts, but takes action in meaningful ways.

Here’s the TL;DR:

If MCP is adopted as a kind of standardized translator layer between AI models and external tools, then instead of building a custom plugin or integration for every app, AI systems can connect to various tools and data sources through a common language.

Think: ChatGPT being able to read your calendar, then pull from your design docs, and suggest edits without having to create an integration for each app.

What’s This Mean for L&D?

That same plumbing could be applied to create more meaningful, adaptive learning experiences.

Let’s take a familiar example: a fixed roleplay training scenario for managers on handling difficult performance conversations. Traditionally, we might build this in Articulate Storyline with:

  • A branching scenario

  • A few pre-scripted dialogue trees

  • At each branch, a scene in which a single character “states” (via voice or text) a fixed prompt

  • The learner then selects a “response” from a finite list of choices. E.g.,

    Learner/Manager: “I noticed you’ve been late a few times this month…”

Option A: “…I need you to fix this immediately.”

Option B: “…Is something going on that’s impacting your schedule?”

Option C: “…Let’s talk about setting better expectations.”

It works, kind of… and I’ve always found these branching “roleplays” to be… meh.

And it rarely feels truly adaptive.

What If: Multiplayer Roleplay and Adaptive Context

Now let’s kick it up a notch. Consider a roleplay where you, as a manager, practice having a difficult conversation with a defensive employee, AND HR is an interactive player who can step in if needed, AND a coach is present (contextually) to observe the scenario and give you, the learner, feedback on how well you performed in this conversation as a manager.

  • Player 1: The “Manager” = The learner (you)

  • Player 2: The “Employee” (Rick) = An AI roleplay agent

  • Player 3: The HR Manager = Another AI roleplay agent

  • Player 4: Coach = Another AI agent (It observes the interactions, gives feedback to the learner, and scores their responses.)

Each of these players (the employee, the HR manager, and the coach) draws from real-time, scenario-specific data and interactions with you, the learner.

Further, each player can be endowed with its own characteristics and behaviors, distinct from those of the other players.

Think: A behavioral profile for the employee, like the one below (see the code sketch after the list):

  • Employee tone: argumentative, defensive, overworked, skeptical of management

  • Communication style: blunt but not hostile

  • Motivations: “wants autonomy”, “feels unrecognized”

  • Stress triggers: micromanagement, perceived unfairness

  • Recent issues: “missed two deadlines in Q1”, “received verbal warning for performance”

  • Emotional baseline: “frustrated but trying to stay professional”

  • …Etc.
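To make that concrete, here’s a minimal sketch of how a profile like this might be encoded and rendered into persona instructions for the employee agent. The class, field names, and prompt wording are illustrative assumptions, not taken from any particular authoring tool:

```python
from dataclasses import dataclass

# Illustrative sketch: a behavioral profile an authoring layer might hand
# to the "employee" roleplay agent. Field names are hypothetical.
@dataclass
class BehavioralProfile:
    tone: str
    communication_style: str
    motivations: list[str]
    stress_triggers: list[str]
    recent_issues: list[str]
    emotional_baseline: str

    def to_system_prompt(self) -> str:
        """Render the profile as persona instructions for the agent."""
        return (
            "You are playing an employee in a workplace roleplay.\n"
            f"Tone: {self.tone}.\n"
            f"Communication style: {self.communication_style}.\n"
            f"Motivations: {', '.join(self.motivations)}.\n"
            f"Stress triggers: {', '.join(self.stress_triggers)}.\n"
            f"Recent issues: {', '.join(self.recent_issues)}.\n"
            f"Emotional baseline: {self.emotional_baseline}."
        )

rick = BehavioralProfile(
    tone="argumentative, defensive, overworked, skeptical of management",
    communication_style="blunt but not hostile",
    motivations=["wants autonomy", "feels unrecognized"],
    stress_triggers=["micromanagement", "perceived unfairness"],
    recent_issues=["missed two deadlines in Q1",
                   "received verbal warning for performance"],
    emotional_baseline="frustrated but trying to stay professional",
)
```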

As the scenario proceeds, the HR manager agent can observe your interaction, remember what’s already been said, and step in as needed, based on company policy, the employee’s behavioral profile, and even a fictionalized work history (for both the employee and the manager) designed for the scenario.

Meanwhile, the coach agent can observe the learner’s interactions with the other roleplay players and give feedback after the exercise.

Think about it. You’re no longer stuck in a finite decision-tree simulation. You’re in a dynamic conversation with realistic emotional nuance and branching reasoning, not just branching options.

Here's What Changes with MCP + Agentic AI

Roles Have Depth

Each AI “character” can access resources like:

  • Company policies (for procedural accuracy)

  • Fictional behavior logs (to simulate work history)

  • Past learner choices (to tailor responses)

All of this is done in real time, making the roleplay immersive and context-aware.
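This is exactly the kind of wiring MCP standardizes. As a rough sketch, here’s what a scenario server exposing those resources could look like, using the FastMCP helper from the official MCP Python SDK; the server name, URIs, policy text, and tool are invented for this example:

```python
# Sketch of an MCP server exposing scenario resources to the roleplay
# agents, using the FastMCP helper from the official MCP Python SDK
# (pip install "mcp"). URIs and data are invented for illustration.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("roleplay-scenario-server")

@mcp.resource("scenario://policy/attendance")
def attendance_policy() -> str:
    """Company policy text, for procedural accuracy."""
    return "Managers must offer support resources before formal warnings."

@mcp.resource("scenario://employee/rick/history")
def work_history() -> str:
    """Fictional behavior log that simulates Rick's work history."""
    return "Q1: missed two deadlines; received a verbal warning."

@mcp.tool()
def log_learner_choice(turn: int, summary: str) -> str:
    """Record a learner move so other agents can tailor later responses."""
    # A real build would persist this to shared scenario state.
    return f"Recorded turn {turn}: {summary}"

if __name__ == "__main__":
    mcp.run()  # serves over stdio by default
```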

Realistic Dialog

Old School Roleplay Interaction

Here, the learner follows a fixed script like:

Learner/Manager: “I noticed your productivity dropped last month. Is everything okay?”

Employee (Fixed response): “It’s been rough. I’ve been dealing with family issues and it’s affecting my focus. I didn’t think it was noticeable.”

Interaction: “Choose one of the following responses”

Option A: “…I need you to fix this immediately.”

Option B: “…Is something going on that’s impacting your schedule?”

Option C: “…Let’s talk about setting better expectations.”

The interaction then proceeds down one of three fixed branches, so the learner can “experience” that pre-authored outcome play out.

New: Realistic Dialog

With MCP, response generation is dynamic instead of fixed and pre-branched. That opens up more engaging learning experiences where the next move actually matters. And if the learner misses an empathy cue? The HR agent can be designed to step in with a procedural prompt.
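Here’s a sketch of a single dynamic turn, assuming an OpenAI-style chat completions client as a stand-in for whatever model backend a platform actually uses; the model name and helper function are illustrative:

```python
# One dynamic turn: the learner types anything, and the employee agent
# replies in character. Assumes an OpenAI-style chat completions client
# as a stand-in for the real model backend; "gpt-4o" is illustrative.
from openai import OpenAI

client = OpenAI()

def employee_turn(persona: str, transcript: list[dict], learner_line: str) -> str:
    """Generate the employee's next line from persona + conversation so far."""
    transcript.append({"role": "user", "content": learner_line})
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "system", "content": persona}, *transcript],
    )
    reply = response.choices[0].message.content
    transcript.append({"role": "assistant", "content": reply})
    return reply

# Usage (reusing `rick` from the earlier profile sketch):
# transcript: list[dict] = []
# print(employee_turn(rick.to_system_prompt(), transcript,
#                     "I noticed you've been late a few times this month..."))
```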

It’s Plugged Into the Company’s Policies

With MCP, these AI agents aren’t limited to drawing from a static text file; they can be connected to the company’s sources of truth:

  • HR policy databases

  • Feedback models

  • Learning histories

  • Custom scenario data packs

The AI can “act in alignment” with your scenario and organization, not just simulate general best practices.
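For instance, here’s a rough sketch of how an HR agent’s host process might pull the live attendance policy from the hypothetical scenario server above before deciding whether to intervene, using the MCP Python SDK’s stdio client. The file name and URI match the earlier sketch, and the exact result shape may vary by SDK version:

```python
# Sketch: the HR agent's host process fetching live policy text from the
# hypothetical scenario server sketched earlier, via the MCP Python
# SDK's stdio client. Result handling may vary by SDK version.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

server = StdioServerParameters(command="python", args=["scenario_server.py"])

async def fetch_attendance_policy() -> str:
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            result = await session.read_resource("scenario://policy/attendance")
            # Resources come back as a list of content blocks; take the text.
            return result.contents[0].text

if __name__ == "__main__":
    print(asyncio.run(fetch_attendance_policy()))
```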

Feedback Isn’t Generic

Instead of “You selected Option B,” the AI coach can say: “You showed empathy, but missed an opportunity to offer support resources. Here’s the relevant policy and a model phrase you could have used.”
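A coach agent like that is mostly prompt design plus grounding. As one hedged sketch, here’s a rubric-driven prompt the coach might run over the finished transcript; the rubric criteria and wording are invented for illustration:

```python
# Sketch of the coach agent's post-exercise pass: a rubric-driven prompt
# run over the finished transcript, grounded in the fetched policy text.
# Rubric criteria and wording are invented for illustration.
COACH_RUBRIC = {
    "empathy": "Did the manager acknowledge the employee's feelings?",
    "support": "Did the manager offer support resources, per policy?",
    "clarity": "Did the manager set clear, specific expectations?",
}

def coach_prompt(transcript_text: str, policy_text: str) -> str:
    criteria = "\n".join(f"- {name}: {question}"
                         for name, question in COACH_RUBRIC.items())
    return (
        "You are a management coach. Score each criterion from 1 to 5, "
        "quoting the learner's own words as evidence, and suggest one "
        "model phrase they could have used, grounded in this policy:\n"
        f"{policy_text}\n\nCriteria:\n{criteria}\n\n"
        f"Transcript:\n{transcript_text}"
    )
```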

Why This Matters

With the direction the tech is headed, we can see a not-too-distant future where L&D can be free from the constraints of pre-authored pathways.

It means we can:

  • Build once, but personalize infinitely

  • Create emotionally intelligent learning without live facilitators

  • Deliver roleplays that evolve with the learner

  • Scale toward solving the so-called “2 Sigma” problem (i.e., the challenge of delivering the benefits of one-on-one tutoring to many learners at once)

With MCP and agentic AI, our training scenarios won’t just be interactive; they can come alive.

As the technology matures — and it’s maturing fast — the bottleneck won’t be the tools. It’ll be the quality of the learning experience we design with them. (Btw, it’s annoying that I now have to keep saying this: the em dashes are mine.)

It’ll be our ability to design platforms that bring this to life in a way that’s scalable, meaningful, and deeply human.
