The L&D Showrunner: Full Book
I’m excited to share this book with you. I know it’s a little nuts to post a 120-page book on a single webpage, so if you’d like to download a PDF copy, click here. And if you have any questions or comments, or would like to learn more about how I can help train you or your organization in implementing this role, please do reach out.
CONTENTS
PART ONE: THE PROBLEM
Introduction — Nobody's Running the Show
Chapter One — The Content Factory
Chapter Two — What Television Figured Out
PART TWO: THE ELEVEN ROLES
Chapter Three — Taste Arbiter
Chapter Four — Brand Continuity Director
Chapter Five — Audience Architect
Chapter Six — Cliffhanger Engineer
Chapter Seven — Casting Director
Chapter Eight — Writers Room Facilitator
Chapter Nine — Production Economist
Chapter Ten — Renewal Strategist
Chapter Eleven — Compliance Translator
Chapter Twelve — Distribution Strategist
Chapter Thirteen — Cultural Continuity Keeper
PART THREE: BUILDING THE ROLE
Chapter Fourteen — The Bible
Chapter Fifteen — The Hire
Chapter Sixteen — The First 90 Days
Appendix — The Showrunner Gap Index
PART ONE: THE PROBLEM
INTRODUCTION
Nobody's Running the Show
Why your L&D department has everything it needs except the one role that makes it work
Picture two pieces of content produced by the same company, inside the same L&D department, in the same fiscal year.
The first is a compliance module on data privacy. A manager recorded it on a laptop webcam in what appears to be a conference room that hasn’t been decorated since 2014. The audio cuts in and out. The slides are dense with bullet points in a font last seen on a mid-2000s PowerPoint template. The module is eleven minutes long. It has been assigned to every employee in the company. Completion rates are tracked. Nobody watches it voluntarily.
The second is an onboarding series. It was shot on location at three company offices. There’s original music. The subject matter experts were coached before they went on camera, and it shows — they’re specific, warm, occasionally funny. Each episode ends mid-thought, on a question the next episode answers. New hires are finishing it before they’re required to. Some are sending episodes to friends.
Both of these exist. Both cost money. Both took time. One works. One doesn’t. And in most organizations, nobody has ever asked why.
The answer isn’t budget. It isn’t talent. It isn’t the tools, the LMS, the instructional design methodology, or the learning philosophy. It’s simpler and harder than any of those things.
Nobody’s running the show.
• • •
Every great television series has a Showrunner. Not a producer, not a director, not a network executive — a Showrunner. The person who holds the whole thing. Who knows why episode seven has to land a specific way because of what happened in episode three. Who decides what the show sounds like, what it refuses to do, what it’s building toward across a season. Who makes the call when a scene works and when it doesn’t, not because they have the best taste in the room, but because it’s their job to have the final word.
The Showrunner is the reason great shows feel coherent and intentional. The reason you finish one episode and need the next. The reason a series has a voice that holds across twelve episodes written by eight different writers. The reason the show feels like it was made by someone, rather than assembled by a committee.
The Showrunner is also the reason bad shows feel like exactly that — like nobody was in charge. Like decisions were made by whoever was loudest in the room that week. Like the audience’s time was never quite valued.
Your L&D department is running a show. It has been running a show for years. It has subject matter experts who are essentially on-camera talent. It has instructional designers who are essentially writers. It has a distribution platform — the LMS — that is essentially a streaming service. It has a release schedule, an audience, performance data, and a budget.
What it almost certainly doesn’t have is a Showrunner.
The Showrunner is the reason great shows feel coherent and intentional. Your L&D department is running a show. It just doesn’t know it yet.
This book is about what happens when you fix that.
• • •
The timing of this conversation matters. We are at an inflection point in corporate learning that most L&D leaders haven’t fully reckoned with yet.
For most of the history of corporate training, the constraint was production. Making content was expensive, slow, and technically demanding. Scripting took time. Video required equipment and expertise. Translation and localization were significant projects. The scarcity of content was a natural filter — you only built what you could justify building, and the effort of building it imposed a kind of quality floor.
That constraint is gone.
AI has automated the production layer of content creation with a speed and completeness that most industries haven’t absorbed yet. Scripting, editing, voiceover, translation, quiz generation, basic motion graphics — the work that used to consume the majority of an instructional designer’s week is now achievable in hours. The cost of producing a module is dropping toward zero. The time it takes is dropping toward minutes.
This sounds like good news. In some ways it is. But it creates a problem that nobody in L&D is talking about clearly enough yet: when production is free, the only thing standing between your organization and an LMS full of content nobody watches is judgment. Taste. The ability to decide what’s worth making, how it should be made, whether it holds together with everything else you’ve made, and whether the person watching it will feel that their time was respected.
AI has no judgment. It has no taste. It doesn’t know what your organization sounds like. It can’t tell the difference between content that serves your employees and content that serves a stakeholder’s request. It will generate forty modules with the same enthusiasm it generates four, and it will feel equally satisfied with all of them.
The AI doesn’t know you need a Showrunner. But you do.
• • •
This is not a book about artificial intelligence, though AI is the reason this conversation is urgent. It is not a book about instructional design methodology, learning theory, or the science of retention, though all of those things matter. It is not a critique of L&D as a function or the people who work in it — most of whom are working hard inside systems that were never designed to produce great content.
This is a book about a single missing role. What it is, why it matters, and how to build it inside your organization before the window closes.
The Showrunner concept comes from television, but the role it describes is older and broader than any one industry. Every organization that produces content at scale — that has an audience, a voice, a body of work it’s accountable for — needs someone whose job is to hold the whole thing. To make the creative decisions that nobody else has the authority or the perspective to make. To be the person the content answers to.
In L&D, that person has rarely existed. The function has been organized around project management, instructional design methodology, and LMS administration. All necessary. None of them the Showrunner’s job.
What follows is an attempt to define the role completely — not as an abstract concept, but as a specific set of responsibilities, decisions, and capabilities that your organization either has or doesn’t. Eleven roles that together constitute the Showrunner function. Eleven areas where the absence of clear ownership is costing you engagement, coherence, and the trust of your employees.
You’ll recognize most of these problems. You’ve probably felt them without being able to name them. That’s what happens when the right role doesn’t exist — the problems it would solve accumulate without a clear diagnosis. People work harder. They produce more content. The LMS gets fuller. The completion rates stay flat. Nobody asks why, because the question feels too large and too uncomfortable.
The question isn’t why your content isn’t working. The question is who’s responsible for making it work. Until you answer that, everything else is rearranging furniture.
• • •
A note on how to read this book.
Part One diagnoses the problem in full. If you’re already convinced that something structural is broken in how your organization produces learning content, you can move quickly through these chapters. If you’re skeptical, read them slowly. The argument matters, and the evidence is in the details.
Part Two is the core of the book. Eleven chapters, one for each Showrunner role. Each chapter opens with a scene designed to make the problem concrete, defines the role, shows what it looks like done well and done badly, and ends with a single question worth asking your organization this week. Read these in order the first time. After that, treat them as a reference — return to the chapters most relevant to where your organization is right now.
Part Three is about building the role. Who the Showrunner is, where they come from, how to make the case for them internally, and what their first ninety days look like. This is the most practical section of the book, and the one most likely to require you to have a difficult conversation with someone who controls a budget or an org chart.
The Appendix contains the Showrunner Gap Index — a self-assessment tool that lets you score your organization across all eleven roles, identify your weakest areas, and prioritize where to start. Use it before you read Part Two, and again after. The difference in your answers will tell you something.
This book is short by design. The argument doesn’t need five hundred pages. It needs to be clear, complete, and honest about what it’s asking you to do. What it’s asking is not small — it’s a new role, a new way of thinking about your department’s output, and in some organizations, a direct challenge to how decisions have always been made. But the alternative — managing an AI content factory that your employees scroll past on their way to something worth watching — is not a future worth building toward.
Someone needs to run the show.
Let’s figure out who that is.
CHAPTER ONE
The Content Factory
What most L&D departments actually are, why it happened, and what it costs
Somewhere in your company’s LMS right now, there is a piece of content nobody has watched in fourteen months.
You probably know which one it is. It was built for a good reason — a compliance requirement, a product launch, a new manager’s initiative. Someone spent real time on it. There were review cycles. There was a launch email. The module went live, the completion numbers ticked up for a few weeks because it was assigned, and then it settled into a kind of digital permanence that has nothing to do with usefulness.
It’s still there. It will probably still be there in three years. Nobody assigned to remove it. Nobody asking whether it’s doing anything. Nobody responsible for that question.
Now multiply that module by forty. Or four hundred. That’s the LMS most organizations are actually running.
This is what a content factory looks like from the inside. Not a failure. Not negligence. A rational response to a set of incentives that were never designed to produce great content — only more of it.
• • •
The content factory didn’t happen by accident. It was built, piece by piece, by organizations responding sensibly to real pressures.
Compliance required documentation. Documenting training completion required an LMS. The LMS needed to be filled. Filling it required content. Content required instructional designers. Instructional designers were measured on output — modules completed, courses launched, hours of learning produced. The incentive at every level pointed in the same direction: more.
More was the metric because more was measurable. The alternative — measuring whether employees actually changed their behavior as a result of a training, whether the content was worth the time it took, whether the learning experience made someone better at their job — is harder to count. So organizations counted what they could count, and they built systems optimized for those counts.
The result is a function that is genuinely excellent at producing content and genuinely poor at producing content worth watching.
This is not an indictment of the people inside it. Most L&D professionals care deeply about their work and are frustrated by the same things this book is about. They know when a module is bad. They know when a stakeholder’s request is going to produce something nobody will use. They know that the completion rate on the compliance training has nothing to do with whether anyone learned anything. They are working inside a system that was never designed around the question they most want to answer: does this work?
The incentive at every level pointed in the same direction: more. More was the metric because more was measurable. The result is a function that produces content efficiently and content worth watching rarely.
The content factory is the system. The Showrunner gap is what happens when that system has no creative authority at the center of it.
• • •
Here is what a content factory looks like from the outside, which is to say from the perspective of the employee on the receiving end of it.
She opens her LMS on a Tuesday morning because something has been assigned. The thumbnail is a stock photo of two people shaking hands in front of a whiteboard. The title is “Effective Communication in the Workplace.” She has taken a version of this module at two previous employers. She knows before she clicks that it will be eleven to fourteen minutes long, that it will involve a scenario with characters named things like “Marcus” and “Jennifer,” that Marcus or Jennifer will demonstrate poor communication in the first scene and improved communication in the last, and that there will be a quiz at the end with questions that restate information from the module in slightly different words.
She clicks through. She passes the quiz. The completion registers. She has learned nothing she didn’t already know, and she knew before she started that she wouldn’t.
This is not a failure of execution. The module was probably built competently, by people who knew what they were doing, within the constraints they were given. It is a failure of something upstream of execution — the decision about what to make, for whom, in what form, to what end. The creative and strategic decisions that nobody with real authority was asked to make.
Multiply that experience across a career. Across every onboarding, every compliance cycle, every product launch training, every manager development program. The cumulative message your employees receive from their LMS is not the message any L&D leader would choose to send. It is the default message of a system with no one running it: your time is a resource we consume, not a relationship we respect.
The best L&D organizations in the world have figured out how to send a different message. The difference is not budget, though budget helps. It is not technology, though technology enables. It is creative authority. Someone whose job is to ask, for every piece of content: is this worth making? Is this worth watching? Does this sound like us? Is this better than what we made last time?
Someone running the show.
• • •
Before we talk about what the Showrunner role is, it’s worth being specific about what the content factory costs. Not in dollars — though the waste is real and significant — but in the things that are harder to put on a budget line.
It costs trust. Every piece of content your employees sit through that doesn’t respect their time is a small withdrawal from the account of goodwill your L&D function runs on. That account has a floor. When employees stop trusting that assigned content is worth their attention, they stop paying attention. Completion rates become a measure of compliance theater, not learning. The function loses the thing it most needs: a willing audience.
It costs coherence. A content factory produces modules. A learning organization produces a curriculum. The difference is not quantity — it’s whether the pieces connect. Whether what someone learns in month one prepares them for month three. Whether the onboarding series and the manager development program and the compliance training share a voice, a set of values, a consistent picture of what the company believes. A factory produces parts. A Showrunner assembles them into something.
It costs signal. One of the most underappreciated functions of great L&D is what it communicates about the organization that produced it. When a new hire goes through an onboarding experience that is clearly made with care — that is specific, honest, well-paced, and treats them as an intelligent adult — it tells them something about the company they’ve joined. When they go through one that isn’t, it tells them something too. The content factory sends a signal it doesn’t know it’s sending.
And now it costs something new.
• • •
The AI inflection point changes the economics of the content factory in ways that most L&D leaders are only beginning to understand.
For most of the history of corporate training, content scarcity was a natural filter. Making things took time and money. That constraint meant you thought carefully before committing to a project. You prioritized. You made tradeoffs. The difficulty of production was an imperfect but functional check on the impulse to build everything.
AI removes that check. When content costs almost nothing to produce, the only thing preventing an organization from drowning its LMS in material is the judgment of the people deciding what to make. And in most organizations, that judgment is distributed — across stakeholders, subject matter experts, business unit leaders, whoever submitted the content request. It belongs to everyone, which means it belongs to no one.
The content factory, supercharged by AI, becomes a content flood. Thousands of modules, generated quickly, reviewed minimally, deployed broadly, watched rarely. The LMS fills. The search function becomes the only way to navigate it. The employee experience degrades further. The signal gets noisier.
The organizations that avoid this outcome will not do so by restricting access to AI tools. They will do so by building the Showrunner role before the flood arrives — by creating a center of creative gravity that all content answers to, that maintains standards not through bureaucracy but through taste, authority, and accountability.
The organizations that don’t will manage a content factory that has learned to run itself. Fast, cheap, and full of things nobody watches.
The content factory, supercharged by AI, becomes a content flood. The organizations that avoid this outcome will do so by building the Showrunner role before the flood arrives.
• • •
Here is the core argument of this book, stated plainly.
Your L&D department is producing content without a coherent creative authority at its center. This produces inconsistency, waste, declining employee trust, and content that fails to achieve the behavioral outcomes it was built to drive. The problem is not your people, your tools, your methodology, or your budget. The problem is a missing role.
That role is the Showrunner.
The Showrunner is not a new title for the CLO. It is not a rebrand of the instructional design function. It is a specific set of responsibilities — eleven of them, which we will work through in Part Two — that together constitute the creative and strategic leadership your content operation needs but has never had.
Some organizations have a person doing fragments of this work already. A senior instructional designer who has become the unofficial keeper of standards. A creative director who wandered in from marketing. A CLO who still insists on reviewing everything before it ships. If you have someone like this, you are closer than you think. The work ahead is to formalize the role, give it authority, and build the systems around it that let it scale.
If you don’t have someone like this, the work is harder but more important. You are running a content factory with no one responsible for whether what it produces is any good. That is a problem you can solve. But you have to decide to solve it, which means naming it first.
You’re running a show without a Showrunner.
The next chapter explains what television figured out about that problem — and why the solution translates more directly to your department than you might expect.
CHAPTER TWO
What Television Figured Out
The origin of the Showrunner role, why it exists, and why your department needs the same solution
In the early decades of American television, nobody ran the show.
That’s not quite accurate — plenty of people were in charge of things. The network owned the schedule and controlled the budget. The producer managed logistics and relationships. The director called the shots on set, episode by episode. Writers wrote what they were assigned. Studio executives weighed in on everything from casting to dialogue to how a character’s hair was cut in the third act.
What nobody did was hold it all. Nobody was responsible for the show as a unified creative work — for whether episode seven was consistent with episode three, for whether the characters sounded like themselves across a season, for whether the whole thing was building toward something or just filling a time slot.
The result was television that often felt exactly like that: assembled rather than authored. Competent, sometimes entertaining, rarely coherent in the way that makes an audience feel genuinely seen by what they’re watching.
Then something changed.
• • •
The change didn’t happen on a specific date or as a result of any single decision. It emerged over decades as a new kind of creative professional found their way to the center of the production process. They were usually writers first — people who understood story at the sentence level, who knew why a scene worked or didn’t, who could feel the difference between a character acting consistently and a character acting conveniently for the plot.
What distinguished them wasn’t just talent. It was scope. Where a director thinks in episodes and a writer thinks in scenes, the Showrunner thinks in seasons. They hold the longest view in the room. They know what the show is trying to say across thirteen episodes, and they make every creative decision — casting, tone, pacing, music, what to cut, what to expand — in service of that larger argument.
The title formalized over time. By the 1980s and 1990s, as serialized drama began to reward the kind of long-arc storytelling that required someone to hold continuity across years of production, the Showrunner became the central figure in American television. Not the most visible — stars and directors got the fame. But the most essential. The person without whom the show, as a coherent work, could not exist.
You can trace the quality of almost any television series to the presence or absence of a strong Showrunner. The shows that feel authored — that have a consistent voice, a sense of purpose, a reason for every creative choice — have one. The shows that feel like they were made by a rotating committee of competing visions, that shift tone unexpectedly, that introduce characters and abandon them, that seem to forget what they established two episodes ago — those shows either lack a Showrunner or have one whose authority has been compromised by the people above them.
The Showrunner thinks in seasons. They hold the longest view in the room — and they make every creative decision in service of a larger argument that only they are responsible for.
The lesson television learned, slowly and through a lot of bad television, is that great creative work requires a single point of creative accountability. Not committee approval. Not distributed ownership. One person who is responsible for whether the whole thing works, and who has the authority to make the decisions that ensure it does.
• • •
Now look at your L&D department through that lens.
You have writers — instructional designers who spend their days crafting learning objectives, scripting scenarios, sequencing content. They are good at what they do. But they are writing episodes, not seasons. Each project arrives as its own assignment, with its own stakeholder, its own timeline, its own definition of done. The relationship between this module and the thirty others in the same LMS is rarely anyone’s explicit concern.
You have directors — project managers, production coordinators, the people who get things shipped on time and within budget. Essential. But their job is execution, not vision. They are optimizing for the deliverable, not for whether the deliverable serves something larger.
You have a network — the business, the stakeholders, the executives who commission content and approve budgets. They know what they want. They are rarely equipped to know what the audience needs, which is a different question entirely.
You have an audience — your employees, who arrive at the LMS with finite attention, real skepticism, and a finely calibrated sense of whether their time is being respected.
What you almost certainly don’t have is the person who holds the whole thing. Who knows why this module has to land a specific way because of what was built last quarter. Who can tell you whether the voice in this new onboarding series is consistent with the voice in the manager development program. Who makes the call — and has the authority to make it — when a stakeholder’s request would produce content that undermines what you’ve been building.
The structure of your L&D department is almost identical to the structure of early television. Producers, directors, writers, network executives — everyone in their role, everyone doing their job, nobody running the show.
You know what that produced in television. It produced forty years of content that was, with occasional exceptions, forgettable.
• • •
The parallel goes deeper than org structure. Consider what a television Showrunner actually spends their time on, and compare it to the decisions that go unmade or under-made in most L&D departments.
A Showrunner maintains the show bible — the document that establishes the rules of the world, the voice of each character, the things the show will and will not do. It is the creative constitution that every writer checks before they write a scene and every director references before they block one. Without it, a show with multiple writers quickly becomes multiple shows wearing the same costume.
Your L&D department almost certainly has no equivalent document. Standards may exist, in some form — style guides, template libraries, brand guidelines borrowed from marketing. But a living document that defines the voice of your learning content, the rules of your instructional world, the things your department will and will not do creatively? Rarely. The L&D bible is the first thing a Showrunner builds, and the last thing most L&D departments think to create.
A Showrunner thinks about casting as a creative decision. Not just who is available or who the stakeholder wants on camera, but who should deliver this content to this audience at this moment. The same information lands differently depending on who presents it. A peer carries credibility that an executive can’t. An outside expert carries authority that an insider can’t. A character in a scenario can say things a real person can’t. Casting is one of the highest-leverage decisions in content production, and in most L&D departments it is made by default — whoever the subject matter expert is, whoever is available, whoever volunteered.
A Showrunner engineers the end of every episode to make the audience want the next one. They think about momentum, about what gets left unresolved, about the question that sits in the viewer’s mind after the credits roll. Most L&D content ends when the content ends. There is no architecture of compulsion. There is no reason, built into the experience itself, to return.
A Showrunner cancels shows. When a series isn’t working — when the audience isn’t responding, when the premise hasn’t delivered on its promise, when resources would be better spent elsewhere — a Showrunner makes the call to end it. Most L&D content never gets cancelled. It just accumulates, taking up space in the LMS, appearing in search results, occasionally getting assigned to someone new who completes it without learning anything. The content factory does not have a mechanism for cancellation because cancellation requires judgment, and judgment requires authority, and authority requires someone running the show.
A Showrunner cancels shows. Most L&D content never gets cancelled. It just accumulates — taking up space, appearing in search results, occasionally getting assigned to someone who learns nothing from it.
These are not exotic functions. They are the basic creative and editorial decisions that every content operation needs someone to make. Television learned this lesson. Publishing learned it. Podcasting learned it. Music learned it. Every medium that has matured from production-focused to audience-focused has produced a version of the Showrunner role — someone whose job is not to make content, but to make the content worth experiencing.
L&D is the last major content operation that hasn’t learned it yet.
• • •
There is an objection worth addressing here, because it comes up every time this conversation happens.
The objection is that corporate learning is fundamentally different from entertainment. That the goal of L&D is not to be engaging — it’s to change behavior, drive performance, meet compliance requirements. That borrowing concepts from television risks confusing the means with the end. That a well-produced module that nobody learns from is worse than a poorly produced one that changes how someone does their job.
This objection is correct about the goal. It is wrong about the implication.
The best Showrunners are not trying to entertain. They are trying to make audiences feel something true. The engagement is not the point — it is the mechanism. A viewer who is not engaged is not watching. An employee who is not engaged is not learning. You cannot separate the question of whether content is worth watching from the question of whether it works, because content that nobody watches cannot work at all.
The Showrunner’s job in L&D is not to make training feel like Netflix. It is to bring the same quality of creative thinking to the design of learning experiences that great television brings to the design of viewing experiences. To ask not just what needs to be communicated but how it should be communicated, by whom, in what order, with what stakes, to what end.
That is not an entertainment question. That is a learning design question. The Showrunner is the person who answers it.
• • •
Before we move to the eleven roles that define the Showrunner function, it’s worth sitting with one more observation from television history.
The shows that defined the medium — the ones that are still discussed, still studied, still watched — were not the ones with the biggest budgets. They were not always the ones with the most famous talent. They were the ones where someone with a clear vision had the authority to execute it consistently, over time, in service of an audience they genuinely respected.
The Sopranos was not a better show than its contemporaries because it had more money. It was a better show because David Chase had a precise idea of what it was and the authority to make it that way, every episode, for eight years. The same is true of every series that has earned a permanent place in the culture.
The best L&D organizations work the same way. Not the ones with the biggest budgets or the most sophisticated tools. The ones where someone has a clear idea of what the learning experience should feel like, the authority to make it that way, and the discipline to hold that standard across every piece of content the department produces.
That someone is the Showrunner.
In Part Two, we define exactly what they do.
PART TWO: THE ELEVEN ROLES
CHAPTER THREE — SHOWRUNNER ROLE 1 OF 11
Taste Arbiter
When anyone can make content, the only differentiator is judgment about what’s good
The review meeting has been going for forty minutes.
On the screen at the front of the room is a module the team has spent six weeks building. The production is clean. The script is accurate. The subject matter expert was well-prepared. By every measurable standard, it is a competent piece of learning content.
It is also, unmistakably, bad.
The pacing is wrong — not technically wrong, not in violation of any standard, but wrong in the way that a conversation can be technically correct and still feel like a chore to sit through. The tone is slightly off from everything else the department has made. There’s a section in the middle that goes on too long because the subject matter expert wanted it there and nobody felt they had standing to disagree. The opening scenario is meant to be relatable but lands as slightly condescending, treating the learner as someone who needs to be tricked into caring.
Everyone in the room can feel it. Nobody says it.
The module ships. Six months later, completion rates are average and behavioral outcomes are unmeasured. The team moves on to the next project. The quiet, shared knowledge that the module wasn’t quite right gets filed somewhere wordless and forgotten.
This is what an organization without a Taste Arbiter looks like. Not catastrophic. Just consistently, invisibly below what it could be.
• • •
THE ROLE
The Taste Arbiter is the person in your L&D department who has both the judgment to know when something is good and the authority to say so when it isn’t.
That description sounds simple. It is, in practice, among the rarest things in a corporate environment.
Judgment — real aesthetic and editorial judgment, the kind that can tell the difference between content that respects the learner and content that merely covers the material — is not widely distributed. It develops through sustained exposure to great content and sustained practice making content, through years of noticing what works and building an internal library of why. It is not teachable in a training session. It is not captured in a rubric. It lives in a person, and that person either exists in your organization or they don’t.
Authority is the other half, and it is equally scarce. In most L&D departments, final approval on content is distributed across stakeholders who have subject matter expertise but no creative authority, project managers who have process expertise but no editorial judgment, and executives who have organizational authority but no time. The result is that nobody can say “this isn’t good enough” and have it mean anything actionable.
The Taste Arbiter has both. They are the person whose “no” stops a module from shipping and whose “yes” gives the team confidence that what they’ve made is worth the audience’s time. They are the creative conscience of the department, and they operate with enough authority that their conscience actually functions.
Judgment is not widely distributed. It develops through years of noticing what works and building an internal library of why. It lives in a person — and that person either exists in your organization or they don’t.
• • •
WHAT IT LOOKS LIKE DONE WELL
A senior learning leader at a technology company — one of the few organizations whose L&D department has built something approaching a Showrunner function — describes her review process this way: she watches every piece of content the team produces before it ships, not to check facts or approve messaging, but to answer one question. Would I be embarrassed if a new employee’s first experience with this company was this module?
That question is not in any style guide. It is not a rubric item. It is a taste judgment, and it catches things that rubrics don’t. It catches the scenario that is technically correct but subtly implies that employees can’t be trusted. It catches the expert interview that is informative but so dry that nobody will finish it. It catches the module that covers everything on the content brief and communicates nothing worth knowing.
When something fails her test, she sends it back. Not with a list of changes — with a description of the feeling it produces and a question about what the team was trying to make the learner feel instead. The revision conversation starts from the audience’s experience, not from the content checklist. The team has internalized this over time. They have started asking her question themselves before the review meeting, which means fewer things fail the test — not because the standard has dropped, but because the standard has become shared.
That is what a Taste Arbiter does at full function. Not just catching bad work before it ships. Raising the collective taste of the entire team over time, until the standard lives in the room rather than in a single person.
• • •
WHAT IT LOOKS LIKE DONE BADLY
The most common failure mode for taste in L&D is not the absence of standards but the presence of the wrong ones.
Many departments have detailed style guides, production checklists, and quality review processes. These are useful. They are also not taste. They catch technical errors, consistency violations, and accessibility failures. They do not catch content that is dull, condescending, or subtly dishonest about what the learner is actually going to experience on the job.
A checklist can tell you that a module meets the minimum screen time per slide. It cannot tell you that the module is treating intelligent adults like they need to be walked through information they already know. A rubric can verify that learning objectives are measurable. It cannot tell you that the learning objectives are the wrong ones — that they are measuring the easy thing rather than the important thing because the important thing is harder to test.
The second failure mode is taste by committee. When everyone has a vote on whether something is good, the result is content that offends nobody and moves nobody. Every strong choice gets smoothed. Every interesting decision gets questioned until it becomes a safe one. The module that could have been memorable becomes the module that is merely inoffensive.
Taste is not democratic. Great editorial judgment is not the average of everyone’s preferences. It is the considered opinion of someone who has thought longer and harder about the audience’s experience than the people who made the content. It requires a person, not a process, and it requires that person to have enough authority that their opinion can survive a stakeholder who disagrees.
The third failure mode is confusing taste with preference. A Taste Arbiter is not someone who imposes their personal aesthetic on every piece of content. They are someone who can distinguish between “I would have done it differently” and “this doesn’t work.” The first is preference. The second is judgment. Great Taste Arbiters are ruthless about the second and generous about the first. They let the team make choices they wouldn’t have made, as long as those choices serve the audience. They draw the line at choices that don’t.
Taste is not democratic. Great editorial judgment is not the average of everyone’s preferences. It requires a person — and it requires that person to have enough authority that their opinion can survive a stakeholder who disagrees.
• • •
THE AI DIMENSION
The Taste Arbiter role has always mattered. AI makes it the most critical role in the department.
When content production required significant human effort, bad taste was expensive. You couldn’t afford to make very much content that was off-tone, condescending, or dull, because making content cost real money and real time. The economics imposed a partial filter.
When content production costs almost nothing, bad taste becomes catastrophic at scale. An organization without a Taste Arbiter and with access to AI content generation tools will fill its LMS with content that is technically proficient, factually accurate, stylistically consistent with itself, and completely incapable of moving a person. It will feel like content made by something that has processed a great deal of information about what learning content looks like without having any understanding of why learning content matters.
This is not a hypothetical. It is already happening in organizations that have adopted AI content tools without building the editorial layer that makes those tools produce something worth using. The volume increases. The quality floor drops. The audience — your employees — learns to treat the LMS as a compliance mechanism rather than a resource, and that lesson, once learned, is very hard to unteach.
The Taste Arbiter is the person who prevents this. Not by restricting the use of AI tools — those tools are genuinely useful and will only get more so — but by ensuring that everything those tools produce passes through a judgment that the tools themselves cannot exercise. The human in the loop who asks, every time: is this good enough for the people who have to sit through it?
• • •
FINDING YOUR TASTE ARBITER
The Taste Arbiter is rarely the most senior person in the L&D department, though it can be. It is rarely the most technically skilled instructional designer, though they may have started there. It is the person who, when they watch a piece of finished content, can articulate precisely what is wrong with it — not just that something feels off, but why, and what a better version would feel like.
They usually come from one of a handful of backgrounds: journalism, where the discipline of editing for a reader’s experience is built into the craft. Film or television production, where every decision is evaluated against its effect on the audience. Brand or creative direction, where consistency and voice are the job. Occasionally, they are deeply experienced instructional designers who have spent years developing their editorial instincts alongside their technical ones.
What they share is a practice of watching content from the audience’s position rather than the producer’s. Where most people who make content ask “did we cover everything?” the Taste Arbiter asks “would I want to keep watching?” These are different questions. The first is about completeness. The second is about respect. Your employees deserve both, and the Taste Arbiter is the reason they get them.
Look for this person in your organization before you look outside it. They may be doing fragments of this work already — the instructional designer who rewrites scripts on their own initiative because they can’t help themselves, the project manager who keeps raising the same kinds of concerns in review meetings, the CLO who asks the same question about every module before it ships. If you find them, formalize what they’re already doing. Give it a name and give it authority.
If you don’t find them internally, you are looking for a specific profile in a hire: someone who has made content at a high level in another field and is curious about applying that discipline to learning. They will not have an instructional design background. They will have something better — a developed sense of when something is working and the words to explain why when it isn’t.
• • •
The question to ask your organization this week: When did we last send something back because it wasn’t good enough — not because it was wrong, but because it was below the standard our audience deserves?
CHAPTER FOUR — SHOWRUNNER ROLE 2 OF 11
Brand Continuity Director
The bible, the voice, and the document that keeps a department coherent across years and dozens of projects
Imagine you could watch every piece of learning content your organization has produced in the last five years in a single sitting.
Not just the flagship programs — the onboarding series, the manager development curriculum, the leadership academy. Everything. The compliance modules. The product training. The one-off videos made for a specific team at a specific moment that nobody archived properly. The soft skills courses licensed from a vendor because there wasn’t time to build them. The short tutorials recorded during the pandemic that were supposed to be temporary and never got replaced.
Watch them all in sequence and ask yourself one question: does this feel like it was made by the same organization?
The answer, in almost every company, is no.
Not because the content is bad — some of it is excellent. Not because the people who made it didn’t care — most of them cared deeply. But because nothing connected them. No shared voice. No consistent relationship with the learner. No visual language that holds across years and projects and the inevitable turnover of the people who made it. Five years of content that tells five different stories about what kind of company this is.
An employee who has been at the organization for three years has sat through all of it. They have absorbed, without being able to articulate it, the message that the L&D function sends when it has no Brand Continuity Director: we are not one thing. We are many departments making many decisions that happen to land in the same system.
That is not the message any L&D leader would choose to send. It is the message of a department without a bible.
• • •
THE ROLE
The Brand Continuity Director is responsible for maintaining a coherent identity across everything the L&D department produces. Not uniformity — a leadership program and a compliance module should feel different, the way a drama and a documentary can feel different while both being unmistakably from the same network. But coherence. A shared set of values about what the learner deserves, what the organization sounds like, and what the experience of engaging with this department should feel like from the first module to the last.
The primary tool of the Brand Continuity Director is the bible.
In television, the show bible is the document that precedes production. It establishes the world: who the characters are, how they speak, what they want, what the show will and will not do. It is the creative constitution that writers check before they write a scene, that directors reference before they make a visual decision, that showrunners use to arbitrate disputes about whether a choice is in character or out of character for the show.
The L&D bible serves the same function. It is the document that answers, once and for all, the questions that currently get answered differently by every project team: What is the tone of this department’s content? How does it address the learner — as a colleague, a student, a professional? What does it assume about the learner’s intelligence and experience? What visual language holds across all content? What language does it use, and what language does it refuse? What does it believe about learning — about what works, about what respects the learner’s time, about what the relationship between this department and its audience should feel like?
These questions are currently being answered by default. The bible answers them on purpose.
The show bible is the document that precedes production. The L&D bible serves the same function — answering on purpose the questions that currently get answered by default on every project.
• • •
WHAT IT LOOKS LIKE DONE WELL
A consumer goods company with a global L&D team distributed across fourteen countries built their bible over six months, starting from a simple observation: content made in their North American headquarters sounded completely different from content made in their European offices, which sounded completely different from content made in their Asian markets. Not just in language — in tone, in the relationship assumed between the content and the learner, in the visual choices, in the pacing.
Their solution was not to impose the North American voice on every market. It was to find what was true across all of them — the values about the learner that every team shared even if they expressed them differently — and build the bible from that foundation. The document they produced was not a style guide. It was a set of commitments: we treat the learner as an expert in their own context. We never explain things they already know. We trust them to apply information without a scenario that dramatizes the obvious. We always tell them why something matters before we tell them what it is.
Those commitments were abstract enough to allow for cultural variation and specific enough to actually change decisions. A team in Germany could apply them differently than a team in Brazil and still produce content that felt, to someone who had experienced both, like it came from the same department. The bible had done its job: not enforcing sameness, but ensuring coherence.
The Brand Continuity Director in this case was a head of L&D who had spent time in both creative production and organizational development. She understood that the bible was not a document to be written once and filed. It was a living reference that needed to be revisited when the company went through significant change, updated when new content formats emerged, and actively used in review conversations rather than mentioned occasionally and ignored in practice. She reviewed every major piece of content against it before it shipped. When something violated a commitment, she named the commitment it violated — not “this doesn’t feel right” but “this assumes the learner needs to be shown what we’ve already told them we trust them to know.” That level of specificity changed the conversation.
• • •
WHAT IT LOOKS LIKE DONE BADLY
The most common failure of brand continuity in L&D is the style guide that gets mistaken for a bible.
Style guides are useful. They specify fonts, colors, template layouts, logo usage. They prevent the most visible inconsistencies — the module that uses the wrong brand blue, the video that opens with a logo animation from three rebrands ago. These are real problems worth solving.
They are not the problem the bible solves. The bible is not about how content looks. It is about what content believes. An organization can have perfect visual consistency across its entire LMS and still produce content that sends completely different messages about its relationship with its employees. The modules can all use the same font and still disagree, implicitly, about whether the learner is an intelligent adult or a compliance risk to be managed.
The second failure mode is the bible that gets written as a project and then abandoned as a document. This happens often. An L&D leader commissions a voice and tone guide. A vendor or internal team produces it. It gets announced, distributed, and filed. Within eighteen months, nobody is checking it. Within three years, it doesn’t reflect the organization anymore because the organization has changed and the document hasn’t.
A bible that isn’t used is worse than no bible, because it creates the illusion of consistency without the substance. Teams can point to it when asked about standards while making no actual decisions on its basis. The Brand Continuity Director’s job is not to produce the bible. It is to use it — actively, visibly, in every review conversation, as the living reference it was built to be.
The third failure is treating continuity as the enemy of creativity. Some L&D leaders resist the bible because they fear it will produce sameness — that every piece of content will start to feel like every other piece. This fear is understandable and almost always wrong. The bible doesn’t constrain creative choices. It constrains choices that are inconsistent with what the department has committed to be. Within those constraints, there is enormous room for variety, experimentation, and surprise. The best television shows are instantly recognizable and endlessly varied. The bible is the reason for both.
The bible doesn’t constrain creative choices. It constrains choices inconsistent with what the department has committed to be. Within those constraints, there is enormous room for variety, experimentation, and surprise.
• • •
THE AI DIMENSION
AI is a bible problem waiting to happen.
Every AI content tool is, in some sense, a very fast writer who has read a great deal of L&D content but has never worked at your company. It knows what learning content generally sounds like. It does not know what your learning content specifically sounds like. It does not know the commitments in your bible, because your bible does not exist in its training data. It will write in the voice of the average corporate L&D department, which is nobody’s goal.
The organizations that use AI content tools well have learned to treat the bible as the primary input. Before generating anything, they feed the tool their voice commitments, their assumptions about the learner, their list of language they use and language they refuse. They treat the AI as a very capable writer being briefed for the first time on a show they’ve never worked on. The briefing is the bible.
Without that briefing, AI-generated content will be consistent with itself in ways that have nothing to do with your organization’s identity. It will use a tone that sounds professional and generic. It will make assumptions about the learner that your bible might explicitly reject. It will produce content that, at scale, gradually replaces your organization’s voice with the averaged voice of every organization the model has processed.
The Brand Continuity Director is the person who prevents this. Not by reviewing every AI output manually — that doesn’t scale and defeats the efficiency purpose of the tools. But by ensuring that the bible is specific enough to function as a prompt, by training the team to use it that way, and by establishing the review standard that catches content that has drifted from the organization’s voice regardless of how it was produced.
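For teams that want to make this briefing mechanical rather than optional, the workflow can be scripted. What follows is a minimal illustrative sketch, not a prescribed implementation: it assumes the openai Python client and a plain-text bible file, and the file name, model name, and prompt wording are all placeholders you would replace with your own.

```python
# Illustrative sketch only: the bible as the standing briefing for every draft.
# Assumes the openai Python package; "led_bible.md" and the model name are
# placeholders, not references to any real file or approved tool.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Load the bible once. Sending it as the system message on every call means
# no draft is ever generated without the department's voice commitments attached.
with open("led_bible.md", encoding="utf-8") as f:
    BIBLE = f.read()

def draft_module_script(brief: str) -> str:
    """Return a first-draft script, briefed on the bible before the request."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; use whatever model your team has approved
        messages=[
            {"role": "system", "content": BIBLE},  # the briefing is the bible
            {"role": "user", "content": brief},    # the individual content request
        ],
    )
    return response.choices[0].message.content

# Example request. The output is a first draft for human review, not finished
# content: it still has to pass the review standard described above.
print(draft_module_script(
    "Draft a five-minute script introducing our expense policy to new hires."
))
```

The point of the sketch is the ordering: the bible enters the conversation before any individual request does, which is exactly the briefing discipline this section describes.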
• • •
BUILDING THE BIBLE
The bible is not a long document. The best ones are eight to twelve pages. They are dense with specificity and short on explanation, because explanation is for onboarding conversations and the bible is for decisions. When someone is in a review meeting arguing about whether a tone choice is right or wrong, they need a sentence they can point to, not a paragraph they need to interpret.
A complete L&D bible answers five questions.
Who is the learner? Not a demographic description — a relational one. What does this department assume about the person on the other side of its content? What do they already know? What do they deserve? What is the implicit contract between this department and its audience?
What is the voice? Not adjectives — examples. Show what the voice sounds like in practice. Write two versions of the same sentence: one that violates the voice and one that embodies it. The contrast is more useful than any description.
What does this department believe about learning? Every L&D department has a set of implicit assumptions about how people learn, what makes learning stick, what the relationship between content and behavior change actually is. Making those assumptions explicit creates a filter for every production decision. If you believe that people learn by doing, not by watching, that belief should show up in every piece of content you make — and the bible should make it impossible to produce passive content without explicitly acknowledging that you’re making an exception.
What will this department never do? The negative space of the bible is as important as the positive. Name the things your content refuses. The condescending scenario that explains things the learner already knows. The compliance-first framing that treats the employee as a risk to be managed. The wall-of-text module that demonstrates disrespect for the learner’s time without using those words. The things you refuse are often more defining than the things you do.
How does the department evolve? A bible without a revision process is a historical document. Build in the trigger: this document is reviewed when the company undergoes significant cultural or strategic change, when new content formats emerge that the current bible doesn’t address, and on an annual basis regardless. Name the person responsible for calling that review. Without a named owner and a named trigger, the bible ages out of relevance and nobody notices until the damage is done.
• • •
The question to ask your organization this week: If a new employee watched every piece of content in your LMS, what would they conclude about what kind of company this is — and is that the conclusion you’d choose?
CHAPTER FIVE — SHOWRUNNER ROLE 3 OF 11
Audience Architect
Designing a curriculum like a season — not a library of episodes but a sequence that builds toward something
A new manager joins your organization on a Monday.
By the end of her first quarter, she will have completed an onboarding program, three assigned compliance modules, a product certification, and the first two units of a leadership development curriculum her manager recommended. She will have spent somewhere between twelve and twenty hours inside your LMS. She will have encountered content produced by at least four different teams, in three different formats, with three different implicit assumptions about who she is and what she needs.
Ask her what she learned and she will give you an honest answer. She learned the compliance material well enough to pass the assessments. She retained parts of the onboarding content that were specific and memorable. The leadership curriculum she found interesting but disconnected — each unit felt complete in itself, but she couldn’t have told you what the whole thing was building toward. The product certification she finished because it was required.
Now ask her what the experience taught her about your company as a place that takes learning seriously.
This is the question most L&D departments never ask. The answer, in most organizations, is something like: it taught me that training is something that happens to me, not something I seek out. It taught me that the LMS is where I go when something is assigned. It taught me that learning, here, is compliance infrastructure with occasional exceptions.
That is the experience of an audience with no Architect.
• • •
THE ROLE
The Audience Architect is the person responsible for designing the learner’s experience as a sequence rather than a collection. Not individual modules — the arc. What the learner knows after module two that changes how they experience module five. What the onboarding series is building toward. What the relationship is between the compliance training and the culture it’s meant to reinforce. What a year inside this organization’s learning ecosystem is supposed to feel like, and what it’s supposed to produce.
The distinction between a library and a curriculum is the Audience Architect’s central concern.
A library is a collection of content organized for retrieval. It is useful. It allows people to find things they need when they need them. A library does not have a point of view about what you should encounter first, what should come next, or what the whole thing is trying to make you into. A library is neutral about sequence.
A curriculum is a designed experience. It has an argument — a belief about what the learner needs to know in what order, why that order matters, and what the learner will be able to do at the end that they couldn’t do at the beginning. A curriculum is opinionated about sequence. It is built by someone who has thought carefully about the learner’s journey, not just the learner’s access.
Most corporate LMS platforms are libraries. Most corporate L&D teams think they are building curricula. The Audience Architect is the person who knows the difference and makes the distinction real in practice.
A library is neutral about sequence. A curriculum is opinionated about it. Most corporate L&D teams think they are building curricula. The Audience Architect is the person who makes the distinction real.
• • •
THE SEASON MODEL
Television is useful here not as a metaphor but as a structural model.
A Showrunner designing a season thinks in terms of arcs. The audience enters the season in one state — with certain knowledge, certain expectations, certain relationships with the characters — and exits in a different state. The season is designed to produce that transformation. Every episode contributes to it. Some episodes establish information that won’t pay off until episode eight. Some create tension that episode four resolves. The season is not a sequence of self-contained stories. It is a single story told in installments, and every installment is designed with the whole in mind.
The Audience Architect applies this model to learning design. An onboarding program designed as a season has a beginning — what the new employee knows on day one, what they need to feel and understand in the first week to be able to receive everything that follows. It has a middle — the building of context, the introduction of complexity, the deliberate sequencing of concepts so that each one prepares the ground for the next. It has an end — a state the employee is in at the conclusion of their onboarding that is meaningfully different from where they started, and that has been deliberately engineered by the sequence of experiences that preceded it.
Most onboarding programs are not built this way. They are built as collections: here is the compliance module, here is the culture video, here is the product overview, here is the benefits information. The sequence is often determined by administrative convenience rather than learning logic. The employee finishes knowing a set of things that have no particular relationship to each other, having had no experience of content that built on itself, having been given no sense of what they were becoming in the process.
The season model asks a different set of questions before a single piece of content is built. What does the learner need to feel in week one that they won’t be able to feel in week four? What misunderstanding is most likely to form if we don’t address it early? What piece of context, if given in month one, would make every subsequent piece of content land differently? What is the learner capable of understanding at the end of this program that they genuinely could not have understood at the beginning, and what sequence of experiences produces that capability?
These are design questions. They require someone whose job is to hold the learner’s entire journey in mind while every other member of the team is focused on their individual deliverable. That is the Audience Architect.
• • •
WHAT IT LOOKS LIKE DONE WELL
A financial services firm redesigned their new advisor onboarding from a compliance-first checklist into a nine-week curriculum built around a single question: what does it feel like to be a client of this firm, and how does that feeling get created?
The sequence was designed backward from that question. In the final week, new advisors were given a real client scenario and asked to manage it. In week seven, they were introduced to the firm’s most complex products and the client situations in which they applied. In week three, they were given the compliance training — but framed not as regulatory obligation but as the specific client situations that the regulations were designed to protect. In week one, they spent three days as a client: calling the service line, navigating the website, reading the onboarding materials that clients receive.
The sequence was designed so that each experience created a question that the next experience answered. Week one created the question: how does this firm actually work from the inside? Week two answered it. Week three created the question: what can go wrong for clients, and how does the firm prevent it? Week four answered it. By week nine, the advisors had been prepared for the final scenario not by being told everything they needed to know, but by having been taken through a sequence of experiences that built their capacity to handle exactly that scenario.
The Audience Architect in this case was a learning designer who had spent years in client experience before moving to L&D. She brought the outside-in perspective that the curriculum had always lacked. She understood that the learner’s experience of the onboarding was itself a message about the firm’s values — that a firm that started new advisors with a week of experiencing the business from the client’s perspective was demonstrating, not just asserting, that client experience was the central value.
• • •
WHAT IT LOOKS LIKE DONE BADLY
The most common failure of audience architecture is the modular fallacy: the belief that if each individual piece of content is good, the curriculum will be good.
It won’t. A curriculum is not the sum of its modules any more than a season of television is the sum of its episodes. What makes a season work is not the quality of the individual episodes in isolation — it is the relationship between them. The callback in episode nine that only works because of what was established in episode two. The character decision in episode six that recontextualizes everything that came before it. The tension that builds across four episodes and releases in the fifth. These effects require someone thinking about the whole. A team of people each making excellent individual modules will not produce them by accident.
The second failure is designing for the average learner rather than the learner’s journey. An average-learner design asks: what does someone in this role generally need to know? A journey design asks: what does this specific person need to know at this specific moment in their development, given what they already know and what they are about to encounter? These questions produce different curricula. The first produces content libraries. The second produces experiences.
The third failure is treating completion as the goal. A curriculum designed for completion is a curriculum designed to be finished, not to change the learner. These are different design goals and they produce different content. A module designed for completion minimizes friction: clear objectives, straightforward scenarios, assessment questions that closely mirror the content. A module designed to change behavior creates productive friction: scenarios that don’t resolve cleanly, information that requires the learner to make a judgment rather than recall a fact, endings that leave something unresolved. The Audience Architect designs for the latter and accepts the risk that completion rates will be harder to celebrate.
A curriculum designed for completion is designed to be finished, not to change the learner. The Audience Architect designs for behavior change and accepts the risk that completion rates will be harder to celebrate.
• • •
THE AI DIMENSION
AI makes the Audience Architect role more important and more difficult simultaneously.
More important because AI accelerates content production to the point where the question of what to build becomes entirely separate from the question of how to build it. When production is fast and cheap, the design of the sequence is the only leverage point left. An organization that can generate a new module in an afternoon needs someone whose job is to ask, before the generation starts: does this module belong in the sequence? Where? What does it assume the learner already knows? What does it set up for what comes next?
More difficult because AI tools are optimized for individual content generation, not for sequence design. They are very good at producing a module. They are not good at knowing whether that module should exist, where it belongs in a larger arc, or what it will produce in the learner who encounters it at a specific point in their journey. The Audience Architect cannot delegate sequence design to AI. They can use AI to produce the content that the sequence calls for. The sequence itself is a human judgment.
There is also a subtler AI risk here. AI-generated content tends toward completeness: covering everything relevant to a topic, answering questions the learner hasn’t asked yet, front-loading information that would be more powerful if withheld until the learner has had an experience that makes them ready for it. This tendency undermines the season model, which depends on strategic withholding — on knowing what the learner doesn’t need to know yet, and trusting that the sequence will deliver it at the right moment. The Audience Architect edits AI output with this in mind: not just removing what’s wrong, but removing what’s not yet needed.
• • •
MAPPING THE JOURNEY
The Audience Architect’s primary deliverable is a sequence map: a document that shows, for every curriculum the department produces, the arc from beginning to end. Not just the list of modules but the logic of the sequence. What the learner knows at each stage. What question each module creates and what question it answers. What the learner is capable of at the end that they were not capable of at the beginning.
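For teams that want the map to be checkable as well as readable, the arc can be held as records rather than prose. Here is a sketch in Python, drawing on the advisor onboarding example from earlier in this chapter; the field names are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class SequenceEntry:
    """One module's position in the arc."""
    module: str
    learner_knows: str      # what the learner arrives knowing
    question_created: str   # the open loop this module leaves behind ("" if none)
    question_answered: str  # the earlier open loop this module closes ("" if none)

arc = [
    SequenceEntry("Week 1: three days as the client",
                  "nothing about internal operations",
                  "How does this firm actually work from the inside?", ""),
    SequenceEntry("Week 2: inside the firm",
                  "the client's experience of the firm",
                  "", "How does this firm actually work from the inside?"),
    SequenceEntry("Week 3: compliance as client protection",
                  "how the firm works, seen from both sides",
                  "What can go wrong for clients, and how does the firm prevent it?", ""),
]

def unanswered(entries: list[SequenceEntry]) -> set[str]:
    """Questions the sequence opens but never closes: the gaps the Architect hunts for."""
    created = {e.question_created for e in entries if e.question_created}
    answered = {e.question_answered for e in entries if e.question_answered}
    return created - answered
```

On this truncated sample, unanswered(arc) surfaces week three's question, which the full map would show week four closing. That is the map doing its job: making a gap visible before a learner falls into it.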
Building this map is the work that most L&D teams skip because it feels like planning overhead on top of production work. It is not overhead. It is the design work that makes the production work meaningful. A team without a sequence map is producing modules that may or may not fit together, hoping that the sequence will emerge from the collection. It doesn’t.
The map also functions as a communication tool. When a stakeholder requests a new module — and they always do — the Audience Architect can place that request in the map and ask two questions: where does this belong in the sequence, and what does it displace? These questions change the nature of the conversation. Instead of “should we build this?” the question becomes “where does this fit in what we’re building?” The map makes the curriculum visible to people who have never thought about it as a designed sequence, and visibility changes what they ask for.
The learner on Monday morning, working through a program designed by an Audience Architect, feels something different from the learner working through a collection. She feels that someone thought about her. That the sequence she’s moving through was designed for someone in her position, at this stage of her development, building toward something specific. That the organization she has joined considers her learning a design problem worth solving carefully.
That feeling is not incidental. It is the message the curriculum sends when an Audience Architect is running the show.
• • •
The question to ask your organization this week: Pick your most important curriculum. Can you draw the arc — what the learner knows at each stage, what question each module creates, what the learner can do at the end that they couldn’t do at the beginning?
CHAPTER SIX — SHOWRUNNER ROLE 4 OF 11
Cliffhanger Engineer
Engineering the end of every module to make the learner need the next one
There is a moment, somewhere in the middle of building a piece of learning content, when the team asks: how should this end?
In most organizations, the answer is implicit rather than considered. The content ends when the content is finished. The last learning objective has been addressed. The summary slide recaps the key points. A knowledge check confirms retention. The module closes with a screen that says something like “Congratulations — you’ve completed Module 3 of 8.”
The learner closes the tab.
Whether they open Module 4 tomorrow, next week, or never is not a question the team has designed for. The assumption, unstated but structural, is that completion is the learner’s responsibility. The content’s job is to deliver information. What happens next is up to the learner.
This assumption is wrong. Not morally wrong — practically wrong. It misunderstands what drives human behavior, what creates the experience of wanting to continue, and what the last ten seconds of any piece of content are actually for.
The last ten seconds are not a summary. They are a hook.
• • •
THE ROLE
The Cliffhanger Engineer is the person responsible for designing the end of every piece of content with the same intentionality that goes into designing the beginning. Their job is to ensure that the learner who reaches the end of a module leaves with something unresolved — a question they want answered, a tension they want released, a capability they can feel themselves on the verge of but not quite possessing yet.
The word “cliffhanger” comes from serial fiction: the practice of ending an installment at the moment of highest tension, when the outcome is genuinely uncertain and the reader or viewer has no choice but to return. The cliffhanger is an engineered compulsion. It works because human attention is not passive — we are wired to complete incomplete things, to resolve open loops, to find out what happens next. The cliffhanger exploits this wiring deliberately.
Learning content can exploit it too. Not by manufacturing false suspense or withholding information manipulatively, but by ending at the natural moment of maximum forward momentum — when the learner has just enough to act but not quite enough to act well, when the question the module has raised is alive in their mind but unanswered, when the thing they’ve just learned has revealed something they didn’t know they didn’t know.
That moment is not an accident. It is a design decision. The Cliffhanger Engineer makes it on purpose.
The last ten seconds of a piece of content are not a summary. They are a hook. The Cliffhanger Engineer designs the end with the same intentionality as the beginning — because what drives the learner to return is engineered, not assumed.
• • •
THE MECHANICS OF THE HOOK
There are several reliable structures for ending content in a way that creates forward momentum rather than closure. The Cliffhanger Engineer has a working vocabulary of all of them and chooses the right one for the content at hand.
The unanswered question is the most direct. The module raises a question explicitly — poses a scenario the learner doesn’t yet have the tools to resolve, or surfaces a tension that the next module addresses. The learner leaves knowing that the answer exists and knowing where to find it. This works best when the question is genuinely interesting and when the learner can feel the gap between what they know now and what they’ll know after the next module.
The partial capability is subtler. The module teaches something the learner can begin to apply but can’t yet apply fully. They leave with a new tool that only works in limited circumstances, aware that they’re not yet competent with it, aware that competence is available and proximate. The next module gives them what they need to close the gap. This works best for skill-based learning, where the learner can feel the incompleteness in themselves rather than just in the content.
The revealed complexity is the most intellectually engaging. The module has been teaching something the learner thought they understood. At the end, it reveals that the situation is more complex than they’d been assuming — that there are cases where the rule doesn’t hold, exceptions that matter, a layer of nuance that changes the picture. The learner leaves with their previous understanding slightly destabilized, curious about the fuller picture. This works best when the complexity is genuine and the destabilization is productive rather than overwhelming.
The immediate application is the most practically effective for behavior change. The module ends by sending the learner into the world with a specific, small task: try this in your next meeting, notice this the next time you’re in this situation, ask your manager this question before our next session. The learner leaves with an assignment rather than a summary. When they return to the content, they return with experience — with something that happened as a result of the last module, which makes the next one land differently.
These are not mutually exclusive. The best content endings combine more than one of them. But they share a common logic: the end of the module is not the end of the experience. It is the beginning of what comes next.
• • •
WHAT IT LOOKS LIKE DONE WELL
A healthcare organization building a patient communication curriculum for clinical staff made a decision early in the design process that changed the entire program: every module would end mid-conversation.
Not literally — the modules were complete in themselves, covering the concepts and techniques each one was designed to teach. But the final scenario in each module deliberately stopped at the moment of highest complexity. A difficult patient question. A family member whose distress was escalating. A situation where the clinical truth and the patient’s emotional need were pulling in different directions. The learner saw the setup and the first beat of the response. They did not see the resolution.
The resolution was the opening of the next module.
The effect was immediate and measurable. Voluntary continuation rates — the percentage of learners who opened the next module within 48 hours of completing the previous one, without a reminder or assignment — went from 34 percent to 71 percent. The team hadn’t changed the content. They hadn’t changed the platform. They hadn’t added gamification or incentives. They had changed the ending.
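For teams that want to watch the same signal, here is one plausible way to operationalize the metric. The record shapes are assumptions; real LMS event data will differ.

```python
from datetime import datetime, timedelta

def voluntary_continuation_rate(
    completed_at: dict[str, datetime],    # learner id -> when they finished module N
    opened_next_at: dict[str, datetime],  # learner id -> when they opened module N+1
    prompted: set[str],                   # learner ids who got a reminder or assignment
    window: timedelta = timedelta(hours=48),
) -> float:
    """Share of unprompted learners who opened the next module within the window."""
    eligible = [lid for lid in completed_at if lid not in prompted]
    if not eligible:
        return 0.0
    continued = sum(
        1 for lid in eligible
        if lid in opened_next_at
        and timedelta(0) <= opened_next_at[lid] - completed_at[lid] <= window
    )
    return continued / len(eligible)
```

Excluding prompted learners is the judgment call that makes the number mean something: a reminder email inflates continuation without telling you anything about the ending.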
The Cliffhanger Engineer on this project was a curriculum designer who had spent several years writing for television before moving into L&D. She brought one specific habit from that background: she never wrote an ending without first asking what the audience needed to feel in order to want to come back. Not what they needed to know. What they needed to feel. The distinction produced different endings than the team had been writing before, and different completion patterns than the organization had seen before.
• • •
WHAT IT LOOKS LIKE DONE BADLY
The most common failure of content endings is the recap. The module ends with a summary of what was just covered: three key points, restated in bullet form, often read aloud by a narrator who sounds as though they are also grateful this is ending.
The recap is the enemy of forward momentum. It signals that the experience is complete — that there is nothing left unresolved, no question worth carrying into the next module, no gap between where the learner is and where they could be. It is the content equivalent of turning the lights on in a cinema while the credits are still rolling. The spell breaks. The learner closes the tab. Whatever momentum had been building dissipates into the confirmation that this is now over.
Recaps have a legitimate function in long-form content — at the end of a multi-day program, when the learner genuinely needs to consolidate what they’ve covered before moving on. They do not belong at the end of a module in a sequence. At that moment, the goal is not consolidation. It is continuation.
The second failure is the motivational close. The module ends with an affirmation: “You’re now equipped with the tools to handle this situation with confidence.” Or: “Great work — you’re one step closer to mastery.” These closings are not wrong, exactly. They are empty. They do not create a reason to return. They close the loop rather than leaving it open. The learner feels finished rather than curious, capable rather than on the verge of something. Finished and capable are good feelings. They are not the feelings that drive behavior.
The third failure is designing endings in isolation. In most production processes, the end of a module is written by the same person who wrote the beginning — the instructional designer assigned to that module, working within the scope of that module, without necessarily knowing what comes next or what the learner will encounter when they arrive at the next module. This produces endings that are locally coherent and sequentially inert. The module ends well on its own terms. It does nothing to pull the learner into the next one.
The Cliffhanger Engineer works across modules, not within them. They think about endings in the context of beginnings: what does the learner need to be carrying when they open the next module? What question should be alive in their mind? What should they be slightly impatient to find out? These questions can only be answered by someone whose scope is the sequence, not the individual piece.
The recap is the enemy of forward momentum. It signals that the experience is complete — that there is nothing left unresolved. The Cliffhanger Engineer works across modules, not within them.
• • •
THE AI DIMENSION
AI generates content endings the way it generates content beginnings: by pattern-matching to what content endings typically look like. What content endings typically look like is a recap. The average of all the learning content an AI model has processed trends heavily toward summary, consolidation, and closure. These are the default endings. They are the endings AI will produce if not specifically directed otherwise.
The Cliffhanger Engineer’s job, in an AI-augmented production environment, is to rewrite every ending that AI generates. Not because the AI’s endings are wrong — they are often technically correct. But because correct is not the goal. The goal is compelling. The goal is a learner who closes the module with something unresolved in their mind and a reason to return.
This is a small intervention with a large effect. The content itself may be AI-generated. The ending is human-designed. The ending is where the Cliffhanger Engineer’s attention lives.
There is a deeper point here about the relationship between AI and human creative judgment in content production. AI is excellent at producing the middle of things — the explanation, the example, the scenario, the information. It is weakest at the edges: the opening that creates genuine curiosity rather than just stating a learning objective, and the ending that creates genuine compulsion rather than just summarizing what was covered. These edges are where the Cliffhanger Engineer and the Audience Architect do their most important work. The middle can be generated. The edges must be designed.
• • •
TEACHING THE TEAM
The Cliffhanger Engineer’s most leveraged work is not rewriting individual endings. It is changing the question the team asks when they reach the end of a module in production.
The current question, in most L&D departments, is: have we covered everything? The Cliffhanger Engineer introduces a second question: what does the learner leave wanting? These questions are not in conflict. They are sequential. Cover everything, then decide what to leave unresolved. The second question does not require additional content. It requires a different orientation to the content that already exists — a willingness to end before the full resolution, to trust the learner to carry an open question, to design for the next module’s opening rather than the current module’s closing.
Once a team has internalized this question, they start catching their own recap endings before the Cliffhanger Engineer has to catch them. The design sensibility spreads. The endings improve across the curriculum, not just in the modules the Cliffhanger Engineer has personally touched.
That spread is the goal. The Cliffhanger Engineer is not a quality control function at the end of the production line. They are a design influence at the beginning of it — changing what the team reaches for when they sit down to build something, so that what they build pulls the learner forward rather than releasing them.
• • •
The question to ask your organization this week: Pick the last module your team shipped. What does the learner leave wanting — and did you design for that, or did you design for completion?
CHAPTER SEVEN — SHOWRUNNER ROLE 5 OF 11
Casting Director
Who delivers content is a creative decision that changes everything — and almost nobody makes it on purpose
The module on psychological safety has been in the queue for three months.
Everyone agrees it’s important. The culture survey flagged psychological safety as the area where the organization most needs to grow. The CHRO has made it a priority. The L&D team has built the content: a well-researched, clearly scripted module that defines psychological safety accurately, explains why it matters, and walks managers through three concrete practices for building it on their teams.
The question of who delivers it has been answered by default. The subject matter expert is a senior organizational development consultant who helped design the research framework behind the survey. She knows more about psychological safety than anyone else in the organization. She is the obvious choice.
The module ships. Managers complete it. In the follow-up survey six months later, psychological safety scores are unchanged.
Nobody asks the casting question.
The casting question is this: of all the people in this organization who could deliver this content, who is the right person to deliver it to this audience at this moment — and why?
For a module about psychological safety, the answer is almost certainly not the organizational development consultant. She has credibility. She does not have the kind of credibility that changes behavior. The managers watching her already believe that psychological safety matters — the culture survey told them so. What they don’t believe, in the way that actually changes how they run their next team meeting, is that it’s possible for someone like them to build it.
The person who could make them believe that is another manager. Someone three levels above them who could talk about the time they got this wrong, what it cost, and what they did differently. Someone who has the credibility of having navigated the same pressures and made the harder choice. The consultant can teach psychological safety. The manager can make it feel possible.
These are different deliverables. Only one of them changes behavior.
• • •
THE ROLE
The Casting Director is responsible for the decision of who delivers every piece of content the department produces. Not who is available. Not who volunteered. Not who the stakeholder wants on camera. Who is the right person to carry this message to this audience at this moment, given what the content is trying to produce in the learner.
In film and television, casting is considered one of the highest-leverage decisions in production. A brilliant script with wrong casting produces a mediocre film. A good script with right casting can produce something transcendent. The performance is not just delivery — it is credibility, presence, the particular quality of attention that one human being gives to another, filtered through the specific history and identity of the person doing the giving. These things are not interchangeable. They cannot be scripted in.
The same is true in learning content, and for the same reasons. What a learner receives from content is not just information. It is information delivered by a person, and the person is part of the message. The learner is always, consciously or not, asking: why should I believe this? Why should I care? Why is this person the one telling me? The answer to those questions is determined by who was cast, and most L&D departments are not asking them.
The performance is not just delivery — it is credibility, presence, the particular quality of attention that one human being gives to another. These things cannot be scripted in. Casting is the decision that determines whether the content has any chance of working.
• • •
THE CASTING VOCABULARY
The Casting Director works with a vocabulary of delivery types, each with different effects on the learner. Understanding which type serves which content is the core of the role.
The Peer carries the credibility of shared experience. When a sales manager teaches sales managers, the implicit message is: this person has sat where you’re sitting, has faced what you’re facing, and has figured something out worth passing on. This credibility is powerful for content about skill development and behavior change, where the learner’s most significant resistance is often “this works in theory but not in my actual situation.” A peer dissolves that resistance in a way an expert cannot.
The Expert carries the credibility of depth. When a specialist teaches their specialty, the implicit message is: this person has spent years on this question and knows things you don’t. This credibility is powerful for content about technical knowledge, research-backed frameworks, or any situation where the learner needs to trust that the information is accurate and complete. The expert’s limitation is relatability — the learner may trust the expertise while doubting the applicability to their own context.
The Leader carries the credibility of authority and consequence. When a senior executive delivers content, the implicit message is: this organization considers this important enough to put its most senior people in front of you. This credibility is powerful for content that requires the learner to understand that something is a genuine organizational priority rather than an L&D initiative. The leader’s limitation is that their distance from the learner’s daily reality can undermine authenticity. A CEO talking about the importance of work-life balance while the learner works sixty-hour weeks is a casting decision that actively damages the content.
The Character carries the credibility of pure narrative. A fictional person in a scenario — well-written, specifically drawn, placed in a situation that the learner recognizes from their own experience — can demonstrate things that a real person cannot. Characters can fail explicitly, can model the wrong behavior without anyone losing face, can be placed in situations that are too sensitive for real people to occupy. The character’s limitation is that the learner always knows they’re watching fiction, which limits the emotional transfer. The best use of characters is for the initial problem demonstration, with real people taking over for the resolution and the reflection.
The Learner’s Own Voice is the most powerful and most underused delivery type. Content that prompts the learner to generate their own examples, articulate their own beliefs, or apply concepts to their own situations produces stronger retention and behavior change than any externally delivered content. The casting decision here is to cast the learner themselves. This requires a fundamentally different design approach — less delivery, more prompting — but it is available and its effects are well-documented.
• • •
WHAT IT LOOKS LIKE DONE WELL
A global technology company redesigning their manager development program made casting the first design decision rather than the last.
Before a single module was scripted, the Casting Director — a learning designer with a background in documentary film — spent four weeks interviewing managers at every level of the organization. She was not gathering content for the curriculum. She was auditing the human library: finding the people inside the organization whose specific experiences, told in their own voices, would carry the content more powerfully than any external expert or internal trainer.
What she found was a set of managers who had each navigated a specific kind of difficult situation — a team in conflict, a performance problem that turned out to be a mental health issue, an inherited dysfunction that took two years to resolve — with enough reflection and enough honesty to tell the story in a way that was genuinely useful to someone facing a similar situation for the first time. These were not polished speakers. They were not organizational development experts. They were credible in the specific, irreplaceable way that only comes from having actually done the thing.
The curriculum was built around their stories. Not as decoration — as the primary delivery mechanism. The expert content existed to frame and extend what the managers had shared. The scenarios were built from their actual situations. The assessments asked learners to compare their own contexts to the contexts they’d heard described. The completion rate for this program was the highest the company had recorded for any voluntary development curriculum. The qualitative feedback said the same thing in different words: it felt real.
Casting it with the right people was the reason.
• • •
WHAT IT LOOKS LIKE DONE BADLY
The most damaging casting failure in corporate L&D is the default to the subject matter expert.
Subject matter experts are cast by default because they are the obvious answer to the question “who knows this?” They do know it. That is not the only question that matters. The question that matters is who should teach it — and “who knows it best” and “who should teach it” have different answers more often than most L&D teams realize.
Subject matter experts have several common limitations as delivery vehicles. They tend to overload content with the full depth of their knowledge, including information the learner doesn’t need at this stage of their development. They tend to underestimate the distance between their level of expertise and the learner’s starting point, producing explanations that skip steps the expert has long since internalized. They tend toward the explicit, explaining rather than demonstrating, telling rather than showing. And they tend to underestimate the degree to which their credibility with the learner depends on factors other than expertise: shared experience, shared identity, shared struggle.
None of this means subject matter experts should not deliver content. It means the casting decision should be made deliberately, with an honest assessment of what the learner needs in order to receive the information rather than just who has the information to give.
The second casting failure is the executive mandate. A senior leader decides that a topic is important and makes themselves the face of the content. This sometimes works: a leader with genuine credibility on a topic, speaking with authentic conviction, can produce content that an instructional designer could never produce. More often it produces content that is technically accomplished and emotionally inert — a senior person performing sincerity about a topic they have assigned others to care about. The learner can feel the difference. The Casting Director’s job is to have the conversation that redirects the executive’s energy toward a role where their contribution is genuine rather than obligatory.
The third failure is treating all delivery types as interchangeable based on availability. The Casting Director casts for the role, then finds the person. Most L&D departments find the person first, then write the role around them. These processes produce different content, and the difference is felt by the learner even when they cannot articulate why.
The Casting Director casts for the role, then finds the person. Most L&D departments find the person first, then write the role around them. The learner feels the difference even when they cannot articulate why.
• • •
THE AI DIMENSION
AI has introduced a new category of casting decision that most L&D departments are navigating without a framework: the synthetic human.
AI-generated avatars and voices can now deliver content with a level of production quality that was previously achievable only with significant investment in on-camera talent. They are consistent, available, inexpensive, and immune to the scheduling constraints and performance variability of real people. They are also, in a specific and important way, nobody.
A synthetic presenter carries no credibility of experience. They have no history with the learner’s industry, no scars from the situations they’re describing, no particular reason to be trusted beyond the quality of their rendering. For content where the delivery vehicle is relatively neutral — procedural information, technical instructions, compliance requirements — this limitation is minor. For content where the learner’s resistance is primarily about whether this applies to someone like them, it is disqualifying.
The Casting Director’s framework applies to synthetic presenters as it applies to human ones: what does this learner need from the person delivering this content, and can a synthetic presenter provide it? Sometimes the answer is yes. Often it is not. The decision should be made deliberately rather than by default — synthetic because it’s available and cheap is a different decision than synthetic because this content genuinely does not require the credibility that only a real person can provide.
The best use of AI in the casting decision is not as a presenter but as a production resource: generating the scripts that real people will deliver, creating the scenarios that human characters will inhabit, producing the supporting content that frames and extends the human delivery. AI handles the production. Humans carry the credibility. The Casting Director decides where the line falls.
• • •
THE CASTING CONVERSATION
The Casting Director’s most important skill is not knowing who to cast. It is knowing how to have the conversation that changes who gets cast.
Most casting decisions in L&D are made before the L&D team is involved. The stakeholder has a topic, an expert, and an implicit assumption that the expert will deliver the content. The L&D team’s job, in this default model, is execution: take the expert’s knowledge and turn it into a module. The casting decision has already been made.
The Casting Director intervenes before this default sets. They ask the casting question early — in the intake conversation, before the design process begins. They make the question feel natural rather than challenging: not “your subject matter expert is wrong for this” but “what does this learner need to believe in order for this content to work, and who in this organization is most credible on that specific question?”
This question reframes the conversation from content delivery to behavior change. It moves the stakeholder from “who knows this?” to “who can change this?” Those are different questions. They often have different answers. And the person who surfaces that difference — early, as a design question rather than a production problem — is the Casting Director.
• • •
The question to ask your organization this week: For your most important piece of content currently in production: who is delivering it, and is that person the right choice — or the available one?
CHAPTER EIGHT — SHOWRUNNER ROLE 6 OF 11
Writers Room Facilitator
Running the room when AI is the fastest writer but the worst editor of its own work
The script took eleven minutes to generate.
An instructional designer on a healthcare team had used an AI tool to produce a first draft of a patient safety module. She fed it the learning objectives, a summary of the source material, and the target audience profile. Eleven minutes later she had a complete script: twelve hundred words, clearly organized, accurate, with a scenario, a subject matter expert section, a knowledge check, and a summary.
She read it twice. It was correct. It covered everything it was supposed to cover. The scenario was plausible. The summary hit all the learning objectives. The tone was professional and appropriately serious.
It was also, somehow, nobody.
She couldn’t quite name what was wrong. The script didn’t sound like their organization. It didn’t sound like the subject matter expert who would be delivering it. It didn’t have any of the specific texture — the particular way their department talked about patient safety, the phrases their clinicians actually used, the examples that would land because they came from this hospital rather than from the averaged experience of every hospital the model had ever processed.
She spent two hours revising. By the time she was done, about forty percent of the original text remained. The rest had been replaced with language that sounded like the organization she worked in, delivered by the person who would be on camera, building toward the outcome the content was actually designed to produce.
Eleven minutes to generate. Two hours to make it real.
The Writers Room Facilitator is the person who closes that gap systematically rather than project by project.
• • •
THE ROLE
The Writers Room Facilitator is responsible for the process by which content gets written — whether that writing is done by humans, by AI, or by some combination — and for ensuring that what comes out of that process sounds like the organization it represents and serves the audience it was built for.
The television writers room is one of the most productive creative environments ever designed. A group of writers, working together under the guidance of a Showrunner or senior writer, generates, debates, and refines story ideas at a speed and quality that no individual writer working alone can match. The room has rules: every idea gets heard, nothing is personal, the standard is what serves the story rather than what any individual writer prefers, and the Showrunner has the final word. Within those rules, the room is generative in a way that individual work rarely is.
The L&D writers room looks different but operates on the same principles. It may not be a literal room. It may be a structured review process, a set of prompting conventions for AI tools, a workflow that moves content through a sequence of human judgment before it ships. What makes it a room rather than a pipeline is the presence of a facilitator whose job is not to produce content but to run the process that produces good content — to prompt well, kill ideas that don’t serve the argument, maintain the voice of the series across every piece of writing that passes through, and make the call when something is close but not right.
What makes it a room rather than a pipeline is the presence of a facilitator whose job is not to produce content but to run the process that produces good content — prompting well, killing what doesn’t serve, making the call when something is close but not right.
• • •
WHAT THE FACILITATOR ACTUALLY DOES
The Writers Room Facilitator does five things that nobody else in the production process is positioned to do.
They prompt with intent. AI tools produce output that reflects the quality of the input they receive. A vague prompt produces generic content. A specific, well-constructed prompt — one that encodes the organization’s voice, the learner’s context, the specific argument the content needs to make, and the particular way this organization talks about this subject — produces content that is genuinely useful rather than generically correct. Prompting well is a craft. It requires a deep understanding of both what the content needs to accomplish and how to translate that understanding into language a tool can act on. The Writers Room Facilitator develops this craft deliberately and applies it consistently.
They kill with confidence. The most important editorial skill in any writing process is knowing what to cut. This is true whether the writing is done by humans or by AI, and it is harder with AI because AI output rarely contains obvious errors. It contains subtler failures: the correct information in the wrong order, the accurate example that doesn’t land for this audience, the professionally appropriate tone that happens to be completely wrong for this organization’s voice. Killing these failures requires confidence — the willingness to remove something that is defensible in isolation because it doesn’t serve the whole. The Writers Room Facilitator has this confidence because they have been given the authority to exercise it.
They hold the voice. Across a curriculum that might involve dozens of modules, produced over months, by a team with natural turnover, with AI tools that trend toward the generic default — the voice drifts. Modules produced in January sound different from modules produced in October. The facilitator is the person who notices the drift and corrects it, not by enforcing a rulebook but by maintaining a live sense of what the content should sound like and catching the moments when it doesn’t.
They manage the room. In a multi-person writing process — which most L&D production involves, even if it’s not structured as a room — the facilitator manages the dynamics that determine whether the process is generative or political. The subject matter expert who wants to include everything they know. The stakeholder who wants to soften the message. The designer who has strong preferences that are about personal taste rather than the audience’s needs. The facilitator redirects these forces without making enemies, because they are always arguing from the same position: what does the learner need, and what serves that?
They develop the team. The highest leverage work of the Writers Room Facilitator is not producing any individual piece of content. It is raising the writing quality of the entire team over time. By modeling good prompting, by naming the specific failures they’re catching and explaining why they’re failures, by creating a shared vocabulary for what good writing looks like in this specific context, the facilitator gradually improves the quality of what the team brings to the room. The room becomes less dependent on the facilitator’s corrections because the team has internalized the standard. This is the goal.
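To make the first of those five concrete: a prompt with intent carries the bible, the learner, and the argument, not just the topic. Here is a minimal sketch, with every parameter name invented for illustration.

```python
def draft_prompt(bible_preamble: str, content_type: str, learner_context: str,
                 argument: str, source_summary: str) -> str:
    """Assemble a generation prompt that encodes intent, not just subject matter."""
    return "\n\n".join([
        bible_preamble,  # the bible rendered as text: voice examples, beliefs, refusals
        f"Content type: {content_type}. Follow this department's structure for it.",
        f"Learner context: {learner_context}",
        f"The one argument this piece must make: {argument}",
        f"Source material, summarized: {source_summary}",
        "End on an open question or an unresolved scenario, not a recap.",
    ])
```

Note the final line. The ending instruction is exactly the kind of default-overriding direction that, as the Cliffhanger Engineer's chapter argued, AI will not supply on its own.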
• • •
WHAT IT LOOKS LIKE DONE WELL
A retail organization with a high-volume content operation — producing more than two hundred pieces of new learning content per year for frontline employees — was the first team in their industry to build a structured writers room process around AI tools.
The Writers Room Facilitator they hired had a background in journalism and branded content. She had never worked in L&D before. She was hired because she could write quickly, edit ruthlessly, hold a voice across a large body of work, and run a room. These turned out to be precisely the skills the role required.
Her first project was building the prompt library: a set of carefully tested prompts for each content type the team produced, each one encoding the organization’s voice, the learner profile, the structural requirements of that content type, and a set of explicit instructions about what the organization’s content does and does not do. Building the library took six weeks and required her to produce and evaluate hundreds of AI outputs before she had prompts she trusted.
The second project was establishing the kill criteria: a short, specific list of the failures that automatically sent content back for revision. Not a comprehensive quality rubric — five criteria, each one named and described with an example of a real failure. The team could apply these criteria themselves. They did. The volume of content that reached the facilitator for final review dropped by sixty percent within three months, because the team was catching the most common failures before they escalated.
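Encoded, a kill list built on her pattern might look something like this sketch. The five criteria below are invented for illustration; real ones are named after real failures.

```python
# A kill list: few criteria, each named, each described by a concrete failure.
KILL_CRITERIA = {
    "generic_voice": "Reads like any company's content; fails the bible's voice examples.",
    "wrong_order": "Accurate information sequenced against the learner's journey.",
    "recap_ending": "Closes with a summary instead of an open loop.",
    "expert_depth": "Carries detail the learner does not need at this stage.",
    "audience_drift": "Examples do not come from this learner's world.",
}

def triage(flagged: set[str]) -> str:
    """One flagged criterion is enough to send a draft back; no weighting, no debate."""
    unknown = flagged - set(KILL_CRITERIA)
    if unknown:
        raise ValueError(f"Unnamed criteria {unknown}: add them to the list before flagging")
    return "revise" if flagged else "advance"
```

The design choice worth copying is the binary. One flagged criterion sends the draft back, which is what lets the team apply the list without the facilitator in the room.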
The third project, ongoing, was the voice audit: a monthly review of a random sample of recently shipped content against the organization’s bible. The audit caught drift before it became entrenched. It produced a monthly report that the team read as a calibration document rather than a performance review. Over two years, the voice of the organization’s learning content became more consistent than it had ever been, despite the volume of AI-generated first drafts passing through the process.
The facilitator’s title was Senior Content Strategist. Her actual role was Writers Room Facilitator. The organization didn’t have a name for what she was doing. They knew it was working.
• • •
WHAT IT LOOKS LIKE DONE BADLY
The most common failure of the writers room in L&D is the absence of one.
Most L&D teams do not have a structured writing process. They have a production process: a sequence of steps from content brief to published module. Writing happens somewhere in the middle of that process, assigned to whoever is responsible for the project, completed according to a timeline, reviewed by a subject matter expert and a stakeholder, and shipped. The review is for accuracy. Nobody is reviewing for voice, for argument quality, for whether the writing serves the learner or merely covers the material.
The introduction of AI tools into this process makes the absence more consequential. AI produces first drafts quickly enough that teams are tempted to skip the writing process entirely — to treat the AI output as a finished draft rather than a starting point, to move from generation to review to ship without a meaningful editorial intervention in between. The content that results from this process is not bad in any obvious way. It is bad in the way that only becomes visible over time, as the accumulated weight of generic, voice-less, technically adequate content gradually erodes the learner’s trust that any of it is worth their attention.
The second failure is the facilitator who facilitates without authority. A person who reviews content and suggests changes but has no standing to require them is not a Writers Room Facilitator. They are a commenter. Their suggestions go into a document. The subject matter expert accepts the ones they agree with and declines the ones they don’t. The stakeholder does the same. The writing ends up reflecting the preferences of whoever had the most authority in the review process, which is rarely the person who knew the most about what the learner needed.
Writing authority is not popular. Telling a subject matter expert that their preferred explanation is too complex for this audience, or telling a senior leader that their preferred framing undermines the content’s credibility with the people who will receive it, requires the kind of standing that most L&D teams do not give their writers. The Writers Room Facilitator needs that standing. Without it, the room is not a room. It is a review meeting where everyone’s opinion counts equally, which means the content ends up reflecting the average rather than the best.
Writing authority is not popular. Telling a subject matter expert their explanation is too complex, or telling a senior leader their framing undermines the content, requires standing that most L&D teams do not give their writers. Without it, the room is not a room.
• • •
THE AI DIMENSION
The Writers Room Facilitator is the role most directly transformed by AI — not replaced, transformed.
Before AI, the writers room was primarily about generation: helping a team produce more and better writing than any individual could produce alone. After AI, generation is fast and cheap. The writers room is now primarily about curation, calibration, and voice: taking the enormous volume of generated content and ensuring that what makes it into the world sounds like the organization it represents and serves the learner it was built for.
This is a harder job in some ways and an easier one in others. Harder because the volume is higher and the failures are subtler — AI doesn’t produce obviously wrong content, it produces plausibly correct content that may be wrong for this specific context. Easier because the facilitator no longer needs to generate content themselves, which frees their attention for the judgment work that only they can do.
The specific AI skill the Writers Room Facilitator develops is prompt engineering as editorial practice. Not the technical aspects of prompt construction — those are learnable quickly. The editorial aspect: knowing what to ask for in a way that produces content worth editing rather than content that needs to be replaced. This requires the facilitator to have a clear model of what good content looks like before they prompt, and the ability to translate that model into instructions specific enough that the tool can act on them. It is the same skill a good editor brings to a conversation with a writer: knowing what you want before the writer has written anything, and knowing how to ask for it in a way that produces what you need.
• • •
The question to ask your organization this week: When your team generates a first draft — by AI or by a human writer — who is responsible for ensuring it sounds like your organization? Is that person’s authority to change it real, or nominal?
CHAPTER NINE — SHOWRUNNER ROLE 7 OF 11
Production Economist
Allocating the L&D budget like a production slate, not a service catalog — and having the authority to make that argument and win it
Every year, the L&D budget gets divided.
The division follows a logic that nobody designed and nobody would defend if asked to defend it explicitly. Compliance training gets funded because it has to. Onboarding gets funded because the business demands it. The remaining budget gets distributed across requests submitted by business units, prioritized by seniority of the requestor, urgency of the ask, and whatever the organization is focused on this quarter. The result is a portfolio of content that reflects the organization’s politics more than its learning needs.
Each project gets roughly what it needs to be completed. None of them gets what it would need to be exceptional. The budget is spread evenly enough that everything gets made, and thinly enough that nothing gets made well.
This is the service catalog model. The L&D department is a service provider. Stakeholders submit requests. The department fulfills them. Budget allocation follows the requests. The department’s job is execution, not judgment. The question of whether the portfolio of projects represents the best use of available resources is not a question anyone is asking.
The Production Economist asks it.
• • •
THE ROLE
The Production Economist is responsible for treating the L&D budget as a production slate rather than a service queue — for making deliberate decisions about where to concentrate resources, where to go deliberately minimal, and what not to fund at all, in service of the learning outcomes that matter most to the organization.
In film and television, the production slate is the set of projects a studio or network has committed to making in a given period. Slate decisions are strategic: they reflect a theory about what the audience wants, what the competitive landscape demands, what kind of work the studio does best, and where investment is most likely to generate return. Some projects get greenlit with full resources because they are the ones that will define the studio’s identity. Some get made on lean budgets because they are worth making but not worth betting the house on. Some don’t get made at all, because the best studios understand that their most valuable resource is not money but attention, and attention spent on a weak project is attention not spent on a strong one.
The L&D Production Economist applies this thinking to the annual budget. Not every project deserves equal investment. Some learning challenges are genuinely strategic — the capability that will determine whether the organization can execute its most important initiative, the behavioral change that is the actual bottleneck to performance improvement, the onboarding experience that will determine whether new hires stay or leave in their first ninety days. These projects deserve flagship-level investment. They should be produced at the highest quality the budget can sustain, with the best talent, the most thoughtful design, the most rigorous measurement.
Other projects are maintenance: compliance requirements, procedural updates, reference material that employees will search for rather than be assigned. These projects deserve efficient execution. They should be produced at the quality level required for them to function, no more. Spending flagship resources on a regulatory update is not good stewardship. It is the absence of judgment about where quality changes outcomes and where it doesn’t.
The Production Economist knows the difference between flagship content that defines what the organization can become and maintenance content that keeps the lights on. More importantly, they have the authority to act on it.
• • •
THE PRODUCTION SLATE FRAMEWORK
The Production Economist works from a framework that categorizes every content project in the portfolio before resources are allocated. The categories are not based on the subject matter or the requesting stakeholder. They are based on the relationship between investment and outcome.
Flagship projects are the ones where quality changes behavior at scale. The new manager onboarding program at a company where half of first-year attrition is driven by bad management is a flagship project. The sales methodology training at a company whose top revenue priority is moving upmarket is a flagship project. The safety training at a company where a recordable incident has real operational and reputational consequences is a flagship project. These projects get full investment: the best design thinking, the most skilled facilitators or on-camera talent, the most rigorous measurement framework, the longest development timeline. They are produced to a standard that reflects the stakes.
Standard projects are the ones where quality matters but the marginal return on additional investment is low. A mid-level product training for a stable product with an experienced salesforce. A refresher course for a skill the team already has but needs to maintain. A culture communication piece that reinforces values the organization has already established. These projects get competent execution. They benefit from good design and clear writing. They do not benefit from production investment that pushes them toward flagship quality, because the audience’s need does not require it.
Lean projects are the ones where the primary requirement is accuracy and accessibility, not engagement. Compliance modules that satisfy a regulatory requirement. Reference documentation formatted as a learning experience. Updates to existing content that reflect policy changes rather than capability gaps. These projects get the minimum viable investment required for them to function. A lean project produced efficiently is a good lean project. A lean project produced at flagship cost is a budget failure.
The fourth category is the one most organizations never name: projects that should not be made at all. The stakeholder request for content that addresses a problem content cannot solve. The module that would take two months to produce and be completed once by thirty people. The training response to a performance issue that is actually a management issue, a process issue, or a hiring issue. The Production Economist is the person who identifies these projects early and redirects the resources they would have consumed toward something that will actually work.
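The framework is simple enough to write down as logic. Here is a minimal sketch, assuming a project can be summarized by a few honest judgments; the field names and example projects are illustrative only:

    FLAGSHIP, STANDARD, LEAN, DECLINE = "flagship", "standard", "lean", "decline"

    def categorize(project: dict) -> str:
        """Map a project to a slate category. The inputs are judgments,
        not data pulls: someone has to make them honestly."""
        if not project["content_can_solve_it"]:
            return DECLINE        # redirect the resources; do not fund
        if project["on_strategic_critical_path"]:
            return FLAGSHIP       # quality changes behavior at scale
        if project["primary_need"] == "accuracy_and_access":
            return LEAN           # minimum viable investment
        return STANDARD           # competent execution, no more

    proposals = [
        {"name": "New manager onboarding", "content_can_solve_it": True,
         "on_strategic_critical_path": True, "primary_need": "behavior_change"},
        {"name": "Policy update reference", "content_can_solve_it": True,
         "on_strategic_critical_path": False, "primary_need": "accuracy_and_access"},
        {"name": "Training for a staffing problem", "content_can_solve_it": False,
         "on_strategic_critical_path": False, "primary_need": "behavior_change"},
    ]
    for p in proposals:
        print(p["name"], "->", categorize(p))

The sketch is trivial on purpose. The hard part is not the branching; it is having the standing to record honest values for those judgments and to act on the result.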
• • •
WHAT IT LOOKS LIKE DONE WELL
A professional services firm with a twelve-person L&D team and a fixed annual budget of two million dollars had been running a service catalog model for six years. At the end of each year, they could point to more than two hundred pieces of content produced. They could not point to a single business outcome they were confident the content had caused.
A new CLO arrived with a simple proposition: we are going to make fewer things and make them matter. She hired a Production Economist — a learning strategist with a background in business consulting who had spent three years at a major studio in a production finance role before moving into L&D. His job was to build the production slate.
The first slate had seven flagship projects, identified by working backward from the firm’s three-year strategic plan. Each flagship was chosen because it addressed a specific capability gap that was on the critical path to a strategic priority. Each one received a budget allocation between three and four times what the firm had previously spent on comparable projects. Each one was developed over a longer timeline, with more design iterations, more rigorous measurement, and more involvement from senior business leadership than anything the L&D team had previously produced.
The remaining budget funded twenty-two standard projects and an entirely new category: a library of lean assets produced quickly and cheaply using AI tools, covering the maintenance content that had previously consumed a third of the budget and a disproportionate share of the team’s attention.
At the end of the first year, the team had produced fewer pieces of content than in any previous year. They had also produced the highest-impact content in the firm’s history — measured not by completion rates but by the specific behavior changes the flagship projects were designed to produce, tracked in collaboration with the business units that owned the outcomes.
The Production Economist’s contribution was not the content. It was the framework that decided which content was worth making and at what level of investment. That framework did not require additional budget. It required the authority to say no to some requests and the analytical rigor to explain why.
• • •
WHAT IT LOOKS LIKE DONE BADLY
The service catalog model fails in a specific and predictable way: it produces a portfolio that is politically defensible and strategically incoherent.
Every project in a service catalog portfolio can be justified. The compliance training is required. The business unit requested the product training. The leadership team asked for the culture module. The new hire orientation has always been funded. There is a stakeholder behind every line item, and every stakeholder has a reason their project matters.
What the portfolio cannot do is answer the question: if we could only fund three things this year, what would they be, and why? That question exposes the absence of strategic prioritization. In a service catalog, the answer is determined by whoever asks loudest. In a production slate, the answer is determined by the theory of change that the L&D function has developed and defended.
The first failure of bad production economics is the equal distribution fallacy: the belief that spreading resources evenly across the portfolio is fair and therefore correct. It is fair in the political sense — every stakeholder gets something. It is incorrect in the strategic sense — it produces a portfolio of adequately resourced mediocrity rather than a concentrated portfolio of meaningful work. Fairness and strategy are different values, and confusing them is one of the most expensive mistakes an L&D leader can make.
The second failure is the cost-per-module metric. Organizations that measure L&D efficiency by the cost of producing a module are optimizing for the wrong thing. A module that costs forty thousand dollars to produce and changes a behavior that improves revenue by two million dollars is a better investment than a module that costs four thousand dollars and changes nothing. Cost per module is a production metric. The Production Economist works in a different unit of measurement: investment relative to outcome. These metrics produce different decisions, and the difference compounds over years.
The third failure is the inability to say no. The Production Economist’s most important function is declining projects that do not merit the investment they would require. This is politically difficult. Stakeholders are not accustomed to being told that their content request will not be funded. They are especially not accustomed to being told that their request will not be funded because the L&D department has decided to concentrate resources elsewhere. The Production Economist needs the organizational standing to have this conversation and the analytical framework to win it — to make the case not that the request is unimportant, but that the alternative use of resources is more important, with evidence that the business finds compelling.
• • •
THE ROI CONVERSATION
The Production Economist changes the nature of the ROI conversation in L&D — a conversation that has been frustrating the function for decades.
The traditional L&D ROI problem is this: learning outcomes are difficult to isolate, behavior change is slow and multifactorial, and the business is often unwilling to wait for the rigorous measurement that would produce defensible numbers. The result is that L&D teams either produce ROI claims that are not credible, or abandon the conversation and argue for value in terms that the business does not find compelling.
The Production Economist reframes the conversation before it starts. Instead of asking “what was the ROI of this content?” — a question asked after the fact about content that has already been made — they ask “what outcome would justify this investment?” — a question asked before production begins that commits the business to a specific, measurable definition of success.
This question does two things. It creates alignment on what the content is actually supposed to produce, which improves the design process by giving it a concrete behavioral target. And it creates shared accountability for the outcome, because the business leader who agreed that a specific behavior change would justify the investment is now a co-owner of achieving it. The measurement conversation becomes straightforward because the success criterion was agreed upon before a single module was built.
Not every project can be tied to a quantifiable business outcome. Compliance training exists to satisfy a regulatory requirement. Culture content exists to reinforce values that are hard to measure. The Production Economist does not require every project to have an ROI story. They require flagship projects to have one, because flagship projects are the ones where the investment is large enough that the business needs a reason to say yes rather than merely acquiesce.
• • •
THE AUTHORITY QUESTION
Everything in this chapter depends on a question that most L&D leaders will recognize immediately: does the Showrunner have the authority to make these decisions?
In most organizations, the answer is no. Budget allocation is driven by stakeholder relationships, seniority politics, and historical precedent. The L&D department’s role is to execute against the requests it receives, not to make independent judgments about which requests are worth fulfilling. The idea that an L&D leader could decline a business unit’s content request on strategic grounds, or that they could argue for concentrating resources on fewer, higher-impact projects over the objections of stakeholders who want something made, is not consistent with how most L&D functions operate.
This is the deepest problem the Showrunner role is designed to solve. The Production Economist is not just a role with a framework for thinking about resource allocation. It is an argument for a different kind of organizational relationship between L&D and the business — one in which the learning function has genuine strategic authority rather than service provider status.
Building that authority is not a quick process. It happens through demonstrated results: flagship projects that produce measurable outcomes, a track record of investment decisions that the business comes to trust, a relationship with senior leadership based on strategic contribution rather than order fulfillment. The Production Economist framework is the tool. The authority to use it is built over time, project by project, outcome by outcome.
But it has to start somewhere. The first time the Showrunner declines a project on strategic grounds and wins the argument, something changes. The business begins to understand that the L&D function has a point of view about what works. The stakeholder who was declined either brings a stronger case next time or learns to think more carefully before submitting a request. The Production Economist’s authority grows incrementally, and the quality of the portfolio improves with it.
The Production Economist is the role that turns an L&D function from a service department into a strategic one. It is also the role that is hardest to build, because it requires the organization to accept a relationship it has not previously had with its learning function. That acceptance has to be earned. The framework in this chapter is how you earn it.
• • •
The question to ask your organization this week: If you could only fund three projects this year, what would they be — and does your current budget allocation reflect that answer?
CHAPTER TEN — SHOWRUNNER ROLE 8 OF 11
Renewal Strategist
What gets a second season, what gets cancelled, and why most L&D content never faces either question
Somewhere in your LMS right now, there is a module that nobody has opened voluntarily in two years.
You probably know which one it is. It was built for a reason that no longer applies — a product that has been updated, a process that has changed, a strategic priority that has shifted. Or it was built for a reason that still applies but was never compelling enough to make anyone seek it out. It sits in the search results. It appears occasionally in recommended learning paths. It gets completed when assigned and forgotten when not.
Now ask yourself: is there a process in your organization for deciding what happens to it?
Not a theoretical process. An actual one, with a named owner, a defined trigger, and a decision framework that results in content being renewed, rebooted, or cancelled. A process that treats your content portfolio the way a network treats its programming slate — as a set of ongoing investments that require regular evaluation and are subject to being ended when they stop delivering value.
In most organizations, there is no such process. Content is created. Content is updated when someone notices it is wrong. Content is archived when someone notices it is embarrassing. Everything else stays. The LMS accumulates. The portfolio grows. The signal-to-noise ratio declines. The learner’s experience of browsing the LMS becomes the experience of searching a warehouse rather than curating a collection.
The Renewal Strategist is the person who runs the renewal conversation. Not as an annual housekeeping exercise but as a genuine editorial function — asking, for every piece of content in the portfolio, whether it deserves to continue existing in the form it currently takes.
• • •
THE ROLE
The Renewal Strategist is responsible for the ongoing evaluation of the content portfolio against a consistent standard: is this content delivering value to the learner and the organization that justifies its continued existence?
The television analogy is exact here. Every season, networks make renewal decisions. A show that is performing — attracting the audience it was designed for, generating the engagement that justifies its budget, building toward something — gets renewed. A show that is underperforming gets cancelled, rebooted, or significantly retooled before its next season. A show that has run its natural course gets a final season and a proper ending. The decision is made by people with authority, using data and judgment, on a regular cycle.
None of this happens by accident or by complaint. It happens because someone’s job is to make it happen. In L&D, nobody’s job is to make it happen. Content exists in a state of permanent default renewal: it stays until someone actively decides to remove it, and the activation energy required to remove it is always higher than the activation energy required to leave it in place. The result is an archive masquerading as a library.
The Renewal Strategist changes the default. Instead of content staying until it’s removed, content continues only as long as it’s justified. The burden of proof shifts from removal to continuation. Every piece of content in the portfolio is on a renewal cycle, and renewal requires a reason.
• • •
THE DATA THE RENEWAL STRATEGIST READS
Renewal decisions require data, but not the data most L&D teams collect. Completion rates tell you whether people finished something. They do not tell you whether it worked. A module with a ninety percent completion rate that changes no behavior is not a successful module. A module with a forty percent completion rate that reliably produces a specific skill improvement in the people who complete it may be exactly the right module for the right audience.
The Renewal Strategist reads four kinds of data, weighted differently depending on the content type.
Consumption data answers the question: are people choosing to engage with this? Not completion of assigned content — voluntary engagement, return visits, sharing behavior, search-driven discovery. Consumption data tells you whether the content has earned an audience or is simply occupying space. Low voluntary consumption of content that is not assigned is a signal that the content is not compelling enough to seek out. High voluntary consumption is a signal that the content is delivering something the learner values enough to return to.
Completion shape answers the question: where do people stop, and what does that mean? A completion curve that drops sharply at a specific point in a module is diagnostic: something at that point is losing the learner. It may be the pacing, the complexity, the relevance, or the format. Completion shape tells you not just whether content is being finished but where it is failing the learner when it does fail. This data is available in most LMS platforms and almost never analyzed.
Behavioral outcome data answers the question: did the content produce the change it was designed to produce? This is the hardest data to collect and the most important. It requires a pre-and-post measurement design, a defined behavioral indicator, and a willingness to attribute outcomes to content only when the attribution is defensible. Most L&D teams avoid this measurement because it is difficult and because the results are sometimes unflattering. The Renewal Strategist requires it for flagship content, because flagship content exists to produce outcomes and renewal decisions for flagship content must be based on whether those outcomes are being produced.
Qualitative signal answers the question: what are people saying about this? Not in formal surveys, which are blunt instruments, but in the conversations that happen around content — in team meetings, in onboarding feedback, in the comments managers make when they recommend something to their teams or warn them away from it. Qualitative signal is difficult to systematize but essential to the Renewal Strategist’s judgment. Numbers tell you what happened. People tell you why.
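Of the four, completion shape rewards even the simplest analysis. A minimal sketch of that arithmetic, with illustrative numbers: given the share of learners still present at each checkpoint of a module, find the segment where the most learners leave.

    def steepest_drop(retention):
        """Return the index of the segment where the most learners leave."""
        drops = [retention[i] - retention[i + 1] for i in range(len(retention) - 1)]
        return drops.index(max(drops))

    # Share of learners still present at each of six checkpoints (illustrative):
    retention = [1.00, 0.94, 0.91, 0.62, 0.58, 0.55]
    i = steepest_drop(retention)
    print(f"Steepest drop: between checkpoints {i} and {i + 1}, "
          f"{retention[i] - retention[i + 1]:.0%} of learners leave there.")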
• • •
THE RENEWAL DECISION FRAMEWORK
The Renewal Strategist makes four kinds of decisions. Understanding which decision is appropriate for which content requires the combination of data and judgment that defines the role.
Renewal is the decision to continue a piece of content in its current form. It requires evidence that the content is delivering value — consumption data that suggests an active audience, outcome data that suggests behavioral impact, or qualitative signal that suggests the content is being recommended and used beyond its assigned context. Renewal is not the default. It is a decision that requires justification.
Reboot is the decision to rebuild a piece of content substantially, retaining the premise but overhauling the execution. A reboot is appropriate when the content’s subject matter remains relevant but its form has become outdated — the production quality is below current standards, the format no longer fits how the audience consumes content, or the instructional approach has been superseded by a more effective one. A reboot is also appropriate when consumption data suggests the content has an audience that is not being well served by the current version. Reboots are resource-intensive and should be reserved for content that has demonstrated enough value to justify the investment.
Retool is a lighter intervention than a reboot — updating specific elements of the content rather than rebuilding it from scratch. A regulatory change that requires updates to compliance content. A product update that makes a section of a training module inaccurate. A casting change because the subject matter expert who delivered the original content has left the organization. Retooling is maintenance. It keeps content functional without investing in content that has not earned a full reboot.
Cancellation is the decision to remove content from the portfolio entirely. It is the most consequential decision the Renewal Strategist makes and the one most organizations have the most difficulty executing. Content represents past investment. Cancelling it feels like admitting that the investment was wasted. It is not. A module that cost fifty thousand dollars to produce and no longer serves its audience is not made more valuable by continuing to exist. It is made more expensive, because the opportunity cost of the space it occupies — in the learner’s search results, in the team’s maintenance burden, in the signal quality of the portfolio overall — compounds over time. The sunk cost is not a reason to keep it. It is a reason to learn from it.
• • •
WHAT IT LOOKS LIKE DONE WELL
A technology company with a rapidly evolving product portfolio had a specific and severe version of the content decay problem: their product training was outdated almost as soon as it was built. The product changed faster than the L&D team could update the content. The LMS contained training for product versions that no longer existed, alongside training for current versions, with no clear signal to the learner about which was which.
The Renewal Strategist they built into the team — a learning operations specialist with a background in content management — solved the problem not with better production speed but with a different content architecture. Every piece of product training was published with a stated lifespan: a date at which it would automatically move to a review queue rather than remain in the active catalog. The review queue triggered a renewal decision: is this content still accurate, still relevant, still worth the space it occupies? If yes, it was republished with a new lifespan. If no, it was cancelled or queued for retooling.
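This is more policy than technology. A minimal illustration, assuming each asset carries a review date; the field names and dates are invented for the example, and the real version lived in the team’s content tooling rather than in a script:

    from datetime import date, timedelta

    catalog = [
        {"title": "Product v4 basics", "review_on": date(2024, 3, 1)},
        {"title": "Escalation playbook", "review_on": date.today() + timedelta(days=90)},
    ]

    def sweep(catalog, today=None):
        """Split the catalog: assets still inside their stated lifespan stay
        active; everything past its review date moves to the renewal queue."""
        today = today or date.today()
        active = [a for a in catalog if a["review_on"] > today]
        queue = [a for a in catalog if a["review_on"] <= today]
        return active, queue

    active, renewal_queue = sweep(catalog)
    print([a["title"] for a in renewal_queue])  # due for a renewal decision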
The mechanism was simple. The cultural shift it required was not. The team had to accept that content they had spent significant effort producing would be deliberately ended on a schedule, regardless of the political relationships attached to it. Subject matter experts whose sections were cancelled had to be thanked rather than apologized to. The stakeholder who had originally requested a training module had to understand that its removal was not a judgment on its original value but a recognition that its current value no longer justified its existence.
Within eighteen months, the active catalog had shrunk by forty percent. Search results became more useful because the signal-to-noise ratio had improved. New hires reported in qualitative feedback that the product training felt current in a way that previous cohorts had not experienced. The Renewal Strategist had not improved the quality of any individual piece of content. She had improved the quality of the portfolio by deciding what shouldn’t be in it.
• • •
WHAT IT LOOKS LIKE DONE BADLY
The most common failure of renewal strategy is treating it as a housekeeping function rather than an editorial one.
Housekeeping removes content that is obviously wrong: out-of-date information, broken links, retired products. It is reactive, triggered by complaints or audits rather than by a proactive evaluation cycle. It catches the most egregious problems and leaves everything else in place. The LMS never gets smaller. It only stops getting more obviously wrong.
Editorial renewal is different. It asks not just whether the content is correct but whether it is earning its place in the portfolio. It is proactive, governed by a regular cycle rather than by complaints. It applies a consistent standard across the entire catalog rather than addressing problems as they surface. And it has a named owner with the authority to make removal decisions without requiring consensus from every stakeholder who touched the content when it was created.
The second failure is the sunk cost fallacy applied to content. The module cost forty thousand dollars to produce. It has been in the LMS for four years. Removing it feels like admitting the forty thousand dollars was wasted. This reasoning is incorrect in every context where it appears and particularly costly in content management. The forty thousand dollars was spent. It is gone regardless of whether the content continues to exist. The decision about whether to keep the content should be made on its current and future value, not its historical cost. The Renewal Strategist needs to be able to make this argument clearly and win it, because they will need to make it regularly.
The third failure is the absence of a cancellation culture. In some organizations, the political cost of removing content is genuinely high. The subject matter expert who delivered it, the executive who sponsored it, the team that built it — all of them have a stake in its continued existence that has nothing to do with its value to learners. The Renewal Strategist navigates these politics without being captured by them. The framework for renewal decisions must be clear enough, and consistently applied enough, that individual cancellations feel like the application of a principle rather than a personal judgment. When cancellation is a policy rather than a decision, the politics are easier to manage.
• • •
THE AI DIMENSION
AI makes the renewal conversation both more urgent and more tractable.
More urgent because AI-accelerated content production will fill portfolios faster than any previous production model. An organization that produces two hundred pieces of content per year without a renewal function accumulates a maintenance problem. An organization that produces two thousand pieces of content per year without a renewal function accumulates a crisis. The portfolio becomes unnavigable. The learner’s experience of the LMS degrades to the point where search is unreliable and recommendation is meaningless. The Renewal Strategist is the function that prevents this, and in an AI production environment their work becomes continuous rather than periodic.
More tractable because AI can assist with the data analysis that renewal decisions require. Pattern recognition across large content portfolios — identifying content with consistently low voluntary engagement, flagging content whose subject matter has been superseded by more recent material, surfacing content whose completion shape suggests a specific failure point — is exactly the kind of task where AI tools add genuine value. The Renewal Strategist does not need to manually audit every piece of content in a portfolio of thousands. They need to design the analytical framework and make the editorial judgments that the analysis surfaces. AI handles the former. The Renewal Strategist handles the latter.
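What that assistance looks like can be sketched in a few lines. The thresholds below are purely illustrative assumptions; the point is that flagging is cheap analysis, and the renewal decision that follows is still editorial judgment:

    def flag_for_review(assets):
        """Surface renewal candidates from cheap signals. Flagging is analysis;
        the decision about each flagged asset is judgment."""
        return [a for a in assets
                if a["voluntary_opens_90d"] < 5        # no earned audience
                or a["months_since_update"] > 18       # likely superseded
                or a["steepest_drop_share"] > 0.25]    # a known failure point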
• • •
The question to ask your organization this week: When did you last cancel a piece of content — not archive it, not retire it quietly, but make a deliberate decision that it no longer deserved to exist in your portfolio and act on that decision?
CHAPTER ELEVEN — SHOWRUNNER ROLE 9 OF 11
Compliance Translator
Making the required watchable — finding the story inside the regulation, the character inside the policy
Every employee in the organization has the same relationship with compliance training.
They know it’s coming. They know approximately what it will say. They know how long it will take. They know that completing it matters in the narrow sense that their manager will be notified, that their record will show green, that the HR system will stop sending reminders. They also know, without having to articulate it, that completing it is entirely different from having learned anything from it.
They are right about this. Most compliance training is not designed to be learned from. It is designed to be completed. The distinction sounds subtle. It isn’t. Content designed for completion optimizes for the least friction between the opening screen and the certificate of completion. Content designed for learning optimizes for something harder to measure: the moment when the person watching understands, in a way they didn’t before, why the thing the regulation is trying to prevent actually matters.
That moment is available in every piece of compliance content ever built. It is almost never designed for.
Here is the opportunity the Compliance Translator exists to find: compliance training arrives with a captive audience, guaranteed. Every assigned module will be opened. Every knowledge check will be attempted. No other content in the L&D portfolio has this advantage. The compliance training has the most guaranteed viewership of anything the department produces, and most organizations spend that viewership on the most forgettable content they make.
The Compliance Translator is the person who refuses to waste it.
• • •
THE ROLE
The Compliance Translator is responsible for converting regulatory and policy requirements into learning experiences that the employee encounters as meaningful rather than mandatory. Not by softening the requirement or obscuring its seriousness — compliance exists for real reasons and those reasons deserve to be communicated honestly. But by finding the human story inside the regulation, the actual consequence behind the policy, the real situation the rule was designed to prevent, and building the content from that foundation rather than from the regulatory language.
The job is translation in the most precise sense of the word. The regulation is written in the language of legal obligation. The employee lives in the language of daily experience. The Compliance Translator moves the content from one language to the other without losing the meaning that makes compliance worth taking seriously in the first place.
This is not a cosmetic task. It is not about making compliance training look more engaging with better production values or more cheerful graphics. It is about a fundamental rethinking of what compliance content is for. Most compliance content is built around the question: what does the employee need to know in order to pass this assessment? The Compliance Translator builds from a different question: what does the employee need to understand in order to make better decisions when the situation the regulation addresses actually arises?
These questions produce different content. The first produces a module that covers the regulation. The second produces a module that prepares the employee for a moment.
• • •
FINDING THE STORY
Every regulation has an origin. It exists because something happened — because a specific kind of harm occurred, repeatedly, until the harm became a pattern that warranted a rule. The rule is the abstraction. The story is the thing that generated it.
The Compliance Translator finds the story.
A data privacy regulation exists because personal information was mishandled in ways that caused real harm to real people. A workplace safety requirement exists because a specific kind of accident happened often enough that someone counted the injuries and decided the count was unacceptable. A financial conduct rule exists because specific behaviors produced specific losses, for specific people, that could have been prevented. The regulation is the institutional response to accumulated harm. The harm is the reason the regulation deserves to be taken seriously.
Most compliance content presents the rule without the story. It tells the employee what they must and must not do without telling them why the must and must not were necessary. The employee learns the rule. They do not learn the thing that makes the rule worth following when following it is inconvenient.
Content built from the story produces something different. It begins not with the regulation but with the situation: a real or composite scenario that shows, specifically and concretely, the harm the regulation is designed to prevent. The employee sees the consequence before they see the rule. The rule arrives as the institutional response to a problem they have already understood rather than as an abstract obligation they are required to accept. The sequence changes what the employee carries out of the module.
Finding the story requires research that most L&D teams do not build into their compliance production process. It requires conversations with legal and compliance teams about the history of the regulations they’re implementing — not just what the rules require but why the rules exist, what the documented cases of noncompliance look like, what actually happens when an employee makes the wrong decision in the situation the regulation addresses. This information exists. It is almost never used in the content that is ostensibly about communicating it.
• • •
WHAT IT LOOKS LIKE DONE WELL
A financial services firm redesigning its anti-money-laundering training faced the standard compliance challenge: a regulation dense with technical requirements, a legal team with legitimate concerns about anything that could be construed as simplification, and a frontline employee population whose daily work involved exactly the kind of transactions the regulation was designed to monitor.
The Compliance Translator on this project — a learning designer who had previously written long-form journalism — spent three weeks before any content was built in conversation with the firm’s compliance investigation team. She was not gathering facts. She was gathering cases: real instances, appropriately anonymized, of transactions that had been flagged, investigated, and resolved. What she found were not abstract regulatory violations. They were stories of specific decisions made by specific people in specific pressured moments — a relationship manager who had known a client for fifteen years and didn’t want to ask an uncomfortable question, a junior analyst who had deferred to a senior colleague who turned out to be wrong, a branch that had hit its targets for three consecutive quarters and had started treating unusual transactions as a nuisance rather than a signal.
The training was built from those cases. Not from the regulation. The regulation appeared, accurately and completely, but it appeared as the framework that explained what the people in the cases should have done — as the answer to a problem the learner had already seen, not as a requirement they were being handed. The assessment asked the learner to apply the framework to new cases, not to recall regulatory definitions.
Completion rates were unchanged, because they had always been high — the training was mandatory. What changed was qualitative: the compliance team reported a measurable increase in voluntary escalations from frontline staff in the six months following the training rollout. The employees had not just learned the rule. They had internalized the reason for it.
• • •
WHAT IT LOOKS LIKE DONE BADLY
The archetypal failure of compliance content is the regulation read aloud over a slide.
The regulation appears on screen. A narrator reads it. A graphic illustrates it, usually with a stock photo of a person looking thoughtful in an office setting. A knowledge check confirms that the learner can identify which statement most closely paraphrases the regulation they just heard. The learner passes. The completion registers. Nobody believes this is learning. Everybody is pretending it is.
This model persists because compliance content operates under a specific set of pressures that most L&D content does not face. The legal team needs to ensure that the content accurately represents the regulatory requirement — a legitimate concern that often manifests as resistance to any simplification or narrative framing that might introduce ambiguity. The HR team needs to ensure that completion is documented — a legitimate concern that often manifests as emphasis on assessment design over learning design. The business needs the training completed quickly and with minimal disruption — a legitimate concern that often manifests as pressure to keep modules short regardless of whether short serves the learning objective.
These pressures are real. The Compliance Translator does not ignore them. They work within them — finding the story that satisfies legal accuracy, the format that allows for documentation, the length that respects the employee’s time. The constraints do not prevent good compliance content. They require a more skilled designer than most organizations assign to the task.
The second failure is treating all compliance content as equivalent. Not all regulations carry the same stakes for the same employees. The data privacy training that every employee completes annually is a different design challenge from the specific financial conduct training that a small group of traders completes before they can access certain markets. The Compliance Translator applies the Production Economist’s framework here: which compliance content deserves flagship investment because the consequence of noncompliance is severe and the behavior the content is trying to produce is genuinely difficult, and which compliance content can be produced at lean quality because the requirement is primarily documentary rather than behavioral?
The third failure is the compliance training that teaches the rule but not the judgment. Regulations are written for general situations. Real situations are specific. An employee who has learned the rule in the abstract has not necessarily developed the capacity to apply it correctly in the ambiguous, pressured, specific situation they will actually face. The Compliance Translator designs for the ambiguous situation, not the clear one — the moment of genuine uncertainty, when the rule applies but its application is not obvious and the employee’s judgment is the only thing standing between a good outcome and a bad one. The clear situations don’t require training. The ambiguous ones do.
• • •
THE AI DIMENSION
AI has a specific and underappreciated weakness in compliance content production: it is trained to avoid liability rather than to produce learning.
When asked to produce compliance training, AI tools tend toward the regulatory language they have been trained on — accurate, complete, appropriately hedged, and entirely inert as a learning experience. They produce content that looks like compliance training because they have processed a great deal of compliance training. What they have processed is mostly bad. The default output reflects the default quality of the genre.
The Compliance Translator’s prompting work is particularly important here. Getting useful compliance content from an AI tool requires inputs that are not typically in the production brief: the story behind the regulation, the specific scenarios that represent the ambiguous situations the regulation is designed to address, the voice in which the organization talks about risk and responsibility, the judgment calls that the employee will actually face. Feeding these inputs to the tool moves the output from regulatory recitation to something closer to a learning experience. The Compliance Translator is the person who knows what inputs are needed and how to articulate them.
There is also a legal review consideration that makes AI-generated compliance content structurally different from other content types. Legal teams are, appropriately, cautious about compliance content that has been generated rather than authored. The Compliance Translator navigates this by ensuring that the review process for AI-generated compliance content is as rigorous as the review process for human-authored content — not more rigorous, not less, but equivalent. AI authorship is a production method. The legal accuracy of the output is the standard by which compliance content is evaluated regardless of how it was produced.
• • •
THE COMPLIANCE TRANSLATOR’S RELATIONSHIP WITH LEGAL
The most important non-L&D relationship the Compliance Translator manages is with the legal and compliance teams who own the regulatory requirements the content is built to address.
This relationship is often adversarial by default. Legal teams are trained to see risk in simplification, narrative framing, and anything that could be construed as stating a regulatory requirement less than completely. L&D teams are trained to see risk in complexity, jargon, and content that prioritizes accuracy over comprehension. Both concerns are legitimate. The conflict is structural rather than personal, and the Compliance Translator’s job is to resolve it.
The resolution comes from reframing the legal team’s concern. The question is not whether the content is accurate. The question is whether the content produces the behavior the regulation is designed to require. Accurate content that the employee does not understand, does not remember, and cannot apply in a real situation is not serving the regulatory purpose. It is serving the documentation purpose — which is real but insufficient. The Compliance Translator makes this argument to the legal team not as a creative versus compliance debate but as a risk management one: content that produces behavior change reduces regulatory risk more effectively than content that produces completion records.
Legal teams, presented with this argument by someone who clearly understands the regulatory requirement and is not asking them to compromise accuracy, are often more receptive than L&D teams expect. The adversarial default dissolves when the Compliance Translator can demonstrate that their approach produces more compliance, not less, than the regulatory recitation model. That demonstration requires data. Building it is worth the investment.
• • •
The question to ask your organization this week: Take your most-completed and least-loved compliance module. Do you know the story behind the regulation it covers — the specific harm it was designed to prevent? If not, that’s where the redesign starts.
CHAPTER TWELVE — SHOWRUNNER ROLE 10 OF 11
Distribution Strategist
Where and how content reaches people matters as much as what’s in it — and almost nobody is deciding this on purpose
The module is finished.
It has been through design, production, review, legal approval, and quality assurance. The team is proud of it. It represents the best work they’ve done on a subject that genuinely matters to the organization. The instructional design is sound, the production quality is high, the casting is right, the ending creates the forward momentum that the Cliffhanger Engineer built into it.
It gets published to the LMS.
A launch email goes out to all employees. The subject line reads: “New Learning Available: [Module Title].” The open rate is fourteen percent. Of those who open the email, sixty-one percent click through to the module. Of those who click through, forty-four percent complete it. The content team celebrates a forty-four percent completion rate on a voluntary module, because forty-four percent is better than their historical average.
Nobody asks what happened to the other eighty-six percent who never opened the email. Nobody runs the multiplication: fourteen percent, times sixty-one, times forty-four, comes to fewer than four completions for every hundred people the module was meant to reach. Nobody asks whether the LMS was the right channel for this content in the first place. Nobody asks whether a different distribution strategy might have produced a different result from the same piece of content.
That is the question the Distribution Strategist exists to answer, before the module is built rather than after it ships.
• • •
THE ROLE
The Distribution Strategist is responsible for the decisions about where, when, how, and to whom content is delivered — treating these as creative and strategic choices that are as consequential as any decision made during production.
In the entertainment industry, distribution is a first-order strategic decision. A film released in theaters in December is making a different bet than the same film released on streaming in March. A television series that drops all episodes simultaneously is designed for a different audience behavior than one that releases weekly. A podcast that launches with a six-episode backlog is making a different assumption about listener psychology than one that starts with a single episode. These are not logistical choices. They are creative ones, made by people who understand that the experience of encountering content is shaped as much by when and how it arrives as by what it contains.
L&D treats distribution as logistics. The LMS is where content lives. The launch email is how people find out about it. The assignment is how they are required to engage with it. These defaults are so deeply embedded in how L&D operates that most teams do not experience them as choices. They are infrastructure. The Distribution Strategist reframes them as decisions, and asks whether the default infrastructure is the right answer for this specific piece of content, for this specific audience, at this specific moment.
• • •
THE DISTRIBUTION DECISION FRAMEWORK
The Distribution Strategist works through four decisions for every significant piece of content the department produces. These decisions are made before production begins, not after, because some of them change what gets built.
The first decision is channel. The LMS is one channel. It is not the only one, and it is not always the right one. Content that needs to reach people in the moment of need — a how-to guide that a frontline employee needs while handling a customer complaint, a process checklist that a new hire needs on their first day in a role — is not well served by a channel that requires the learner to navigate away from their work to find it. The moment of need is better served by integrating content into the tools where work happens: the CRM, the communication platform, the workflow system. The Distribution Strategist asks, for every piece of content, whether the LMS is the right channel or the habitual one.
The second decision is timing. Content delivered at the moment of need produces different outcomes than content delivered in advance of need. An employee who completes a negotiation training module six weeks before their first significant negotiation will retain less of it than an employee who completes the same module two days before. The Distribution Strategist designs for the moment when the learning will be applied, not for the moment when it is convenient to deliver. This requires a level of coordination with the business that most L&D departments have not established, but the outcomes justify the investment in building it.
The third decision is push versus pull. Pushed content is delivered to the learner whether they requested it or not: the assignment, the launch email, the mandatory training. Pulled content is sought out by the learner because they need it or want it: the search result, the recommended resource, the reference material a colleague shared. Most L&D content is pushed. Most L&D teams assume that push is necessary because employees will not seek out learning voluntarily. This assumption is wrong often enough to be worth questioning. Content that is genuinely useful at the moment of need gets pulled. Content that arrives as an assignment gets completed and forgotten. The Distribution Strategist asks, for every piece of content, whether it is designed to be pulled and whether the distribution strategy creates the conditions for that.
The fourth decision is the intermediary. Some content reaches its audience most effectively through a human intermediary rather than a digital channel. A manager who introduces a module to their team, explains why it matters, and creates the expectation that they will discuss it afterward produces a different learning outcome than the same module arriving in an employee’s LMS queue. The manager is a distribution channel. So is the team meeting, the onboarding buddy, the peer cohort. The Distribution Strategist considers these human channels alongside digital ones and designs distribution strategies that use both.
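Written down, the four decisions amount to an explicit plan that exists before production does. A minimal sketch; every field value here is illustrative:

    from dataclasses import dataclass

    @dataclass
    class DistributionPlan:
        channel: str       # LMS, workflow tool, team app, QR card at a station...
        timing: str        # "at launch", "two days before first use", "moment of need"
        mode: str          # "push" (assigned) or "pull" (sought out)
        intermediary: str  # "none", "manager-led debrief", "peer cohort"

    default_plan = DistributionPlan("LMS", "at launch", "push", "none")
    deliberate_plan = DistributionPlan(
        channel="team communication app",
        timing="fifteen minutes before shift",
        mode="pull",
        intermediary="manager-led debrief",
    )

The two plans could carry the identical module. They would produce different experiences, which is the point.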
• • •
WHAT IT LOOKS LIKE DONE WELL
A global hospitality company redesigning their guest service training faced a specific distribution problem: their frontline employees worked in environments where accessing an LMS during a shift was impractical, and the modules built for desktop consumption were being completed on personal phones during commutes in ways that produced low retention and lower application.
The Distribution Strategist on this project — a learning operations leader with a background in marketing — started from the employee’s actual work context rather than the content team’s production defaults. She spent two weeks shadowing frontline employees across three properties, mapping when they had attention available for learning content, in what format, in what physical context, and triggered by what need.
What she found was a set of micro-moments that the LMS-first model was missing entirely. The fifteen minutes before a shift when employees arrived early and were often waiting for a briefing. The handover period between shifts when specific procedural questions arose. The moments immediately after a difficult guest interaction when an employee wanted to understand what had gone wrong and what they could have done differently. None of these moments were being served by the existing distribution model, because the existing distribution model had been designed around the LMS rather than around the employee.
The redesigned distribution strategy used four channels. Short video content — two to three minutes — delivered via the team communication app for the pre-shift micro-moment. Quick reference cards accessible via QR code posted at service stations for the moment-of-need use case. A manager-facilitated debrief framework for the post-interaction reflection moment. And the LMS, retained for the longer, more complex content that genuinely required sustained attention and was best completed outside of work hours.
The content itself was largely unchanged. What changed was when and how it arrived. Voluntary engagement with the learning content rose more than three years of production-quality improvements had achieved. The Distribution Strategist had found the audience where they actually were rather than where the LMS assumed they would be.
• • •
WHAT IT LOOKS LIKE DONE BADLY
The default failure of distribution in L&D is the LMS-first assumption: the belief that the learning management system is where learning happens, and that the job of distribution is to get employees into the LMS.
This assumption was defensible when the LMS was the only scalable channel for content delivery. It is not defensible now. The tools employees use for work have become capable of delivering learning content in ways that are contextually appropriate, frictionless, and integrated with the moment of need. The LMS remains a valuable platform for certain content types. It is not the right channel for all of them, and treating it as the default for everything is a distribution failure even when the content is excellent.
The second failure is the launch email as the distribution strategy. The launch email announces. It does not distribute. It reaches the people who open it, among whom the percentage who will engage with the content is consistently lower than L&D teams expect. It does nothing for the people who don’t open it, which in most organizations is the majority. A distribution strategy built around a launch email is a strategy that accepts, implicitly, that most of the intended audience will not engage. The Distribution Strategist does not accept this. They design for the full intended audience, which requires understanding where that audience actually is and building a strategy that reaches them there.
The third failure is treating all content as having the same distribution requirements. A flagship onboarding program that a new hire will engage with intensively over their first thirty days has different distribution requirements than a reference module that an experienced employee might need once every six months. A compliance module that must be completed by a specific date has different distribution requirements than a development resource that an employee should encounter at the right moment in their career. The Distribution Strategist applies differentiated thinking to every significant piece of content, rather than routing everything through the same channel with the same launch approach.
The fourth failure is designing distribution after content is built. Some distribution decisions require changes to the content itself. A module designed for desktop viewing cannot simply be pushed to a mobile channel without redesign. Content built for a self-directed, uninterrupted viewing experience does not work as a series of micro-moments without restructuring. A learning experience designed for individual completion does not translate to a manager-facilitated team discussion without a different design logic. The Distribution Strategist’s decisions must inform production, not follow it — and they need to be made before a single screen is built.
• • •
THE AI DIMENSION
AI creates a distribution opportunity that most L&D teams have not yet built the infrastructure to use: personalized delivery at scale.
The promise of personalized learning has been made for decades and delivered rarely, because true personalization — content that adapts to the specific learner’s context, history, and current need — required levels of data analysis and content variation that were prohibitively expensive. AI reduces those barriers significantly. The Distribution Strategist working with AI tools can design distribution logic that routes different content to different learners based on their role, their progress through a curriculum, their demonstrated knowledge gaps, and their moment in the employee lifecycle, in ways that would have required a team of analysts to execute manually.
The risk of AI-driven personalization is the filter bubble: a learner who only encounters content that confirms what they already know and skips the content that challenges them. The Distribution Strategist designs personalization logic that includes productive challenge alongside relevant reinforcement — that surfaces content the learner needs as well as content they want. This requires a model of learning progression that the AI serves rather than replaces. The Distribution Strategist holds that model. The AI executes it.
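To make that division of labor concrete, here is a minimal sketch of personalization logic in which a human-held progression model governs the routing. Everything in it is an illustrative assumption: the field names, the catalog structure, and the idea of reserving roughly a third of recommendations for challenge content. The ratio and the categories are decisions the Distribution Strategist makes; the code merely executes them.

```python
from dataclasses import dataclass, field

@dataclass
class Learner:
    role: str
    completed: set = field(default_factory=set)   # content ids already finished
    known_gaps: set = field(default_factory=set)  # gaps surfaced by assessment

@dataclass
class Content:
    id: str
    roles: set       # roles this content is relevant to
    teaches: str     # the capability it addresses
    challenge: bool  # stretches beyond the learner's demonstrated knowledge

def recommend(learner: Learner, catalog: list[Content], k: int = 5) -> list[Content]:
    """Route by role and gap, but reserve slots for challenge content so the
    learner does not only see what confirms current knowledge."""
    relevant = [c for c in catalog
                if learner.role in c.roles and c.id not in learner.completed]
    reinforce = [c for c in relevant if c.teaches in learner.known_gaps]
    challenge = [c for c in relevant if c.challenge]
    n_challenge = max(1, k // 3)  # the human-decided challenge ratio
    picks = reinforce[: k - n_challenge]
    picks += [c for c in challenge if c not in picks][:n_challenge]
    return picks[:k]

catalog = [
    Content("coaching-101", {"manager"}, "feedback", challenge=False),
    Content("hard-calls", {"manager"}, "decision-making", challenge=True),
]
alex = Learner(role="manager", known_gaps={"feedback"})
print([c.id for c in recommend(alex, catalog)])  # ['coaching-101', 'hard-calls']
```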
There is also an AI-enabled distribution format that deserves specific attention: the conversational interface. An employee who can ask a learning system a question and receive a contextually appropriate answer — drawn from the organization’s learning content, calibrated to their role and experience level, delivered at the moment of need — is experiencing a distribution model that the LMS-first paradigm was never designed to support. The Distribution Strategist’s framework extends to these interfaces. The question of how content reaches the learner now includes the question of whether content can reach them in the form of a conversation, and whether the organization’s learning content has been structured and tagged in ways that make that possible.
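A conversational interface depends on exactly the structure and tagging that paragraph describes. The sketch below is deliberately naive, keyword matching over a two-item library with invented tags and answers; a production system would pair retrieval with a language model. What it shows is the dependency: without role and topic tags on the organization's own content, there is nothing organization-specific for the conversation to draw on.

```python
# Hypothetical tagged library; every id, tag, and answer is invented.
LIBRARY = [
    {"id": "refunds-101", "roles": {"associate"}, "tags": {"refunds", "policy"},
     "answer": "Refunds under $50 can be approved on the spot and logged in the POS."},
    {"id": "refunds-mgr", "roles": {"manager"}, "tags": {"refunds", "escalation"},
     "answer": "Escalated refunds need same-day review and a logged reason code."},
]

def ask(question: str, role: str) -> str:
    """Answer from the organization's own tagged content, scoped to the asker's role."""
    terms = set(question.lower().replace("?", "").split())
    for item in LIBRARY:
        if role in item["roles"] and terms & item["tags"]:
            return item["answer"]
    return "No organization-specific guidance found; route to a person."

print(ask("How do I handle refunds?", "associate"))
```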
• • •
DISTRIBUTION AS EDITORIAL
The deepest insight of the Distribution Strategist role is that distribution is not separate from editorial. The decision about where and how content reaches the learner is a decision about what the content means.
A piece of content delivered to an employee by their manager, in the context of a team conversation about a specific challenge they are facing, means something different from the same content delivered as an anonymous assignment in an LMS queue. The first says: your organization has thought about your specific situation and found something relevant. The second says: everyone in your role has been assigned this. The content is identical. The meaning is different. The behavior change potential is different.
The Distribution Strategist understands this and designs distribution strategies that are not just logistical plans but editorial ones — that make decisions about meaning, context, and signal alongside decisions about channel, timing, and format. The question is not only how to get the content to the learner. It is what the learner should understand about why this content is coming to them, now, in this form. That understanding is part of the learning experience. The Distribution Strategist designs for it.
• • •
The question to ask your organization this week: Take your most important piece of content currently in the LMS. Is it there because the LMS is the right channel, or because it is the default one — and what would the distribution strategy look like if you designed it for the employee’s actual work context rather than for the system’s convenience?
CHAPTER THIRTEEN — SHOWRUNNER ROLE 11 OF 11
Cultural Continuity Keeper
As organizations grow and AI generates more content, someone has to ask the question no prompt can answer: does this sound like us?
Culture is not a document.
Every organization that has tried to write down its culture — in a values statement, a culture deck, a set of behaviors printed on a card and handed to new hires — has discovered that the document is not the culture. The document is a description of the culture, at its best. At its worst, it is an aspiration wearing the mask of a description: what the organization wishes it were, stated with the confidence of what it actually is.
The culture lives somewhere else. It lives in the specific stories that get told when a new employee asks what it’s like to work here. In the decisions that get made when two values are in tension and one has to give way. In the behavior that is tolerated versus the behavior that is called out. In the way the organization talks about its customers, its competitors, its own failures. In a thousand small signals that accumulate, over time, into something a person can feel before they can articulate it.
Learning content is one of those signals. Every piece of content your L&D department produces is a communication about what this organization is — what it values, how it thinks, what it considers important enough to teach. The content that represents the culture most honestly is not the culture training. It is everything else: the way the onboarding program talks about customers, the way the compliance training frames its relationship with employees, the way the leadership curriculum defines what good leadership looks like in this specific organization rather than in organizations in general.
The Cultural Continuity Keeper is the person who ensures that the accumulated signal of all this content is coherent — that it tells one story about what this organization is, rather than the several different stories that emerge when no one is watching the signal.
• • •
THE ROLE
The Cultural Continuity Keeper is responsible for ensuring that the organization’s learning content reflects its actual culture accurately, consistently, and with enough specificity to be genuinely distinguishable from generic corporate content.
This role overlaps with the Brand Continuity Director in ways worth naming directly. Both roles are concerned with consistency. The Brand Continuity Director is concerned with the voice and identity of the L&D department as a content producer — whether the content sounds like it comes from one place. The Cultural Continuity Keeper is concerned with whether the content accurately represents the organization the L&D department serves — whether it sounds like this company rather than any company.
These are related but distinct concerns. A department can have a consistent voice that says nothing specifically true about its organization. It can produce content that is tonally coherent and culturally generic — that could have been made for any company in the same industry. The Cultural Continuity Keeper catches this failure. Their standard is not just consistency but specificity: does this content teach what it’s like to work here, or does it teach what it’s like to work somewhere?
The role becomes more important as organizations grow, because culture becomes harder to transmit at scale. A fifty-person company transmits culture primarily through direct human contact: the founder’s stories, the team’s shared experiences, the daily proximity that makes cultural expectations legible without anyone having to articulate them. A five-thousand-person company cannot rely on proximity. It relies on systems, and learning content is one of the most important systems for cultural transmission at scale. The Cultural Continuity Keeper is the person who ensures that system is working.
A fifty-person company transmits culture through proximity. A five-thousand-person company transmits it through systems. Learning content is one of the most important of those systems — and the Cultural Continuity Keeper ensures it is working.
• • •
WHAT CULTURE ACTUALLY IS IN CONTENT
Culture shows up in content in places that are easy to miss and difficult to manufacture.
It shows up in the examples. Generic content uses generic examples: a character named Marcus who works in a vague corporate setting facing a recognizable but unspecific challenge. Content that carries cultural specificity uses examples drawn from the actual situations this organization faces, the actual tensions this industry navigates, the actual decisions that people in this role make in this company. The specificity of examples is one of the most reliable markers of whether content is genuinely organizational or merely professional.
It shows up in the tensions acknowledged. Every organization has productive tensions — values that are both real and occasionally in conflict. Speed and quality. Individual performance and team collaboration. Short-term results and long-term relationship. Generic content avoids these tensions because acknowledging them requires taking a position about how this organization navigates them. Cultural content names them and shows, specifically, how this organization resolves them when they arise. This specificity is what makes the content useful to someone actually working here, and it is what no generic content vendor can provide.
It shows up in what the content refuses to say. Every culture has things it doesn’t do — ways it doesn’t talk about customers, behaviors it doesn’t model as acceptable, framings it considers inconsistent with its values. Generic content has none of these refusals because generic content has no values of its own. The Cultural Continuity Keeper knows what the organization refuses and ensures that the content reflects those refusals — that the scenarios don’t model behavior the culture would not endorse, that the framing doesn’t imply values the organization doesn’t hold.
It shows up in the heroes. Every piece of learning content implicitly models the person it is trying to produce: the employee who does the right thing, the manager who handles the difficult situation well, the leader who makes the call that reflects the organization’s values under pressure. The Cultural Continuity Keeper asks: is the hero of this content the kind of person this organization actually celebrates? Is the behavior being modeled the behavior that gets recognized, promoted, and talked about with pride? If not, the content is teaching a version of the culture that does not exist, which is more damaging than teaching nothing at all.
• • •
WHAT IT LOOKS LIKE DONE WELL
A technology company that had grown from three hundred employees to three thousand in four years faced the cultural transmission problem in its most acute form. The culture that had made the company successful — characterized by a specific kind of intellectual honesty, a willingness to surface bad news quickly, and a deeply held belief that the best idea wins regardless of who had it — was not surviving the growth. New employees were joining a company that talked about these values without embodying them, because the systems for transmitting the values had not scaled with the headcount.
The Cultural Continuity Keeper on this project was not an L&D professional. She was the company’s first Head of Culture, a role that had been created specifically because the CEO had recognized that culture transmission was a strategic problem. Her mandate was broad: understand what the culture actually was at its best, identify where it was failing under growth pressure, and build the systems that would transmit it at scale.
Her first project in partnership with the L&D team was an audit of every piece of learning content the company had produced in the previous two years. She read it not as a learning professional but as a culture anthropologist: what did this content say about what the organization valued? What behavior did it model as exemplary? What tensions did it acknowledge or avoid? What was the implicit portrait of the ideal employee that emerged from the accumulated content?
The portrait she found was not the company she was trying to preserve. The content modeled a version of professional behavior that was generic, conflict-averse, and hierarchically deferential — the opposite of the culture that had made the company successful. New employees learning from this content were being taught to be different people than the ones the company needed them to be.
The rebuild took eighteen months. Every major piece of content was rewritten from examples drawn from the company’s actual history: decisions that had been made the hard way, moments where the culture had been tested and had held, stories told with the specificity of people who had been in the room. The heroes were real — named or composite but unmistakably from this company rather than from the generic corporate landscape. The tensions were named and the company’s specific way of navigating them was taught explicitly rather than implied.
The impact was not measurable in a single metric. It was visible in the qualitative experience of new employees, who began reporting in onboarding surveys that the content felt true — that it described a company they recognized from their daily experience rather than an aspirational company they had never encountered. The Cultural Continuity Keeper had not improved the production quality of the content. She had improved its accuracy.
• • •
WHAT IT LOOKS LIKE DONE BADLY
The most common failure of cultural continuity in learning content is the values module.
The values module is the piece of content, usually in onboarding, that teaches the organization’s stated values. It names them, defines them, illustrates them with scenarios, and assesses whether the new employee can identify which value is being demonstrated in a given situation. It is almost always the most generic piece of content in the portfolio, because values stated abstractly are indistinguishable from values stated abstractly by any other organization. Integrity, innovation, collaboration, customer focus — these appear on the values lists of thousands of companies. A values module built from these words without the stories, tensions, and specific behaviors that make the values real in this organization teaches nothing specific about this organization.
The failure is not that values modules exist. It is that they are treated as the primary vehicle for cultural transmission rather than one element in a larger system. Culture is not taught in a module. It is transmitted through the accumulated experience of encountering an organization that behaves consistently with what it says it values. The learning content is part of that experience. Every piece of content that is inconsistent with the stated culture — that models behavior the values would not endorse, that uses language the culture would not use, that resolves tensions in ways the organization would not recognize — is an active counter-transmission. It teaches the new employee that the values are aspirational rather than operational.
The second failure is cultural content that has not been updated to reflect cultural evolution. Organizations change. The culture that was accurate two years ago may be partially or significantly different from the culture that exists today — because of growth, because of leadership change, because of strategic pivots that have shifted what the organization values and how it operates. Content that was culturally accurate when it was built becomes culturally misleading when the organization has moved on. The Cultural Continuity Keeper maintains a connection between the content portfolio and the living culture, flagging content that has become inaccurate and triggering the Renewal Strategist’s evaluation process.
The third failure is confusing aspiration with description. Organizations often build learning content that teaches the culture they want rather than the culture they have. This is understandable — content is a tool for shaping behavior, and there is a legitimate use of content to model aspirational behavior and move the organization toward it. But content that describes an aspirational culture as though it were the current reality loses the trust of employees who know the difference. The Cultural Continuity Keeper distinguishes between content that is descriptive — this is how we work — and content that is aspirational — this is how we are working to work — and ensures both are labeled accurately.
Content that describes an aspirational culture as though it were the current reality loses the trust of employees who know the difference. The Cultural Continuity Keeper ensures that aspiration is labeled as aspiration — not presented as description.
• • •
THE AI DIMENSION
The Cultural Continuity Keeper role is the clearest illustration of what AI cannot do in learning content production, and why the human roles in this book are not at risk of automation.
AI has no access to the specific culture of your organization. It has processed a great deal of content about organizational culture in general, about values and leadership and employee experience, and it can produce content that sounds culturally aware without being culturally specific. This is the central limitation of AI in L&D, applied to its most consequential domain. The content AI produces is accurate about corporate culture in general. It is not accurate about your corporate culture in particular, because your corporate culture in particular is not in its training data.
The Cultural Continuity Keeper is the person who bridges this gap. They provide the specific inputs — the stories, the tensions, the heroes, the refusals — that make AI-generated content culturally specific rather than generically professional. They review AI output against the standard of cultural accuracy, not just factual accuracy. They catch the content that is technically correct and culturally wrong — that models behavior the organization would not endorse, that resolves tensions in ways the organization would not recognize, that describes a company the employees do not work for.
This review function is not a light edit. It is a deep read by someone who knows the culture well enough to notice when the content has drifted from it. The Cultural Continuity Keeper’s knowledge of the organization is the primary input into this review. No tool can substitute for it. The Showrunner role that is most clearly irreplaceable by AI is this one.
• • •
CULTURE AS COMPETITIVE ADVANTAGE
The Cultural Continuity Keeper’s work has a dimension that extends beyond the learning function into the organization’s competitive position.
Organizations with strong, specific, accurately transmitted cultures have an advantage in talent retention that is difficult to replicate. Employees who feel that the organization they work for is genuinely distinctive — that it operates in ways that are specific to it rather than generic to its industry — are more likely to stay, more likely to refer others, and more likely to perform at the level the culture is designed to produce. This advantage compounds over time. A culture that is consistently transmitted becomes more itself with every employee who internalizes it. A culture that is poorly transmitted dilutes with every growth cycle.
Learning content is not the only transmission mechanism, but it is one of the most scalable. The Cultural Continuity Keeper’s work is, in this sense, a direct contribution to the organization’s talent strategy. The CLO who can make this case — who can connect the quality of cultural transmission in learning content to measurable outcomes in retention, engagement, and cultural coherence — has a much stronger position in the budget conversation than the CLO who is arguing for learning quality in isolation.
This is the final argument for the Showrunner role, and it is the most important one. The eleven roles described in Part Two are not a creative framework for making better content. They are a strategic framework for making learning content that serves the organization’s most important goals — capability development, behavioral change, cultural transmission, and talent retention. The Showrunner is the person who holds all of this together. The Cultural Continuity Keeper is the role that makes the strategic argument most visible, because culture is the thing the organization cares about that no content vendor can provide and no AI can generate.
Someone has to hold it. That is the Showrunner’s job.
• • •
The question to ask your organization this week: Take three recent pieces of content and read them as a new employee encountering your organization for the first time. What do they teach about what it’s like to work here — and is that true?
PART THREE: BUILDING THE ROLE
CHAPTER FOURTEEN
The Bible
How to build the foundational document that makes the Showrunner’s decisions scalable — and survives them leaving
The Showrunner’s most important deliverable is not content. It is the document that makes great content possible without requiring the Showrunner to personally touch every piece of it.
This is the bible.
In television, the show bible precedes everything. Before a pilot is shot, before a writers room convenes, before a single scene is blocked, the Showrunner writes the bible. It establishes the world: who lives in it, what the rules are, what the show is about, what it will never do, what it sounds like, what it is building toward. Every subsequent decision — every casting choice, every script, every edit — is made against the bible. When a decision is contested, the bible arbitrates it. When a new writer joins the room, the bible orients them. When the Showrunner is unavailable, the bible speaks for them.
The bible is what allows a show to be made by dozens of people over years without losing coherence. It is the institutional memory of the creative vision, written down specifically enough to be actionable and broadly enough to accommodate the variation that good creative work requires.
Your L&D department needs one. Not a style guide. Not a brand manual. Not a quality checklist. A bible: the document that captures the creative and cultural commitments that govern every piece of content your department produces, written by the Showrunner and owned by the department in a way that survives personnel change, organizational growth, and the relentless pressure toward the generic that every content operation faces.
• • •
WHAT THE BIBLE IS NOT
Before describing what the bible contains, it’s worth being specific about what it is not, because the most common failure of bible-building is producing a document that looks like a bible and functions like a filing cabinet.
The bible is not a brand style guide. Style guides govern visual identity: fonts, colors, logo usage, template layouts. These matter and should exist. They are not the bible. The bible governs the voice, the values, and the creative commitments that live beneath the visual layer. Two pieces of content can use identical fonts and colors while sending completely different messages about what the organization believes. The style guide prevents the first kind of inconsistency. The bible prevents the second.
The bible is not a quality rubric. Quality rubrics are checklists: does the module have a learning objective? Is the assessment aligned to the objective? Does the production meet accessibility standards? These questions matter and should be answered. They are not the bible’s questions. The bible’s questions are not checkable. They require judgment: does this sound like us? Does this respect the learner? Does this serve the argument the curriculum is making? The bible creates the standard against which judgment is applied. The rubric verifies that the minimum has been met.
The bible is not a strategy document. It does not describe the learning function’s goals, priorities, or theory of change. That document exists separately and informs the bible, but the bible is not the place for strategic planning. It is the place for creative and cultural commitments that hold regardless of what the strategy is in any given year. The bible should be as true when the organization is growing aggressively as when it is contracting carefully. It governs the how, not the what.
The bible is not long. The most effective ones are eight to twelve pages. Dense with specificity, short on explanation. Written to be read in twenty minutes and referenced in twenty seconds. If the bible cannot be used in a review meeting to settle a contested decision, it is too abstract. If it takes more than a page to establish the voice, it has not established the voice.
The bible should be readable in twenty minutes and referenceable in twenty seconds. If it cannot be used in a review meeting to settle a contested decision, it is too abstract. If it takes more than a page to establish the voice, it has not established the voice.
• • •
WHAT THE BIBLE CONTAINS
A complete L&D bible has seven components. Each one addresses a question that is currently being answered by default in most organizations — differently on every project, by whoever happens to be making the decision. The bible answers these questions once, on purpose, so that every subsequent decision is made against a consistent standard. A structural sketch of the full document follows the seven components below.
1. The Learner Promise
This is the department’s commitment to its audience, stated in the second person and specific enough to be testable. Not “we respect our learners” — that is an aspiration too vague to govern a decision. Something like: “We never explain something you already know. We tell you why something matters before we tell you what it is. We trust you to apply information without showing you what applying it looks like. We end every experience with something unresolved, because we believe learning continues after the content ends.” These commitments are specific enough that a designer can check a piece of content against them and identify where it violates them. That specificity is what makes the bible useful rather than decorative.
2. The Voice
Voice is the hardest component to write and the most important to get right, because voice is what makes content sound like it comes from a specific place rather than from nowhere in particular. The most effective way to establish voice in a bible is through contrast: write the same sentence twice, once in the organization’s voice and once in the generic corporate voice, and let the difference do the work. “This training will equip you with the tools and frameworks necessary to navigate complex interpersonal dynamics” versus “This is about the conversation you’ve been avoiding. We’re going to help you have it.” The contrast is more instructive than any description of the voice could be. Three to five contrast pairs, chosen to represent the range of content the department produces, establish the voice more precisely than a page of adjectives.
3. The Learner
Who is the person this content is made for? Not a demographic description but a relational one — a statement of what the department assumes about the learner’s intelligence, experience, and context. “We write for someone who is competent at their job and skeptical of training that doesn’t respect that competence. They have seen content that wasted their time. They will know within ninety seconds whether this content is different. We earn their attention; we do not assume it.” This description governs a thousand small decisions: how much context to provide before making a point, whether to explain a term or trust the learner knows it, how to frame a scenario so it feels recognizable rather than constructed.
4. The Refusals
The negative space of the bible is as defining as the positive. Name the things this department’s content will never do. The condescending scenario that demonstrates the obvious. The motivational close that tells the learner they’re equipped without having equipped them. The wall-of-text slide that treats the learner’s time as infinite. The compliance-first framing that positions employees as risks to be managed rather than professionals to be trusted. The refusals should be stated without qualification and referenced without apology. When a piece of content violates a refusal, the bible is clear: this is not what we do.
5. The Cultural Commitments
These are the specific things that are true about this organization’s culture that should be visible in its learning content. Not the stated values — those belong in the strategy document. The operational truths: how this organization resolves the tension between speed and quality, what it means here to treat customers well, what good leadership looks like in this specific context rather than in general. These commitments should be drawn from the organization’s actual history — from the stories that get told, the decisions that get celebrated, the behaviors that get promoted. They cannot be invented. They can only be found.
6. The Production Principles
These are the creative standards that govern production decisions: the minimum quality level below which content will not ship, the circumstances in which production values should be elevated beyond the minimum, the relationship between format and content type, the casting principles that govern who delivers what to whom. The production principles are where the Production Economist’s framework lives in the bible — the rules that differentiate flagship from standard from lean production, stated clearly enough that any designer can apply them without needing to escalate every budget decision to the Showrunner.
7. The Revision Protocol
The bible is a living document. It requires a named owner — the Showrunner — and a defined set of triggers for revision. The triggers are: significant organizational change that affects what the culture is or what the learner’s context looks like; the emergence of new content formats or distribution channels that the current bible does not address; and an annual review regardless of whether either of the above has occurred. The revision protocol should specify who participates in the review, how disagreements are resolved, and how the revised bible is communicated to the team. Without this protocol, the bible ages out of relevance silently — becoming a historical document that people reference less and less until they stop referencing it entirely.
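For readers who think in structures, here is the skeleton of the seven components expressed as a checklist, the structural sketch promised above. The field names mirror the components and the triggers restate the revision protocol; everything else, including the completeness check, is an illustrative assumption rather than a template to adopt.

```python
from dataclasses import dataclass, field

@dataclass
class Bible:
    learner_promise: list[str]              # testable, second-person commitments
    voice_contrasts: list[tuple[str, str]]  # (generic sentence, our sentence) pairs
    learner: str                            # relational description of the audience
    refusals: list[str]                     # what this department will never do
    cultural_commitments: list[str]         # operational truths drawn from history
    production_principles: list[str]        # flagship / standard / lean rules
    revision_owner: str                     # the Showrunner, by name
    revision_triggers: list[str] = field(default_factory=lambda: [
        "significant organizational change",
        "new content formats or distribution channels",
        "annual review, regardless",
    ])

    def is_complete(self) -> bool:
        """Every component populated and a named owner on record."""
        required = [self.learner_promise, self.voice_contrasts, self.learner,
                    self.refusals, self.cultural_commitments,
                    self.production_principles, self.revision_owner]
        return all(required)
```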
• • •
BUILDING THE BIBLE: THE PROCESS
The bible is not built in a workshop. It is not the output of a facilitated session with the L&D team and selected stakeholders. Those sessions produce documents that reflect the group’s preferences and the facilitator’s framing. The bible reflects the Showrunner’s creative vision for the department, informed by those conversations but authored by a single voice.
The Showrunner builds the bible in three phases.
The first phase is listening. Before writing a word of the bible, the Showrunner spends time with the organization’s content — reading, watching, experiencing the accumulated body of work the department has produced. They are not evaluating quality. They are listening for signal: what does this content say about what this organization believes? What patterns are visible? What inconsistencies? What is being said implicitly that nobody has said explicitly? They are also listening to the people who make the content and the people who consume it: the designers, the subject matter experts, the learners. What do they find missing? What do they find inconsistent? What do they wish the content were that it currently isn’t?
The second phase is drafting. The Showrunner writes a first draft of the bible alone. Not a collaborative document, not a template filled in by committee, but a document written in a single voice that takes a clear position on each of the seven components. The draft is specific, opinionated, and probably wrong in some places. It is not designed to be perfect. It is designed to be concrete enough to react to — to surface the disagreements and refinements that will make the final document more accurate than any first draft could be.
The third phase is testing. The draft goes to a small group for reaction: two or three senior designers who will use it most, one or two stakeholders who represent the organization’s cultural voice, and the CLO whose authority will ultimately back the document. The testing is not about consensus. It is about accuracy: does this describe the content we want to make? Does it reflect the organization we actually serve? Where is it wrong in ways that matter? The Showrunner takes this feedback, revises where the feedback reveals a genuine error or gap, holds firm where the feedback reflects a preference the bible is explicitly designed to override. Then they finalize and publish.
The bible is not built by committee. It is authored by a single voice, tested for accuracy, revised where it is genuinely wrong, and published with the authority of the Showrunner behind it. Consensus is not the goal. Clarity is.
• • •
THE BIBLE AS ONBOARDING TOOL
The bible’s most underappreciated function is what it does for new members of the L&D team.
Every time a designer joins the department, they bring their previous experience of what L&D content looks like and how it gets made. That experience is valuable and partial. It reflects the standards and culture of wherever they came from, not the standards and culture of where they are now. Without the bible, the new designer learns the department’s standards gradually, through feedback and revision, over months. With the bible, they learn them in twenty minutes.
This is not a small efficiency gain. It changes the quality of the work that new designers produce from their first project rather than their fifth. It reduces the volume of revision cycles required to bring a new team member up to the department’s standard. And it sends a message to the new designer about what kind of department they have joined: one that has thought carefully enough about its work to write down what it believes, and that takes those beliefs seriously enough to use them in daily decisions.
The bible is also the document that makes the Showrunner role survivable as a long-term organizational function. Showrunners leave. Organizations that have built the Showrunner function around a single person’s presence rather than a documented standard lose the function when they lose the person. The bible is the deposit of the Showrunner’s creative vision into the organization’s institutional memory. It is what allows the next Showrunner to inherit a standard rather than rebuild one from scratch.
Write the bible as though you will not be there to explain it. Write it specifically enough that someone who has never met you can use it to make the decisions you would make. That specificity is the test of whether it is a bible or a placeholder. And it is, ultimately, the difference between a Showrunner who has built something lasting and one who has built something personal.
• • •
A NOTE ON AUTHORITY
The bible only works if the organization treats it as authoritative. A document that designers read and disregard, that stakeholders override, that sits in a shared drive and appears in searches but governs no actual decisions, is not a bible. It is a filing artifact.
Making the bible authoritative requires the Showrunner to use it visibly and consistently. Every time a decision is contested, the Showrunner references the bible. Every time a piece of content fails a standard, the Showrunner names which standard it violated and where that standard lives in the document. Every time a new team member produces work that is inconsistent with the bible, the correction is framed as a matter of the department’s published standard rather than the Showrunner’s personal preference.
This consistency is what builds the bible’s authority over time. The first few times the Showrunner references it, the response may be skepticism: this is a document, not a rule. Over months of consistent application, the document becomes the rule — not because anyone declared it authoritative but because it is consistently treated as such by the person with the creative authority to make it stick.
The CLO’s role in this is to publicly back the bible in the moments that matter. When a senior stakeholder pushes back on a creative decision and the Showrunner references the bible, the CLO’s support of that reference is what determines whether the bible has organizational authority or departmental authority. Departmental authority is useful. Organizational authority is transformative. The CLO who builds a Showrunner function and backs its bible is building something that will outlast both of them.
CHAPTER FIFTEEN
The Hire
What a Showrunner actually looks like inside a corporate L&D department, where they come from, and how to make the case for them
The wrong way to hire a Showrunner is to post a job description for a Senior Instructional Designer and hope.
The second wrong way is to promote the most talented instructional designer on your team, give them a new title, and expect the role to take shape around their existing skills. Instructional designers are essential. The Showrunner role is different from instructional design in ways that matter, and promoting into it without being clear about those differences sets up a talented person to struggle in a role they were not hired to do.
The third wrong way is to look externally for someone with a resume that lists “Showrunner” under experience. That person does not exist yet in L&D. You are building something new. You are looking for someone whose experience in another field has produced the capabilities the role requires, and who is ready to apply those capabilities in a context they have not worked in before.
This chapter is about finding that person.
• • •
WHAT YOU ARE ACTUALLY LOOKING FOR
The Showrunner is not a role that requires deep expertise in instructional design methodology. It requires something different and rarer: the ability to hold a creative vision across a large body of work, to make editorial judgments that serve an audience rather than a stakeholder, and to exercise authority over creative decisions in an organizational environment that is not naturally designed to accommodate that kind of authority.
These capabilities come from specific backgrounds. They are not evenly distributed across the talent pool that typically applies for L&D leadership roles. Knowing where they come from is the first step in finding them.
Journalism produces Showrunners because journalism trains people to hold a reader’s experience at the center of every decision. A journalist who has spent years in an editorial environment — writing, editing, pitching, killing stories, maintaining a publication’s voice across dozens of contributors — has developed most of the capabilities the Showrunner role requires. They know how to edit without rewriting. They know how to maintain a voice across contributors who have different styles. They know how to kill something that is technically correct but doesn’t serve the reader. They are often shocked by how rarely these standards are applied in corporate learning content. That shock is productive.
Film and television production produces Showrunners because it trains people to think in systems: how does this scene serve this episode, how does this episode serve this season, how does this season serve the show’s larger argument? A producer or writer who has worked in long-form television or documentary has developed the arc-thinking that the Audience Architect and Cliffhanger Engineer roles require. They have also developed a practical understanding of production economics — where to spend, where to save, how to make creative decisions that are financially responsible without being creatively compromised.
Brand and creative direction produces Showrunners because it trains people to hold an identity across a large and varied body of work. A creative director who has maintained a brand’s voice across years of campaigns, channels, and team turnover has developed the Brand Continuity Director and Cultural Continuity Keeper capabilities in their most practical form. They understand that consistency is not sameness, that a strong identity accommodates variation without losing coherence, and that the most important creative decisions are often the ones that say no.
Publishing — books, magazines, newsletters — produces Showrunners because it trains people in the editorial relationship between content and audience at a level of sophistication that most other fields don’t reach. An editor who has developed authors, maintained a publication’s voice across contributors, and made the call on what gets published and what doesn’t has the Taste Arbiter and Writers Room Facilitator capabilities in their most developed form.
What these backgrounds share is not a specific skill but a specific orientation: they have all been trained to make content with a clearly defined audience in mind and to hold a standard for whether the content serves that audience. That orientation, applied to L&D, is the Showrunner.
The Showrunner is not someone who has worked in L&D for twenty years. They are someone who has spent their career making content for audiences — and is ready to bring that discipline to the most underleveraged content operation in most organizations.
• • •
THE INTERVIEW
The interview for a Showrunner role should not look like a standard L&D leadership interview. The standard interview tests familiarity with learning theory, experience with LMS platforms, knowledge of instructional design methodology, and ability to manage stakeholder relationships. These are useful things to know about a candidate. They are not the things that will tell you whether this person can run the show.
The interview questions that matter are about judgment, taste, and creative authority.
Ask them to describe a piece of content they’ve encountered — in any medium, not necessarily L&D — that they consider genuinely excellent, and explain specifically what makes it excellent. Not what it covered or how it was produced. What it did to the person experiencing it, and how the creative decisions that produced that effect were made. The candidate who can answer this question specifically and with conviction has developed aesthetic judgment. The candidate who gives a vague, hedging answer about “engagement” and “best practices” has not.
Ask them to describe a piece of content they’ve been involved in making that they consider a failure, and explain specifically why it failed. Not a production failure — a creative one. A piece of content that was competently made and did not work, and why. The candidate who can answer this question honestly and analytically has the self-awareness that the Taste Arbiter role requires. The candidate who cannot identify a failure, or who attributes failures exclusively to external factors, does not.
Ask them to review a piece of your organization’s existing learning content before the interview and give you their honest assessment of it. Not a diplomatic assessment. A specific, critical one: what works, what doesn’t, what they would change and why. The candidate who gives you a useful, specific critique of your own content has demonstrated the core Showrunner capability. The candidate who hedges, compliments, or speaks only in generalities has shown you they will not have the creative authority the role requires.
Ask them what they would refuse to make. Every strong creative professional has things they won’t do — content types or approaches that they consider beneath the standard they hold. The candidate who can answer this question clearly and without apology has a creative identity. The candidate who says they’re flexible about anything is telling you they have no standard to hold.
• • •
THE INTERNAL CANDIDATE
Before looking outside, look inside. The Showrunner you need may already be in your department.
They are not necessarily the most senior person. They are the person who, when you watch them in a review meeting, is asking the questions nobody else is asking. Not “does this meet the learning objective?” but “would I want to watch this?” Not “did we cover everything?” but “what does the learner leave wanting?” Not “does this satisfy the stakeholder?” but “does this serve the audience?”
They are the person who rewrites scripts they didn’t write because they can’t help themselves. Who raises the same kinds of concerns in review meetings consistently enough that the team knows what they’re going to say before they say it. Who has opinions about the work that are specific and consistent and occasionally inconvenient. Who the team goes to informally when they want to know if something is good, not because of their title but because their judgment is trusted.
If this person exists in your department, you have a Showrunner who has been doing the job without the title, the authority, or the organizational support the role requires. Formalizing their role is not a promotion in the conventional sense. It is the act of recognizing what they are already doing and giving it the structure to scale.
The conversation when you formalize the role matters. Be direct about what you are asking them to do: not more of what they’ve been doing informally, but the formal exercise of creative authority that will require them to hold positions in conversations with stakeholders who outrank them. This is different from being the person with good taste in the design review. It is being the person whose creative judgment the organization has decided to trust above individual stakeholder preferences. Not everyone who has the taste for this role has the appetite for the authority it requires. The conversation will tell you which kind of person you have.
• • •
MAKING THE ORGANIZATIONAL CASE
The most difficult part of building the Showrunner function is not finding the person. It is making the case for the role to the people who control the budget and the org chart.
The case has two versions. The first is the creative case: the Showrunner role produces better content, more consistently, because it provides creative authority that distributed decision-making cannot. This case is true and insufficient. “Better content” is not a compelling argument in most budget conversations. It is a means to an end, and the end needs to be named.
The second version is the strategic case, and it is the one that wins the argument. The Showrunner function produces three organizational outcomes that the business cares about independently of whether they care about learning quality.
The first is talent retention. Organizations that transmit their culture effectively through learning content retain employees at higher rates than organizations that don’t, because employees who feel the organization they work for is genuinely distinctive are more likely to stay. The Showrunner is the person who makes cultural transmission in learning content intentional rather than accidental. The CLO who can connect the Showrunner role to a reduction in first-year attrition has made a strategic case that the CFO can evaluate.
The second is productivity from learning. The specific behavioral outcomes that learning content is designed to produce — faster ramp time for new hires, better manager effectiveness, higher sales conversion, lower error rates in regulated processes — are more reliably achieved when the content is designed with creative authority at its center. The Production Economist’s framework makes this measurable: flagship content designed against a specific behavioral target, measured against that target after deployment, produces an ROI number that justifies the Showrunner role’s cost. Build this case with one flagship project before asking for the full investment in the role.
The third is AI leverage. This argument is new and increasingly powerful. Organizations that deploy AI content tools without building the Showrunner function will produce volume without quality and accumulate a portfolio that degrades employee trust over time. The Showrunner is the investment that makes AI tools in L&D produce returns rather than liabilities. Framed as AI governance and quality assurance, the Showrunner role is an easier argument to make to a leadership team that is actively thinking about responsible AI deployment than it is as a purely creative investment.
Framed as AI governance, talent retention, and learning ROI, the Showrunner role is a strategic investment. The CLO who makes this case is asking for something the business already wants. They are offering a person to deliver it.
• • •
THE BUDGET CONVERSATION
The Showrunner role costs money. The conversation about that cost needs to happen directly, with honest numbers, before the organizational case is made — because the case needs to be calibrated to the investment it is justifying.
A Showrunner hired from journalism, television, or brand at a meaningful level of seniority will command a salary in the range of $130,000 to $180,000 in most markets, more in competitive talent markets or at the senior end of the experience range. This is not a junior hire. It is a senior creative leader, and the compensation should reflect that. Underpaying for this role produces candidates who have not yet developed the capabilities the role requires, which undermines the function before it has started.
The cost should be presented against the budget it is replacing or redirecting, not as an addition to a fixed envelope. In most L&D departments, the distributed cost of poor creative decision-making — revision cycles, content that gets remade because it didn’t work, vendor relationships that produce generic content at premium prices — is higher than the cost of the Showrunner role. A department that spends $50,000 per year on content revisions that a Showrunner would have prevented in the design process has already budgeted the role. It is paying for it in the wrong way.
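The shape of that comparison is worth making explicit. In the back-of-the-envelope version below, only the salary range and the $50,000 revision figure come from this chapter; the other line items are assumptions included purely to show how the distributed costs accumulate, and every organization will need its own numbers.

```python
showrunner_salary = 150_000  # midpoint of the $130,000 to $180,000 range above

# Distributed cost of poor creative decision-making (illustrative):
preventable_revisions = 50_000  # the figure cited in this chapter
remade_content = 60_000         # assumed: one failed project rebuilt per year
vendor_premium = 45_000         # assumed: generic vendor work at premium prices

distributed_cost = preventable_revisions + remade_content + vendor_premium
print(f"Distributed cost, no Showrunner: ${distributed_cost:,}")   # $155,000
print(f"Showrunner salary (midpoint):    ${showrunner_salary:,}")  # $150,000
```

If the assumed line items are even roughly right, the role pays for itself; if they are not, the exercise of replacing them with real numbers is itself the budget case.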
The production slate framework from Chapter Nine is the tool for this conversation. Present the leadership team with the current portfolio allocation: how many projects, at what budget levels, producing what outcomes. Then present the alternative: fewer projects, concentrated at higher investment levels, with a Showrunner providing the creative authority that makes those investments productive. The comparison is not between having a Showrunner and not having one. It is between a content operation that produces measured outcomes and one that produces volume.
If the budget conversation fails the first time, do not abandon the case. Build it incrementally. Propose a twelve-month pilot: fund the Showrunner role for one year, apply the production slate framework to the two or three most important projects in the portfolio, measure the outcomes against the targets agreed in advance, and bring the results to the next budget conversation. One year of demonstrated outcomes is more persuasive than any argument, and the risk of the pilot is limited to the cost of one senior hire for one year. Most organizations will accept that risk for a strategic function they have been persuaded to care about.
• • •
THE TITLE QUESTION
What do you call this person?
“Showrunner” is not a corporate title. It is a concept. The title the role carries in the org chart will depend on the organization’s conventions, the seniority of the hire, and the scope of the role as it is initially defined.
Some organizations will use “Head of Learning Experience” or “Director of Learning Design,” which are conventional enough to clear HR’s job architecture without requiring an explanation. These titles work if the role description makes the creative authority explicit. A “Director of Learning Design” who has the authority to hold creative standards across the portfolio, to decline projects on strategic grounds, and to maintain the bible as the department’s governing document is a Showrunner regardless of what the title says.
Some organizations will use “Chief Learning Experience Officer” or “VP of Content” at the most senior levels, particularly in organizations where the L&D function has genuine strategic standing. These titles carry organizational authority that lower titles don’t, and for the Showrunner function to operate at full effectiveness, organizational authority matters.
A small number of organizations will use “L&D Showrunner” directly. This is the title that makes the concept visible — that signals, to everyone in the organization who encounters it, that the learning function is being run with a creative philosophy that is different from the service catalog model. It is also the title that requires the most internal explanation and generates the most skepticism in organizations that are not yet ready for what it represents.
The title matters less than the role description and the authority that backs it. A Showrunner with the wrong title and genuine creative authority will build something lasting. A Showrunner with the right title and no authority will accomplish nothing. Title the role in whatever way clears the organizational hurdle most efficiently, then build the authority through the work.
CHAPTER SIXTEEN
The First 90 Days
What the Showrunner does in their first quarter: the audit, the bible, the first cancellations, the first intentional decisions
A new Showrunner arrives with one thing the organization has not had before: a single point of creative accountability for its learning content.
Everything else — the content, the team, the stakeholder relationships, the production processes, the LMS full of accumulated decisions and accumulated debt — was already there. The Showrunner’s job in the first ninety days is not to change all of it. It is to understand it well enough to know what to change first, and to make a small number of visible decisions that establish what the role means in practice.
The temptation in a new role with broad creative authority is to change everything quickly. This is the wrong instinct. The Showrunner who arrives and immediately begins declaring what is wrong with the existing content, restructuring the team’s workflow, and declining stakeholder requests will generate resistance before they have built the credibility to sustain it. Creative authority in an organizational context is not granted by a job description. It is earned through demonstrated judgment, and demonstrated judgment takes time.
The first ninety days are about earning the right to run the show. What follows is a framework for doing that in a sequence that builds trust, establishes the standard, and produces early wins that make the larger transformation possible.
• • •
Days 1–30: The Audit
The Showrunner’s first month is almost entirely listening and observing. No major decisions. No rewrites. No declarations about what needs to change. Just an honest, comprehensive understanding of what already exists.
The content audit is the primary work of the first thirty days. The Showrunner watches or reads every significant piece of content the department has produced in the last two years — not to evaluate it against the standard they will eventually establish, but to understand the standard that has been operating implicitly. What is the voice the department has been using, intentionally or by default? What assumptions does the content make about the learner? What tensions does it resolve and what tensions does it avoid? What is the accumulated portrait of the organization that emerges from the full body of work?
The content audit is documented. Not as a quality assessment — not yet — but as an inventory of the creative decisions the department has been making. This documentation becomes the baseline against which future decisions will be compared. It is also the primary input for the bible, which the Showrunner will begin drafting in the second month.
Alongside the content audit, the Showrunner conducts a stakeholder audit: a series of conversations with the people who commission, review, and receive the department’s content. These conversations are not about gathering requirements. They are about understanding the political landscape of creative decisions — who has been driving them, what standards they have been applying, where the conflicts have been, and what implicit theory the organization has been operating under about what learning content is for. The stakeholder audit tells the Showrunner where the authority for creative decisions currently lives and what it will take to consolidate it.
The team audit runs parallel: conversations with every member of the L&D team about their work, their constraints, their frustrations, and their aspirations. The Showrunner is looking for the person described in Chapter Fifteen — the one who has been doing fragments of the Showrunner’s work without the title or authority. They are also listening for the frustrations that most closely map to the Showrunner gap: the inability to say no to a bad brief, the revision cycle that could have been prevented in design, the module that everyone knew wasn’t right but shipped anyway. These frustrations are the evidence that the role is needed. They are also the relationships the Showrunner will need to build quickly, because the team members who feel these frustrations most acutely will be the Showrunner’s earliest advocates.
The first thirty days are almost entirely listening. No major decisions, no rewrites, no declarations. The Showrunner who arrives and immediately begins declaring what is wrong generates resistance before they have built the credibility to withstand it.
One visible action is appropriate in the first thirty days: the Showrunner attends every production review meeting as an observer. Not a participant — an observer. They watch how decisions get made. Who has the final word. What questions get asked and what questions don’t. What the quality standard is in practice as opposed to on paper. They take notes. They say almost nothing. They are building the map they will need when they start making decisions.
• • •
Days 31–60: The Bible Draft and the First Standard
The second month is when the Showrunner begins to build.
The bible draft starts in week five, drawing directly from the content audit. The Showrunner has now experienced the accumulated body of work and has a clear picture of what the department’s implicit standards have been. The bible makes those standards explicit where they are worth keeping and replaces them where they are not. The drafting process is described in Chapter Fourteen. In the first ninety days, the goal is a working draft — specific enough to use in a review conversation, not yet final enough to publish formally.
While the bible is being drafted, the Showrunner introduces their first visible standard. Not a new process, not a new workflow, not a new form. One specific, nameable quality expectation that they will apply consistently from this point forward to every piece of content they review. The standard should be chosen carefully: specific enough to be checkable, important enough to matter, and connected to a failure mode the team already recognizes from the content audit.
A good first standard might be: every module must end with something unresolved rather than a summary. Or: every scenario must feature a character making a decision the learner would actually face, not a decision that is obviously right or wrong. Or: every piece of content must state why the subject matters before it states what the subject is. The specific standard matters less than the consistency with which it is applied. The team learns what the Showrunner cares about by watching them hold the line on something specific, repeatedly, without exception.
The second month is also when the Showrunner has their first content conversation with the CLO. Not a status update — a creative conversation. The Showrunner shares the content audit findings, not as a criticism of the team but as a description of the gap between the content the department is currently making and the content the organization needs. They share the bible draft and ask for the CLO’s reaction to its most significant positions. They surface the one or two places where the bible will require the CLO’s backing to hold against stakeholder pressure. This conversation is the beginning of the relationship that makes the Showrunner role sustainable. The CLO who understands what the Showrunner is building and why is the CLO who will back the bible when it matters.
• • •
Days 61–90: The First Decisions
The third month is when the Showrunner begins to act with creative authority. By this point they have built enough understanding of the organization, the team, and the stakeholder landscape to make consequential decisions without the risk of catastrophic misjudgment. The decisions they make in this month are chosen deliberately to establish what the role means in practice.
The first cancellation is the most important single action of the first ninety days. The Showrunner identifies one piece of content in the portfolio that no longer deserves to exist — not the most contentious one, not the one with the most powerful stakeholder attached to it, but one that is clearly past its useful life and that the organization will recognize as a reasonable choice to remove. They make the decision, communicate it with the Renewal Strategist’s framework — this content is no longer earning its place in the portfolio, here is why — and execute it without extended negotiation.
The first cancellation does two things. It establishes that the Showrunner’s authority over the portfolio is real and active, not theoretical. And it creates the template for every subsequent cancellation — the communication, the reasoning, the speed of execution that signals this is a managed decision rather than a surprise. The first cancellation is the one the organization watches most closely. Getting it right makes every subsequent one easier.
The first brief rejection is the second consequential action. Somewhere in the pipeline, there is a content request that does not meet the standard the Showrunner is building. Not a bad stakeholder — a bad brief. A request for content that addresses a problem content cannot solve, or that duplicates something that already exists, or that would require production investment that cannot be justified by the likely outcome. The Showrunner declines the request, clearly and with a specific reason, and offers an alternative: a different scope, a different approach, a different solution to the underlying problem that does not require the module as originally requested.
The brief rejection is more difficult than the cancellation because it involves a stakeholder who is present and has expectations. The Showrunner manages this by making the rejection feel like a service rather than a refusal. The message is not “we are not going to make this.” It is “we have looked at what you need and we think there is a better way to get there.” The better way may be simpler, cheaper, and more effective than what the stakeholder originally asked for. If the Showrunner has done the listening work of the first thirty days, they know enough about the stakeholder’s actual problem to propose something genuinely useful. The rejection becomes the beginning of a better brief.
The first brief rejection should feel like a service rather than a refusal. The message is not ‘we won’t make this.’ It is ‘we’ve looked at what you need and there is a better way to get there.’ That reframe is the difference between building trust and burning it.
The first flagship decision is the third action of the third month. The Showrunner identifies one project currently in the pipeline — or proposes one that is not yet in the pipeline — that deserves flagship investment. They make the case for it using the Production Economist’s framework: this is the behavioral outcome, this is the business impact of achieving it, this is the investment required to produce content that will achieve it. They get the CLO’s backing for the elevated investment and begin the production process with a level of creative attention that the department has not previously applied to a single project.
The flagship project in the first ninety days serves multiple purposes. It demonstrates what the Showrunner role produces when it is fully operative — the quality difference that results from having a single point of creative accountability on a project that matters. It gives the team a reference point for what the standard looks like at its highest expression. And it gives the organization something to point to when the Showrunner’s value is questioned, as it will be.
• • •
WHAT NOT TO DO
The first ninety days are as much about restraint as action. Several things that feel like the right move are not.
Do not redesign the team structure. The Showrunner does not yet know enough about the team’s capabilities, relationships, and informal dynamics to make good structural decisions. A reorganization in the first ninety days destroys the trust the Showrunner has been building through listening and makes the team feel managed rather than led. Structural decisions belong in month four or five, after the Showrunner has a clear picture of what the team needs to function well under the new model.
Do not launch the bible publicly before it is ready to be authoritative. A bible draft shared with the team before the Showrunner has the CLO’s backing and before the Showrunner has established enough credibility to defend it will be treated as a suggestion rather than a standard. Wait until the document is final, the CLO has explicitly endorsed it, and the Showrunner is ready to apply it consistently and immediately. A bible that is introduced and then not enforced is worse than no bible, because it establishes that the department’s standards are optional.
Do not try to fix everything at once. The content audit will surface more problems than can be addressed in ninety days, or nine hundred. The Showrunner who tries to address all of them immediately will exhaust the team, dilute their attention, and fail to make meaningful progress on any single front. Prioritize ruthlessly. The first ninety days are about establishing credibility and momentum, not completing the transformation. The transformation takes years. The first ninety days make it possible.
Do not underestimate the politics. The Showrunner role consolidates creative authority that was previously distributed across many people. Some of those people will feel the consolidation as a loss. They will not always say so directly. They will question specific decisions, raise concerns about process, suggest that the new approach is not accounting for important organizational nuances. The Showrunner who recognizes that these challenges are political rather than substantive, and who responds with evidence rather than authority, navigates them. The Showrunner who treats them as obstacles to be overcome will create adversaries they cannot afford.
• • •
WHAT SUCCESS LOOKS LIKE AT DAY 90
At the end of the first ninety days, the Showrunner has not transformed the department. They have established the conditions for transformation.
The content audit is complete and documented. The bible exists in a working draft, has been shared with the CLO, and is in its final revision before publication. One piece of content has been cancelled. One brief has been declined and redirected. One flagship project is in production with elevated investment and creative attention. The team has seen one new standard applied consistently enough to internalize it. The CLO has been in a creative conversation with the Showrunner and has backed one decision publicly.
None of these things are visible to the organization in the way that a major relaunch or a new platform would be. They are the quiet work of establishing authority. They are the foundation on which everything that follows is built.
By day ninety, the team knows what the Showrunner cares about and what they won’t tolerate. The stakeholders have begun to understand that the L&D function has a point of view about what content is worth making. The CLO has a partner in building something lasting rather than a function to manage.
The Showrunner has begun to run the show.
What comes after — the second bible revision, the first full production slate, the first annual renewal cycle, the first cohort of designers who have been hired against the Showrunner’s standard rather than inherited from before it existed — is built on what was established in these first ninety days. The first quarter is not where the transformation happens. It is where the transformation becomes possible.
That is enough. It is, in fact, the whole job in miniature: not doing everything, but doing the right things first, in the right sequence, with enough restraint to let the work build on itself. That is what a Showrunner does. That is what running a show means.
CONCLUSION
The Show Must Go On
Here is what this book has argued.
Corporate learning content has a quality problem that is structural, not accidental. The problem is not budget, talent, tools, or methodology. It is the absence of a single role: a person with the creative authority and organizational standing to be accountable for whether the content is any good. That role exists in every other mature content operation. It does not exist, in most organizations, in L&D.
The role is the Showrunner. And the window for building it, before AI accelerates the content factory to the point where quality becomes impossible to recover, is now.
• • •
The eleven roles described in Part Two are not eleven separate hires. They are eleven dimensions of a single function — the creative and editorial leadership that a content operation requires in order to produce work that serves its audience rather than merely filling its platform. Some organizations will find a single person who holds most of these dimensions naturally. Others will build a small team that distributes them. Most will start with one person, one bible, and one standard, and build from there.
What matters is not the org chart. It is the answer to the question that runs through every chapter of this book: who is responsible for whether the content is any good?
If the answer is “everyone,” the answer is no one.
If the answer is a name, you have a Showrunner.
• • •
The AI argument bears repeating one more time, because it is the argument that makes this conversation urgent rather than merely interesting.
AI has not created the Showrunner gap. The gap existed long before the first content generation tool was released. What AI has done is remove the constraint that kept the gap from becoming catastrophic. When content was expensive and slow to produce, the effort of production was an imperfect filter on quality. Not enough of a filter — the content factory still produced too much content that didn’t work — but enough to prevent the worst outcomes at the worst scale.
That constraint is gone. The organizations building AI-accelerated content operations without Showrunner functions are not building efficient learning departments. They are building very fast content factories with no quality floor, producing material that their employees will recognize immediately as nobody’s best work and treat accordingly.
The organizations that build the Showrunner function first — that establish the taste, the bible, the creative authority, and the production slate before they scale the tools — will compound their advantage over time. Better content builds trust. Trust builds engagement. Engagement builds the behavioral change that is the only thing learning content is ultimately for. This compounding is available. It requires a decision to prioritize quality over volume, and a person accountable for holding that line.
The organizations that build the Showrunner function before they scale the tools will compound their advantage over time. Better content builds trust. Trust builds engagement. Engagement builds the behavioral change that learning content is ultimately for.
• • •
There is one more thing worth saying before the book ends.
The Showrunner role is not primarily about content. It is about respect.
Every piece of content your L&D department produces is a message to the employee who receives it about how the organization thinks about their time, their intelligence, and their capacity to grow. Content that is generic, condescending, or clearly made without care for the person experiencing it sends a message. Content that is specific, honest, and made with real attention to the learner’s experience sends a different one.
The message compounds. An employee who has been through three years of learning content that respected their intelligence has a different relationship with their organization than one who has been through three years of content that treated them as a compliance risk to be managed. The first employee trusts that the organization takes their development seriously. The second has learned that learning, here, is something that happens to them rather than something done for them.
The Showrunner is the person who ensures the message is the right one. Not just in any single piece of content, but across the full body of work, over time, at scale. That is a creative role. It is also a human one.
Someone has to run the show.
If you don’t have that person yet, you know what you’re looking for.