How Eddy — the conversational learning agent embedded in Continu — coaches your partners, customers, channel, franchisees, and employees through learning in dialogue, not in transactions, and where AI in your LMS quietly fails if you don't design around it.
Why a Conversational Learning Agent Matters
Eddy is the conversational learning agent built into Continu. It coaches your partners, customers, channel reps, franchisees, and employees through learning in dialogue — answering questions, walking through decisions, checking understanding, and pointing learners to the right next step in their program.
A conversational learning agent is structurally different from a chatbot or an FAQ. A chatbot answers a single question and ends. An FAQ is static text the learner has to find on their own. A learning agent stays in the conversation: it asks why the learner is stuck, probes understanding, adapts to what the learner says next, and acts inside the learning experience — pointing to the right module, surfacing the right resource, escalating to a human when needed.
For programs that span partner certification, customer onboarding, channel enablement, franchise compliance, and employee development, that difference shows up as scaled coaching capacity. Every learner has a coach available. Every program owner gets time back. Every content gap gets surfaced in the agent's logs instead of dying silently in a learner's frustration.
This guide is about designing Eddy into your programs intentionally — what jobs to give it, how to set it up for the dialogue it's built for, and where AI in your LMS fails if you don't design around it.
What Eddy Actually Is
Eddy is the conversational learning agent built into Continu. It lives inside the learning experience — alongside tracks, journeys, assignments, and content — and engages learners in dialogue grounded in your organization's content.
Strip away the marketing language and Eddy does three jobs.
Converse, don't transact. A partner is mid-track and hits a confusing concept. Instead of leaving the platform to ask a channel manager or search a docs site, they talk to Eddy. Eddy doesn't just spit back a paragraph — it asks what specifically is confusing, clarifies, checks understanding, and stays in the conversation until the learner is unstuck.
Coach, don't just answer. A customer admin asks Eddy how to approach a deployment decision. Eddy doesn't deliver a single "correct" answer — it walks them through the considerations, surfaces the relevant content, and prompts them to think through the choice themselves. Capability built through dialogue, not capability handed over as an answer.
Adapt, don't repeat. A franchise operator asks Eddy the same compliance question twice. Eddy doesn't just replay the same response — it adjusts based on what the learner already saw, drills in further if they're stuck, or points them to a different resource if the first one didn't land.
Eddy does not replace the human program. It replaces the friction — the dead-end "no answer," the slow channel-manager Slack, the help docs that never quite match the question — that used to stop learners from getting unstuck on their own.
The strategic question: what conversations do your learners need to have to succeed in your program, and which of those should Eddy hold instead of a human?
What "Conversational Learning Agent" Means in Practice
The phrase does real work. Each word matters.
Conversational. It's a dialogue, not a transaction. Eddy can ask the learner a question back. Eddy can clarify what the learner meant. Eddy can follow a thread across multiple turns. The interaction shape is "back-and-forth until the learner is unstuck," not "one query, one answer."
Learning. The point is capability, not lookup. Eddy isn't optimizing for the shortest response that satisfies the question — it's optimizing for the learner actually understanding the concept. That sometimes means asking the learner to explain it back. That sometimes means slowing them down before they click "next."
Agent. Eddy can act inside the learning environment, not just answer about it. It can point the learner to a specific module, suggest the right next track, surface a remediation resource, or hand off to a human when the conversation crosses what AI should handle alone.
The combined shape is a learning coach that scales — present for every learner in every program at every moment they need it, drawing on your content as the source of truth.
What the AI Features in Continu Actually Do
Beyond Eddy itself, Continu's AI capabilities show up in three places.
Content authoring assist. When you're creating a quiz, drafting a track description, or summarizing a long-form article, AI can produce a first version. The intent is to accelerate the SME, not to replace the SME. The SME edits, fact-checks, and shapes — but they're not staring at a blank page.
Discovery and recommendations. AI improves the matching between a learner's role, history, and stated needs and the right content in your library. Especially valuable in large libraries where titling and tagging discipline has slipped.
Summaries and translations. Long content gets summarized. Content in one language gets a first-draft translation. The work is faster; the human review is still required.
These tools sit alongside Eddy. The conversational learning agent is the headline. The authoring and discovery features are the supporting cast that makes the agent work — by making your content easier to create, easier to find, and easier for Eddy to draw from.
Where Eddy Has Limits
Designing Eddy in well means designing around what AI does not do.
AI doesn't decide what should be in your program. Eddy can summarize what's there. It can converse about it. It cannot decide which capabilities matter for your partner certification, what the pass mark on your compliance assessment should be, or how your channel program tiers map to enablement requirements. That work stays with the program owner.
AI doesn't replace the SME. A drafted quiz from AI is a starting point, not a finished output. If the question is wrong about the product, the AI confidently delivers a wrong question. If the explanation is subtly off, the AI confidently delivers a subtly-off explanation. The SME's review is what makes the artifact trustworthy.
A conversational learning agent is only as good as the content it's grounded in. Eddy converses from your content. If your content is out of date, Eddy's conversation is out of date. If your content contradicts itself, Eddy may pull from either side. Content hygiene is part of running a conversational learning program.
AI doesn't substitute for clear program strategy. If a program lacks a defined job ("verify partner readiness to sell," "onboard customer admins to deployment," "make franchise operators audit-ready"), Eddy cannot generate one. The agent scales whatever strategy is already in place — clear strategy or unclear strategy.
The practical framing: Eddy removes friction from execution and adds coaching at scale. The instructional design and program strategy still belong to humans.
Best Practices
Design Eddy's job before you turn it on. What conversations do partners, customers, and channel reps need to have that today get routed to humans, get a slow answer, or never happen at all? Those are Eddy's first jobs. Map them explicitly.
Treat your content library as Eddy's training set. The hygiene practices that always mattered — accurate, current, well-structured content — now matter twice. Stale content makes Eddy's conversations unreliable. Duplicate content makes Eddy ambiguous. Orphaned content gives Eddy nothing to draw on.
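The hygiene audit above can be sketched as a small script. This is a minimal illustration over a hypothetical library export — the field names (`title`, `updated`, `assignments`) are assumptions for the sketch, not Continu's actual export schema:

```python
from datetime import date, timedelta
from collections import Counter

STALE_AFTER = timedelta(days=365)  # hypothetical freshness threshold

def audit_library(items, today):
    """Flag stale, duplicate-titled, and orphaned content.

    `items` is a hypothetical export: dicts with 'title', 'updated' (date),
    and 'assignments' (count). Field names are assumptions, not Continu's API.
    """
    titles = Counter(i["title"].strip().lower() for i in items)
    report = {"stale": [], "duplicate": [], "orphaned": []}
    for i in items:
        if today - i["updated"] > STALE_AFTER:
            report["stale"].append(i["title"])          # out of date
        if titles[i["title"].strip().lower()] > 1:
            report["duplicate"].append(i["title"])      # ambiguous for Eddy
        if i["assignments"] == 0:
            report["orphaned"].append(i["title"])       # nothing points here
    return report
```

Running a pass like this before rollout tells you which items will make Eddy's conversations stale, ambiguous, or empty-handed.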
Design for dialogue, not for FAQ. Don't write your content as if Eddy will quote it verbatim. Write content that Eddy can converse from — concepts explained clearly, decision frameworks the agent can walk a learner through, examples the agent can adapt to the learner's situation.
Make AI-assisted authoring a draft starter, not a draft finisher. Use the content authoring assist to produce a first version, then put a human SME between that draft and publication. Never publish AI output unedited; the speed gain is real and so is the accuracy risk.
Pilot with a contained audience. Roll Eddy out to one partner cohort, one customer segment, or one new-hire group first. Watch what they ask. Watch what conversations Eddy handles well and where it stumbles. Tune the content library based on what you learn before you scale.
Set expectations explicitly with learners. Tell partners and customers what Eddy is, what it can converse about, and when to escalate to a human. A learning agent without stated boundaries leads to misuse — learners assuming Eddy can answer commercial questions, refund questions, or sales-process questions it has no business touching.
Watch the unanswered-and-misanswered conversations. The most valuable metric is not "how many conversations did Eddy have?" — it's "what conversations did Eddy not handle well?" Those are content gaps. Those are also the questions your humans were already failing to answer; you just didn't have a log of them before.
Pair Eddy with human escalation paths. Eddy handles the routine coaching. The non-routine should escalate cleanly to a human — partner manager, customer success, internal program owner. Make the escalation path one click, not a dead-end "I don't know."
Trust but verify, especially for compliance content. AI conversations are confidence-inducing by design — they sound authoritative even when wrong. For high-stakes content (compliance, safety, regulatory, security), insist on human-validated content as the source and human-reviewed surfaces as the answer. Eddy can speed up coaching on safety topics; Eddy cannot be the only voice on them.
Let Eddy ask the learner questions, not just answer them. The best conversational learning agents probe — "what part of this is unclear?" "have you tried X first?" "before I tell you, what do you think the answer is?" Design content (and Eddy's prompts) so this kind of pedagogical conversation can happen, not just one-way response.
Anti-Patterns
Treating Eddy like a chatbot. Designing the integration as if Eddy will answer one question and end the conversation. The learner gets a curt response, doesn't get coached, gives up. The conversational learning agent has been reduced to a slow search bar.
The AI hammer. Treating AI as the answer to every problem in the program. Low engagement? Add AI. Confused partners? Add AI. Compliance gaps? Add AI. Eddy is a multiplier, not a remedy. Multiplying a broken program produces a faster broken program.
Letting AI write the program. Asking AI to draft an entire certification — content, assessment, pass mark, retake policy, certificate language — and shipping the result without instructional design review. The program will look polished and be subtly hollow. Partners and customers will sense it.
Skipping content hygiene because Eddy "will sort it out." Eddy is not a librarian. It cannot disambiguate two articles that contradict each other; it can only converse from one of them. It cannot know that the 2022 version of your onboarding guide is obsolete unless you tell it. The cleanup work is still on the human team.
Conversation metrics in isolation. Reporting "Eddy held 1,200 conversations this month" without asking what those conversations were about, whether they ended in resolution, or whether the same topics kept recurring (which would mean a content gap, not an AI win).
Translating with AI and shipping without review. First-draft translations are valuable; first-draft translations are not finished translations. Shipping AI-translated content without a native-speaker review will be visible to your audience — and what they see is that you don't take their language seriously.
Letting Eddy converse about topics it has no source for. Asking Eddy about pricing, contract terms, sales motions, or competitive positioning when those topics aren't in your content library. The agent may produce something conversational. That something may be wrong, and a partner may quote it back to a customer.
Treating "AI is in the product" as a marketing checkbox. Listing Eddy in your program announcement without designing the program around what conversational learning enables. The feature exists; the value doesn't materialize.
In the Continu Architecture
Eddy is woven through every other object — that's the design.
- Content. Eddy converses from your articles, videos, tracks, and journeys. Better content means a better conversational agent. AI authoring assist produces drafts that flow into the same content objects Eddy will later draw on.
- Tracks and Journeys. Eddy can coach inside a track, help a learner unstick mid-journey, and point them at the right next module — adapting the conversation to where they are in the program.
- Smart Segmentation. Eddy's surface and scope can be configured by segment — partner-tier-aware coaching, role-aware conversations, audience-specific guidance.
- Assignments. When a learner is assigned a program, Eddy is available throughout it. The conversational coaching is part of the program experience, not a separate destination.
- Reporting. Eddy's usage data — what's being asked, what conversations resolve, where the gaps are — feeds the same reporting layer that tracks the rest of the program.
Eddy is not a side feature. Designed well, it's a coaching layer that runs through the whole program experience.
External Audience Patterns
Partner enablement. Eddy holds the coaching conversations partners had to have with their channel manager. "Walk me through the discovery questions for this product." "What's the integration scope for a mid-market deployment?" "How do I handle this objection?" Faster coaching, less channel-manager bottleneck, partners staying in flow during the sales motion. Configure Eddy's content scope to partner-appropriate material — keep internal-only content out.
Customer education. Customer admins converse with Eddy about the product they're deploying. The questions that used to land in your support queue now resolve inside the LMS, in dialogue. The customer feels coached, not transacted with. The support team focuses on the conversations Eddy can't handle.
Channel education at scale. Eddy works at any volume. In a 5,000-reseller channel, each reseller can have a personal learning coach for product knowledge without scaling your channel-enablement team to 5,000 humans. Watch the unanswered-conversation log to find the content gaps the volume reveals.
Franchise operations. A franchise operator converses with Eddy about a specific operational situation. Eddy walks them through the relevant SOP, surfaces the compliance section, and points them at the right human contact for escalation. Especially valuable in geographically dispersed franchise networks where coaching can't scale by phone.
Customer onboarding. New customer admins, especially in self-serve or low-touch motions, get an in-product learning coach. The implementation accelerates because they're not waiting for human help. The success team sees deployment readiness without becoming the bottleneck.
Member or community education. Members of an association, certification body, or community converse with Eddy about benefits, courses, requirements. The members feel served by a coach; the staff handles the human work that requires human judgment.
Internal Audience Patterns
New hire ramp. A new hire converses with Eddy about what most new hires need to learn in week two. They get coached in seconds instead of messaging a manager and waiting an hour. Manager time is preserved for the conversations that actually need a manager.
Compliance refreshers. "Walk me through the policy on X again." Eddy doesn't just paste the policy — it conversationally walks the employee through it, checks understanding, points to the right escalation. The HR or compliance team gets a log of which policies generate the most coaching needs — that's the data that should drive next year's training.
Sales enablement. Sales reps converse with Eddy about product, competitive, and process topics during deal cycles. The enablement team's value moves from answering the same question across 30 reps to designing the content Eddy coaches from.
Manager development. Managers in a leadership development program use Eddy as a coaching partner — "how should I think about this 1:1?" "how do I have this performance conversation?" — drawing on the org's leadership content in dialogue.
Known Behaviors and Limits
Eddy converses from your content only by default. This is the design — conversations grounded in your sources, not the open internet. Where your content has a gap, Eddy may flag the gap or may stretch beyond its sources. Audit the conversation logs to know where those gaps are.
AI confidence is constant; AI accuracy is variable. A wrong response and a right response sound equally confident. Plan for review on high-stakes content. Plan for the human escalation path on conversations Eddy mishandles.
Conversational memory has limits. Eddy can hold context within a conversation, but plan for what happens when the learner returns the next day — is the context still there, or are they starting fresh? Design the program experience around either model intentionally.
Content updates take effect at indexing speed, not instantly. When you update an article, Eddy may continue to draw on the older version briefly until the index refreshes. For time-sensitive policy or product changes, plan the publish-to-index window.
Multimedia coverage is partial. AI handles text natively and is still improving on video and images; a transcript-rich content library helps Eddy converse better than a transcript-less one. Captions and transcripts are now agent-readable assets, not just accessibility assets.
Privacy and data scope are configurable but not automatic. Decide what content Eddy can read, what data Eddy logs, and what visibility your admin team has into learner conversations. Make these decisions explicitly during rollout, not after the first incident.
Translation drafts are starts, not finishes. The translation features speed up localization but do not finish it. A native review is still part of any localized program shipping to partners or customers in another language.
Content authoring assist follows your lead; it doesn't lead you. The AI drafts what you prompt it to draft. Vague prompts produce vague drafts. Specific prompts (audience, job, length, tone) produce useful drafts. The skill is now in prompting, not just in writing.
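One way to enforce that specificity is to never free-type a prompt at all — assemble it from the fields above. A minimal sketch; the template and field names are illustrative, not a Continu-specific prompt format:

```python
def authoring_prompt(audience, job, length, tone, artifact):
    """Assemble a specific drafting prompt for an AI authoring assist.

    The field list mirrors the guidance above (audience, job, length, tone);
    the wording of the template is an assumption for this sketch.
    """
    return (
        f"Draft a {length} {artifact} for {audience}. "
        f"Its job: {job}. Tone: {tone}. "
        "Flag any product claims you are unsure of for SME review."
    )

prompt = authoring_prompt(
    audience="new channel resellers",
    job="explain the discovery questions for a first sales call",
    length="300-word",
    tone="direct and practical",
    artifact="explainer article",
)
```

Filling in four fields forces the prompter to make the decisions that separate a useful draft from a vague one.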
Eddy's helpfulness is bounded by your content's organization. A library with clear titles, current content, removed duplicates, and consistent terminology produces a much better conversational learning agent than a sprawling, contradictory library. Content hygiene became more important the day Eddy joined the team.
Where to Go Next
- Content Strategy: Designing Learning Assets That Scale — for the content discipline that makes Eddy work.
- Smart Segmentation: Designing Populations That Maintain Themselves — for scoping Eddy's conversations to the right audiences.
- Assessments: Designing Knowledge Checks That Earn Their Cost — for the human work AI cannot do.
- Tracks and Journeys: Designing Learning Paths — for where Eddy lives inside the learning experience.
- Reporting: Which Report Should I Use? — for tracking Eddy's conversations and content gaps.
Design first. Click second. Use a conversational learning agent to coach at scale, not to replace judgment.