How Eddy and the AI features in Continu help your partners, customers, channel, franchisees, and employees find answers, accelerate creation, and stay unstuck — and where AI quietly fails if you don't design around it.
Why AI in Learning Matters
The conversations about AI in the LMS market tend to go in two directions, both of them wrong.
The first is hype. AI will replace instructional designers. AI will write your courses. AI will know what every learner needs and deliver it perfectly. None of this is true today, and pretending it is sets your program up to disappoint.
The second is dismissal. AI is just autocomplete. AI is a gimmick. The good content is still hand-written. This is also wrong — it ignores the very real, very specific places where AI is already changing how learning programs run.
The truth is in the middle, and it's actionable: AI in Continu is a force multiplier for things that humans have always struggled with at scale. Answering the same question 200 times. Surfacing the right piece of content out of a library nobody can navigate. Drafting a first version of a quiz so the SME has something to react to instead of starting from blank. Coaching a learner through a stuck moment without paging the program owner.
Eddy is Continu's AI surface — the assistant your learners and creators interact with. The features around it (AI-assisted content generation, smart recommendations, AI-powered search) extend that capability into the authoring and discovery experience.
This guide is about using all of that without losing the plot of what your learning program is for.
What Eddy Actually Is
Eddy is an AI assistant embedded in Continu that answers learners' questions using your organization's content as the source of truth.
Strip away the marketing language and Eddy does three jobs.
Answer questions. A partner is mid-track and hits a confusing concept. They ask Eddy. Eddy returns an answer grounded in your content — not the entire internet, not a generic LLM response, but the actual material you've published into Continu.
Surface content. A customer admin is looking for the right reference doc and can't remember its title. They ask Eddy what they need. Eddy points them at the right article, video, or track — even if their search terms don't match the content's literal title.
Coach in context. A learner is stuck on a step in a journey or a concept inside a track. Eddy can clarify, rephrase, or point them to a remediation resource without escalating to a human.
Eddy does not replace the human program. It replaces the friction that used to stop learners from getting unstuck on their own.
The strategic question: what questions do your learners ask most often, and which of those should be answered by Eddy instead of by your program owner, support team, or partner manager?
What the AI Features in Continu Actually Do
Beyond Eddy itself, Continu's AI capabilities show up in three places.
Content authoring assist. When you're creating a quiz, drafting a track description, or summarizing a long-form article, AI can produce a first version. The intent is to accelerate the SME, not to replace the SME. The SME edits, fact-checks, and shapes — but they're not staring at a blank page.
Discovery and recommendations. AI improves the matching between a learner's role, history, and stated needs and the right content in your library. Especially valuable in large libraries where titling and tagging discipline has slipped.
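To make "matching" concrete, here is a toy sketch in Python. It is not Continu's recommendation algorithm, just an illustration of why tagging discipline is the limiting factor: the same relevant content either surfaces or vanishes depending on its metadata.

```python
# Toy illustration (not Continu's recommendation algorithm): matching is
# only as good as the metadata being matched. Consistent tags score;
# sloppy tags make relevant content invisible.

def score(learner_tags: set[str], content_tags: set[str]) -> int:
    """Overlap between what the learner needs and how the content is tagged."""
    return len(learner_tags & content_tags)

learner = {"partner", "sales", "onboarding"}

well_tagged  = {"partner", "onboarding", "product-101"}
badly_tagged = {"misc", "new", "final_v2"}  # same content, sloppy tags

assert score(learner, well_tagged) == 2   # surfaces for this learner
assert score(learner, badly_tagged) == 0  # invisible, despite being relevant
```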
Summaries and translations. Long content gets summarized. Content in one language gets a first-draft translation. The work is faster; the human review is still required.
These are tools. Like every tool, they work brilliantly when designed into a workflow with intent — and produce noise when bolted on as features for their own sake.
What AI Doesn't Do (and Pretending It Does Will Hurt You)
This is the part most AI-in-LMS conversations skip.
AI doesn't know what should be in your program. It can summarize what's there. It can answer questions about it. It cannot decide which capabilities matter for your partner certification, what the pass mark on your compliance assessment should be, or how your channel program tiers map to enablement requirements. That's instructional design. That's still human work.
AI doesn't replace the SME. A drafted quiz from AI is a starting point, not a finished output. If a drafted question gets the product wrong, the AI delivers it confidently anyway. If an explanation is subtly off, the AI delivers it just as confidently. The SME's review is what makes it trustworthy.
AI is only as good as the content it's grounded in. Eddy answers from your content. If your content is out of date, Eddy's answer is out of date. If your content contradicts itself, Eddy may surface either side. The discipline of content hygiene now has a new beneficiary — the AI that's reading it.
AI doesn't fix unclear strategy. If your program doesn't have a clear job ("verify partner readiness to sell," "onboard customer admins to deployment," "make franchise operators audit-ready"), AI will not produce one for you. It will produce noise faster.
The honest framing: AI removes friction from execution. It does not produce strategy. The teams that win with AI in their LMS are the ones that already had clear strategy and used AI to scale it.
Best Practices
Design Eddy's job before you turn it on. What kinds of questions are partners, customers, and channel reps asking that today get routed to humans, get a slow answer, or never get an answer at all? Those are Eddy's first jobs. Map them explicitly.
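One way to make that mapping explicit: write it down as data before you write it into the product. A minimal sketch, with category names that are ours, not Continu's:

```python
# Illustrative only: a routing map a program owner might draft before
# enabling Eddy. None of these names are Continu APIs; the point is to
# make "map them explicitly" concrete.

ROUTING = {
    "product_how_to":      "eddy",   # grounded in published docs: Eddy's job
    "certification_rules": "eddy",   # policy pages exist: Eddy's job
    "pricing_discounts":   "human",  # commercial, not in the content library
    "contract_terms":      "human",  # legal, always escalate
    "deal_support":        "human",  # partner manager's job
}

def route(question_category: str) -> str:
    """Return who should own a question; default to a human when unsure."""
    return ROUTING.get(question_category, "human")

assert route("product_how_to") == "eddy"
assert route("unknown_topic") == "human"  # unmapped questions escalate
```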
Treat your content library as Eddy's training set. The hygiene practices that always mattered — accurate, current, well-structured content — now matter twice. Stale content makes Eddy unreliable. Duplicate content makes Eddy ambiguous. Orphaned content gives Eddy nothing to point at.
Make AI-assisted authoring a draft starter, not a draft finisher. Use the content authoring assist to produce a first version, then put a human SME between that draft and publication. Never publish AI output unedited; the speed gain is real and so is the accuracy risk.
Pilot with a contained audience. Roll Eddy out to one partner cohort, one customer segment, or one new-hire group first. Watch what they ask. Watch where Eddy gets it right and where it gets it wrong. Tune the content library based on what you learn before you scale.
Set expectations explicitly with learners. Tell partners and customers what Eddy is, what it can answer, and when to escalate to a human. AI without stated boundaries leads to misuse — learners assuming Eddy can answer commercial questions, refund questions, sales-process questions it has no business touching.
Watch the unanswered-and-misanswered questions. The most valuable AI metric is not "how many questions did Eddy answer?" — it's "what couldn't Eddy answer well?" Those are content gaps. Those are also the questions your humans were already failing to answer; you just didn't have a log of them before.
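A minimal sketch of that analysis, assuming you can export Eddy's question log to a CSV with question and answered-well columns. The export shape here is an assumption; adapt it to whatever your reporting layer actually provides.

```python
# Gap-finding sketch. The CSV columns ("question", "answered_well") are
# hypothetical, not a documented Continu export format.
import csv
from collections import Counter

def content_gaps(log_path: str, top_n: int = 10) -> list[tuple[str, int]]:
    """Count questions Eddy couldn't answer well; recurring ones are content gaps."""
    misses = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            if row["answered_well"].strip().lower() != "yes":
                misses[row["question"].strip().lower()] += 1
    return misses.most_common(top_n)

for question, count in content_gaps("eddy_log.csv"):
    print(f"{count:>3}x  {question}")
```

The output is a ranked list of what to write next; the same tally, run monthly, tells you whether the gaps are closing.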
Pair AI with human escalation paths. Eddy answers the routine. The non-routine should escalate cleanly to a human — partner manager, customer success, internal program owner. Make the escalation path one click, not a dead-end "I don't know."
Trust but verify, especially for compliance content. AI answers are confidence-inducing by design — they sound authoritative even when wrong. For high-stakes content (compliance, safety, regulatory, security), insist on human-validated content as the source and human-reviewed answers as the surface. AI can speed up answering safety questions; AI cannot be the only answer to them.
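One lightweight way to enforce the review rule, sketched as a hypothetical triage check in your own tooling. The keyword list and record shape are assumptions, not Continu features:

```python
# Hypothetical triage sketch: flag Q&A pairs that touch high-stakes topics
# for mandatory human review before anyone treats the answer as final.

HIGH_STAKES = ("compliance", "safety", "regulatory", "security", "legal")

def needs_human_review(question: str, answer: str) -> bool:
    """Conservative check: any high-stakes keyword in question or answer."""
    text = f"{question} {answer}".lower()
    return any(keyword in text for keyword in HIGH_STAKES)

assert needs_human_review("What is our data security policy?", "...")
assert not needs_human_review("How do I reset a course?", "...")
```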
Anti-Patterns
The AI hammer. Treating AI as the answer to every problem in the program. Low engagement? Add AI. Confused partners? Add AI. Compliance gaps? Add AI. AI is a multiplier, not a remedy. Multiplying a broken program produces a faster broken program.
Letting AI write the program. Asking AI to draft an entire certification — content, assessment, pass mark, retake policy, certificate language — and shipping the result without instructional design review. The program will look polished and be subtly hollow. Partners and customers will sense it.
Skipping content hygiene because Eddy "will sort it out." Eddy is not a librarian. It cannot disambiguate two articles that contradict each other; it can only surface one of them. It cannot know that the 2022 version of your onboarding guide is obsolete unless you tell it. The cleanup work is still on the human team.
Hiding Eddy behind a feature flag and not telling anyone. Turning Eddy on for partners or customers without rollout communication, training, or boundaries. Users who were never told what Eddy is will lump it in with features they don't understand, instead of treating it as a tool with specific strengths and limits.
AI metrics in isolation. Reporting "Eddy answered 1,200 questions this month" without asking what those questions were, whether the answers were correct, or whether the same questions kept recurring (which would mean a content gap, not an AI win).
Translating with AI and shipping without review. First-draft translations are valuable; first-draft translations are not finished translations. Shipping AI-translated content without a native-speaker review will be visible to your audience — and what they see is that you don't take their language seriously.
Letting AI answer questions it has no source for. Asking Eddy about pricing, contract terms, sales motions, or competitive positioning when those topics aren't in your content library. The AI may produce something. That something may be wrong, and a partner may quote it back to a customer.
Treating "AI is in the product" as a marketing checkbox. Listing Eddy in your program announcement without designing the program around what Eddy enables. The feature exists; the value doesn't materialize.
In the Continu Architecture
AI in Continu touches every other object — that's the design.
- Content. Eddy reads from your articles, videos, tracks, and journeys. Better content means a better Eddy. AI authoring assist produces drafts that flow into the same content objects.
- Tracks and Journeys. Eddy can coach inside a track, help a learner unstick mid-journey, and point them at the right next module.
- Smart Segmentation. Eddy's surface and scope can be configured by segment — partner-tier-aware help, role-aware coaching, audience-specific answers (sketched just after this list).
- Assignments. When a learner is assigned a program, Eddy is available throughout it. The AI's helpfulness is part of the program experience, not a separate destination.
- Assessments. AI-assisted question drafting accelerates assessment authoring; the SME still owns the final question, the pass mark, the retake policy.
- Reporting. AI usage data — what's being asked, what's being answered well, what's being missed — feeds the same reporting layer that tracks the rest of the program.
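One way to make those per-segment scoping decisions concrete before touching the product: write them down as data. The shape below is ours, not Continu's configuration format.

```python
# Hypothetical scope worksheet, not a Continu config file. The point is
# that scoping decisions should exist on paper before they exist in the UI.

EDDY_SCOPE = {
    "partners_gold": {
        "can_read": ["partner_enablement", "product_docs"],
        "excluded": ["internal_only", "pricing_playbooks"],
        "escalates_to": "channel_manager",
    },
    "customer_admins": {
        "can_read": ["deployment_guides", "admin_docs"],
        "excluded": ["internal_only", "sales_content"],
        "escalates_to": "customer_success",
    },
}
```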
Eddy is not a side feature. Designed well, it's a thread that runs through the whole program experience.
External Audience Patterns
Partner enablement. Eddy answers the questions partners had to ask their channel manager. "How does the trial work?" "What's the integration scope?" "Where's the latest objection-handling deck?" Faster answers, less channel-manager bottleneck, partners staying in flow during the sales motion. Configure Eddy's content scope to partner-appropriate material — keep internal-only content out.
Customer education. Customer admins ask Eddy about the product they're deploying. The questions that used to land in your support queue now resolve inside the LMS. The customer feels self-sufficient. The support team focuses on the questions Eddy can't answer.
Channel education at scale. Eddy works at any volume. In a 5,000-reseller channel, every rep gets a personal AI assistant for product knowledge without your channel-enablement team scaling to 5,000 humans. Watch the unanswered-question log to find the content gaps the volume reveals.
Franchise operations. A franchise operator asks Eddy how to handle a specific operational situation. Eddy points them at the SOP, summarizes the relevant compliance section, surfaces the right contact for escalation. Especially valuable in geographically dispersed franchise networks where local-question, local-answer can't scale by phone.
Customer onboarding. New customer admins, especially in self-serve or low-touch motions, get an in-product AI tutor. The implementation accelerates. The success team sees deployment readiness without becoming the bottleneck.
Member or community education. Members of an association, certification body, or community ask Eddy about benefits, courses, requirements. The members feel served; the staff handles the human work that requires human judgment.
Internal Audience Patterns
New hire ramp. A new hire asks Eddy what most new hires ask in week two. They get an answer in seconds instead of messaging a manager and waiting an hour. Manager time is preserved for the questions that actually need a manager.
Compliance refreshers. "What's the policy on X again?" Eddy answers from the policy library. The employee gets unblocked. The HR or compliance team gets a log of which policies generate the most questions — that's the data that should drive next year's training.
Sales enablement. Sales reps ask Eddy product, competitive, and process questions during deal cycles. The enablement team's value moves from answering the same question across 30 reps to designing the content Eddy answers from.
Manager development. Managers in a leadership development program use Eddy as a coaching prompt — "what should I ask in this 1:1?" "how do I have this performance conversation?" — drawing on the org's leadership content.
Known Behaviors and Limits
Eddy answers from your content only by default. This is the design — answers grounded in your sources, not the open internet. Where your content has a gap, Eddy may say so, or it may stretch beyond its sources. Audit the gap log to know where the gaps are.
AI confidence is constant; AI accuracy is variable. A wrong answer and a right answer sound equally confident. Plan for review on high-stakes content. Plan for the human escalation path on questions Eddy mishandles.
Content updates take effect at indexing speed, not instantly. When you update an article, Eddy may continue to surface the older version briefly until the index refreshes. For time-sensitive policy or product changes, plan the publish-to-index window.
Multimedia is partially handled. AI handles text natively and is improving on video and images; a library rich in transcripts gives the AI far more to work with than one without them. Captions and transcripts are now AI-readable assets, not just accessibility assets.
Privacy and data scope are configurable but not automatic. Decide what content Eddy can read, what data Eddy logs, and what visibility your admin team has into AI conversations. Make these decisions explicitly during rollout, not after the first incident.
Translation drafts are starts, not finishes. The translation features speed up localization but do not finish it. A native review is still part of any localized program shipping to partners or customers in another language.
Content authoring assist follows your lead, not the other way around. The AI drafts what you prompt it to draft. Vague prompts produce vague drafts. Specific prompts (audience, job, length, tone) produce useful drafts. The skill is now in prompting, not just in writing.
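The contrast is easy to show. Both prompts below are illustrative; the difference in specificity, not the exact wording, is the point.

```python
# Illustrative prompts only. The vague one gives the AI nothing to aim at;
# the specific one names audience, job, tone, and format.

vague_prompt = "Write a quiz about the product."

specific_prompt = """Draft 5 multiple-choice questions for channel reseller reps
who have finished the Product Fundamentals track.
Job: verify they can explain the trial-to-paid flow to a customer.
Tone: plain, no marketing language.
Format: one correct answer and three plausible distractors per question.
Flag anything you are unsure of."""
```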
Eddy's helpfulness is bounded by your content's organization. A library with clear titles, current content, removed duplicates, and consistent terminology produces a much better Eddy than a sprawling, contradictory library. Content hygiene became more important the day AI joined the team.
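A hygiene audit can be mostly mechanical. A minimal sketch, assuming a content-library export with title and last-updated columns; the export format is an assumption, not a documented Continu feature.

```python
# Hypothetical hygiene audit over a content-library CSV export with
# "title" and "last_updated" (YYYY-MM-DD) columns. Flags the two things
# that hurt Eddy most: duplicate titles and stale content.
import csv
from collections import Counter
from datetime import date, timedelta

def audit(export_path: str, stale_after_days: int = 365) -> None:
    titles, stale = Counter(), []
    cutoff = date.today() - timedelta(days=stale_after_days)
    with open(export_path, newline="") as f:
        for row in csv.DictReader(f):
            titles[row["title"].strip().lower()] += 1
            if date.fromisoformat(row["last_updated"]) < cutoff:
                stale.append(row["title"])
    dupes = [title for title, n in titles.items() if n > 1]
    print(f"{len(dupes)} duplicate titles, {len(stale)} stale items")

audit("content_export.csv")
```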
Where to Go Next
- Content Strategy: Designing Learning Assets That Scale — for the content discipline that makes AI work.
- Smart Segmentation: Designing Populations That Maintain Themselves — for scoping Eddy to the right audiences.
- Assessments: Designing Knowledge Checks That Earn Their Cost — for the human work AI cannot do.
- Tracks and Journeys: Designing Learning Paths — for where Eddy lives inside the learning experience.
- Reporting: Which Report Should I Use? — for tracking AI usage and content gaps.
Design first. Click second. Use AI to remove friction, not to replace judgment.