A pilot is a small, deliberate launch to a curated group of users before the full rollout. It exists to surface the failure modes that don’t show up in admin testing — real learner behavior, real edge cases, real notifications hitting real inboxes. This guide covers how to run one well.
When to run a pilot
Run a pilot if any of the following are true:
- This is the first time the org has used Continu (always pilot)
- You’re adding a new content type, integration, or sync method
- You’re changing how users are provisioned (switching from manual to SFTP, adding SCIM, etc.)
- You’re rolling out to a population significantly different from existing users (new region, new business unit, external users)
- You’re launching a high-stakes compliance program
Skip the pilot only when you’re making a small change to an established setup — for example, adding a few articles to an already-running content library, or onboarding a small number of users to an existing flow. Even then, a smaller pilot (one or two users) is rarely a bad idea.
Pilot size — five to ten users
Pick five to ten users for the pilot. That range gives you:
- Enough variety to surface different behaviors (different roles, different locations, different content access)
- Small enough that you can talk to every pilot user personally and act on individual feedback
- Small enough that if something goes wrong, the blast radius is limited to a recoverable group
Fewer than five and you’ll miss patterns. More than ten and you start losing the per-person feedback that makes a pilot useful — at that point it’s a small rollout, not a pilot.
Picking pilot users
The pilot population needs to actually exercise the system. Mix:
Audience variety. If your full rollout will include three departments and two regions, pick at least one user from each. If your audience rules involve job profiles, locations, or custom fields, make sure at least one pilot user lives in each rule scope.
Engagement variety. Pick at least one user you expect to be enthusiastic, and at least one who’ll be skeptical or apathetic. Both signals are valuable. The enthusiastic user will find features to ask about; the skeptical user will tell you what feels broken or annoying.
Technical environment variety. If your org uses different operating systems, browsers, or mobile devices, get coverage across them. The Workshop video link that worked in Chrome on Mac may not work in Edge on Windows.
Admin participation. Include at least one admin or content creator in the pilot population — they’ll experience the system from both sides. Their feedback on the authoring and assignment flow is as valuable as the learner feedback.
Avoid: putting only members of the project team in the pilot. They’re too close to the work. They know what should happen, so they won’t notice when something doesn’t behave the way a fresh learner expects.
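If you are selecting the cohort from a user export, a short script can confirm that the pilot covers every attribute value your audience rules depend on. This is an illustrative sketch only: the attribute names (`department`, `region`) and record shapes are hypothetical placeholders, not actual Continu fields.

```python
# Illustrative sketch: check that a candidate pilot cohort covers every
# attribute value present in the full rollout population.
# Attribute names and data shapes are hypothetical.

def coverage_gaps(cohort, rollout_population, attributes):
    """Return {attribute: missing_values} for values present in the
    full rollout population but absent from the pilot cohort."""
    gaps = {}
    for attr in attributes:
        needed = {u[attr] for u in rollout_population}
        covered = {u[attr] for u in cohort}
        missing = needed - covered
        if missing:
            gaps[attr] = missing
    return gaps

rollout = [
    {"department": "Sales", "region": "EMEA"},
    {"department": "Sales", "region": "NA"},
    {"department": "Support", "region": "NA"},
    {"department": "Engineering", "region": "APAC"},
]
pilot = [
    {"department": "Sales", "region": "NA"},
    {"department": "Support", "region": "NA"},
]

# Flags that Engineering, EMEA, and APAC have no pilot representative.
print(coverage_gaps(pilot, rollout, ["department", "region"]))
```

A non-empty result means the pilot will not exercise every audience rule, which is exactly the gap that tends to surface mid-rollout.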
What to test during the pilot
The pilot is not just “did the users log in.” A complete pilot exercises every system path the full rollout will use. Test:
Provisioning and sync.
- Are pilot users created correctly through your real sync mechanism (SFTP, SCIM, manual upload)?
- Do their attributes (Department, Location, job profile, etc.) match what the source system holds?
- If a pilot user’s profile changes during the pilot — title change, location change — does the change show up in Continu on the next sync cycle?
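The attribute check above can be automated by diffing your source-system export against a user export from Continu. The sketch below is a generic comparison keyed by email; the field names and export shapes are assumptions for illustration, not a documented Continu export format.

```python
# Illustrative sketch: compare user attributes from a source-system
# export (e.g. rows loaded with csv.DictReader) against an LMS user
# export, keyed by email. Field names are hypothetical.

def attribute_mismatches(source_rows, lms_rows, key, fields):
    """Yield (key, field, source_value, lms_value) for every field
    that disagrees between the two exports; a user missing from the
    LMS export is flagged as well."""
    lms_by_key = {row[key]: row for row in lms_rows}
    for row in source_rows:
        lms = lms_by_key.get(row[key])
        if lms is None:
            yield (row[key], "<missing in LMS>", None, None)
            continue
        for f in fields:
            if row[f] != lms[f]:
                yield (row[key], f, row[f], lms[f])

source = [{"email": "ana@example.com", "department": "Sales", "location": "Lisbon"}]
lms = [{"email": "ana@example.com", "department": "Sales", "location": "Porto"}]

for diff in attribute_mismatches(source, lms, "email", ["department", "location"]):
    print(diff)  # ('ana@example.com', 'location', 'Lisbon', 'Porto')
```

Running this once at pilot start and once after a mid-pilot profile change covers both the initial-provisioning and change-propagation checks.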
Authentication.
- Can pilot users log in through your real SSO flow?
- Does the first-login experience work as expected — landing page, navigation, content visibility?
- Can users from each authentication path (internal SSO, external login, etc.) all sign in successfully?
Content visibility.
- Do pilot users see the content they should see — based on Smart Segmentation, direct assignment, and Explore visibility?
- Just as important: do pilot users not see content they shouldn’t see? Sample a few content pieces that are out of scope for the pilot user’s audience and confirm they don’t appear.
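Both visibility checks amount to computing what each user *should* see and comparing it to what they report seeing. The sketch below uses deliberately simplified attribute-match rules; real Smart Segmentation rules are richer, and the content titles and attributes here are hypothetical.

```python
# Illustrative sketch: given simple attribute-match audience rules
# (a simplification; real segmentation rules are richer), compute the
# content a user should see so it can be compared against what they
# actually see. All names are hypothetical.

def visible_content(user, content_rules):
    """Content is visible if every attribute in its rule matches the user."""
    return {
        title for title, rule in content_rules.items()
        if all(user.get(attr) == val for attr, val in rule.items())
    }

rules = {
    "Sales Onboarding": {"department": "Sales"},
    "EMEA Compliance": {"region": "EMEA"},
    "All-Hands Intro": {},  # no constraints: visible to everyone
}
user = {"department": "Sales", "region": "NA"}

# Should include Sales Onboarding and All-Hands Intro, not EMEA Compliance.
print(visible_content(user, rules))
```

The negative case matters as much as the positive one: for each pilot user, spot-check at least one title the function says they should *not* see.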
Assignments and Automations.
- Do Automations fire for the right pilot users at the right time?
- If you have onboarding-day or anniversary triggers, simulate them with at least one pilot user.
Notifications.
- Do email notifications arrive in inboxes (not spam folders)?
- Do in-app notifications appear?
- If you’re integrated with Slack or Teams, do messages route to the right channels?
Workshops, Assessments, and Tracks/Journeys (if in scope).
- Can a pilot user register, attend, and complete a Workshop?
- Can they take and submit an Assessment, and does the result record correctly?
- Can they progress through a Track or Journey, and are completions recorded?
Reporting.
- Pull the standard reports you’d pull for a real cohort. Do they show what you’d expect?
- Pull a compliance report against the pilot cohort. Does the data reconcile against your source of truth?
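Reconciliation is a three-way split: users complete in both systems, users your source of truth says are complete but the report misses, and users the report marks complete unexpectedly. A minimal sketch, assuming hypothetical record shapes for both lists:

```python
# Illustrative sketch: reconcile a completion report against a
# source-of-truth list of users expected to be complete.
# Record shapes are assumptions for illustration.

def reconcile(expected_complete, lms_report):
    """Split users into matched, missing-from-report, and unexpected."""
    expected = set(expected_complete)
    reported = {email for email, status in lms_report if status == "complete"}
    return {
        "matched": expected & reported,
        "missing_in_report": expected - reported,
        "unexpected_in_report": reported - expected,
    }

expected = ["ana@example.com", "bo@example.com"]
report = [("ana@example.com", "complete"), ("cy@example.com", "complete")]

print(reconcile(expected, report))
```

Anything in the last two buckets is a data gap to explain before the review, not after the full rollout.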
How long to run
Two weeks is the typical pilot duration, broken into three phases:
- Days 1–3: Pilot users get access, complete first-day activities (sign in, take any required onboarding content, finish a first Assessment if applicable).
- Days 4–10: Pilot users use the system as part of their normal work. This is the steady-state period.
- Days 11–14: Solicit and gather feedback. Run the pilot review.
Run shorter (one week) only if you’ve piloted before and the cohort is small and tightly observed. Run longer (three to four weeks) only if you have a compliance program with longer completion windows or you want to observe a recurring cycle (weekly Workshop, monthly compliance prompt) twice.
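Because the review meeting should be on calendars before the pilot starts, it helps to derive all the phase dates from the start date up front. A small sketch of the two-week schedule above, with the phase lengths as adjustable assumptions:

```python
# Illustrative sketch: derive the three pilot phases from a start date
# so invites, reminders, and the review meeting can be scheduled up front.
from datetime import date, timedelta

def pilot_phases(start, first_days=3, steady_days=7, feedback_days=4):
    """Return (label, first_day, last_day) for each phase of the pilot."""
    phases = []
    cursor = start
    for label, length in [("first-day activities", first_days),
                          ("steady state", steady_days),
                          ("feedback and review", feedback_days)]:
        phases.append((label, cursor, cursor + timedelta(days=length - 1)))
        cursor += timedelta(days=length)
    return phases

for label, begin, end in pilot_phases(date(2025, 3, 3)):
    print(f"{label}: {begin} to {end}")
```

Stretching a phase length (for a longer compliance window, say) keeps the rest of the schedule consistent automatically.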
Pilot review
At the end of the pilot, run a structured review. Bring:
- The admin team
- A representative from the pilot user cohort (or two — one enthusiastic, one skeptical)
- Your Continu CSM
Review against this checklist:
Sync and provisioning
- All pilot users created with correct attributes
- No duplicate accounts created
- Status changes (mid-pilot) reflected correctly
- Daily sync ran consistently with no file failures
Authentication and access
- All pilot users logged in successfully
- No SSO errors
- Content visibility matches audience rules
- Audience exclusions also held (users didn’t see content they shouldn’t)
Content delivery
- Assignments fired as expected
- Automations triggered correctly
- Workshop / Assessment / Track completion recorded
- Notifications delivered (not in spam)
Feedback from pilot users
- Captured top three friction points
- Captured top three things that worked well
- Identified anything that surprised the user (good or bad)
Reporting and compliance
- Reports show expected data
- Compliance status reconciled against source of truth
- No data gaps or anomalies
The output of the review is a decision: proceed, extend, or roll back.
Proceed, extend, or roll back
After the review, one of three things happens:
Proceed to full rollout
Everything worked. The pilot users are using the system, completion and reporting are accurate, and there are no blocking issues. Plan the full rollout — see the section below.
Extend the pilot
There’s something to fix, but it doesn’t require structural changes. Maybe a notification setting needs adjusting, an Automation needs tuning, or a Smart Segment audience needs refining. Fix the issue, leave the pilot cohort in place, and run another week before the full rollout decision.
Roll back or restart
Something fundamental is wrong — sync isn’t working, audiences are wrong at a structural level, the wrong users are getting the wrong content. Pause the pilot, fix the underlying problem (which may mean revisiting field design, audience rules, or content scoping), and restart with the same cohort once the fix is in.
Rolling back is the right call more often than teams admit. A pilot that surfaced a real problem and got fixed before rollout is a successful pilot, even if it ran twice.
Going from pilot to full rollout
Once the pilot is signed off, the full rollout typically takes one of two shapes:
Big-bang rollout. All remaining users get access on the same day. Best when:
- The compliance program or training initiative needs to start synchronously across the org
- The pilot covered every audience type and behavior — no surprises expected
- You’ve communicated the launch date and need to hit it
Phased rollout. Users get access in batches — by department, by region, by tenure cohort. Best when:
- The audience is large and you want to keep blast radius small
- Different audiences need different communications or onboarding
- You want to capture each batch’s experience before adding the next
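For a phased rollout, the batches usually fall out of a single grouping attribute. A minimal sketch, where the grouping key (`department` here) and the user record shape are assumptions:

```python
# Illustrative sketch: group remaining users into rollout waves by an
# attribute. The grouping key and record shape are hypothetical.
from collections import defaultdict

def rollout_waves(users, key):
    """Return {attribute_value: [users]} batches for a phased rollout."""
    waves = defaultdict(list)
    for user in users:
        waves[user[key]].append(user)
    return dict(waves)

users = [
    {"email": "ana@example.com", "department": "Sales"},
    {"email": "bo@example.com", "department": "Support"},
    {"email": "cy@example.com", "department": "Sales"},
]

for dept, batch in rollout_waves(users, "department").items():
    print(dept, [u["email"] for u in batch])
```

Ordering the waves (smallest first, or least change-averse first) keeps the blast radius small while you capture each batch’s experience.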
With either pattern, the messaging and communications should already be drafted (see the change management article) so the launch isn’t waiting on copywriting.
Common pitfalls
| Pitfall | Symptom | Fix |
|---|---|---|
| Pilot cohort is all project team members | No real-world feedback; “everything looks good” but breaks in rollout | Mix in users outside the project — including skeptics |
| Pilot only tests the happy path | Edge cases surface during full rollout instead of pilot | Deliberately test audience exclusions, status changes, edge cases |
| Pilot users aren’t told it’s a pilot | Confused users; feedback gets mixed with general support tickets | Brief pilot users explicitly: “you’re part of a pilot, here’s how to give feedback, here’s how long it lasts” |
| No formal review at the end | Issues spotted during pilot don’t get addressed before rollout | Schedule the pilot review meeting before the pilot starts |
| Pilot extended indefinitely | Pilot becomes the rollout by accident; never get full coverage | Set the pilot end date in the kickoff plan; commit to the decision at the end |
| Issues found in pilot, but rollout happens anyway because the date is fixed | Known issues hit every user; trust in the platform erodes | If the pilot review says fix-then-extend, fix-then-extend. Moving a launch date once is cheaper than rolling back a full launch. |
Pre-pilot checklist
- Pilot scope defined (which features, which audiences, which use cases)
- 5–10 users selected with audience, engagement, and environment variety
- At least one admin or content creator included in the pilot cohort
- Pilot users briefed: “this is a pilot, lasts [duration], please share feedback through [channel]”
- Sync mechanism tested with the pilot cohort (real path, not manual)
- SSO verified for pilot users
- Audience rules confirmed for pilot users (in scope and out of scope content checked)
- Notifications enabled and verified
- Pilot review meeting scheduled for the end of pilot
- Rollback plan documented in case the pilot surfaces a blocker
See Also
- Provisioning and Sync — for the broader user lifecycle the pilot validates
- HRIS Integration (via SFTP) - Field Guide — for the sync setup the pilot will exercise
- Running Multiple HRIS Sources Into One Continu Instance — if your pilot includes cross-source users