Reporting: Which Report Should I Use?

A guide to answering the business questions that matter — partner readiness, customer adoption, channel certification, compliance posture, program health — using the right report, scoped to the right population, with metrics you can actually trust.


Why Reporting Matters

Reporting is how you know whether the learning is working.

Without reports, every learning program is a leap of faith. Did the partners actually get certified? Did the customer admins finish onboarding? Did the channel reps complete their compliance training? Are the programs you've designed actually changing behavior — or just generating activity?

Reports answer those questions. But only if you ask the right question of the right report.

The most common reporting failure is not technical. It is conceptual. An admin reaches for a report, sees a number, and assumes it answers their question — when in fact the report is answering a different question, scoped to a different population, with a different definition of the metric.

This guide is about closing that gap. Define the business question first. Pick the report that matches it. Understand how the metric is calculated. Then read the result.

That sequence is the difference between reporting that informs decisions and reporting that misleads them.


What Reporting Actually Is

Reporting in Continu is not a separate product. It is a window into the same objects you have already built — Users, Content, Assignments, Workshops, and the state changes that happen across them.

Three types of reports cover most of what learning leaders need:

  • Assignment-state reports read the state machine on assignments — Not Started, In Progress, Completed, Expired, Removed. Most "did people finish this?" questions are answered here.
  • Workshop-attendance reports read the workshop state machine — Registered, Waitlisted, Attended, No-Show, Cancelled. Most "who showed up?" questions are answered here.
  • Engagement reports read activity — logins, content views, time spent, search behavior. Most "is the platform actually being used?" questions are answered here.

If your business question maps to one of these three types, you are close to the right report. If it doesn't, you may need to combine reports — or rethink what you're actually trying to measure.
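
If it helps to see the two state machines side by side, here is a minimal sketch in Python. The state names come from the lists above; the class names are illustrative, not part of Continu's API.

    from enum import Enum

    # Illustrative only: the states assignment-state reports read.
    class AssignmentState(Enum):
        NOT_STARTED = "Not Started"
        IN_PROGRESS = "In Progress"
        COMPLETED = "Completed"
        EXPIRED = "Expired"
        REMOVED = "Removed"

    # Illustrative only: the states workshop-attendance reports read.
    class WorkshopState(Enum):
        REGISTERED = "Registered"
        WAITLISTED = "Waitlisted"
        ATTENDED = "Attended"
        NO_SHOW = "No-Show"
        CANCELLED = "Cancelled"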


The Three Questions Reporting Answers

At the highest level, every reporting need maps to one of three questions:

  1. Are people getting it? Are the right learners receiving the right assignments at the right time? This is a delivery question. Reports here measure assignment creation, scope, and audience accuracy.
  2. Are people doing it? Once assigned, are learners completing the work? This is an engagement and completion question. Reports here measure state progression, completion rates, attendance, and time-to-completion.
  3. Did it work? Did the learning produce the outcome you cared about — capable partners, certified channel reps, compliant employees, self-sufficient customer admins? This is an effectiveness question. Reports here are harder to read directly and usually require pairing learning data with business outcomes (revenue, support ticket volume, certification audit results).

Most admins reach for completion rate when they actually want effectiveness. Completion is a proxy. It tells you the assignment was finished. It does not tell you the learning landed.

The strategic question: which of the three am I really asking? If the answer is fuzzy, sharpen it before you pick a report.


Common Business Questions and Which Report Answers Them

The single most useful reporting reference is a map from question to report. Here are the questions we hear most often:

"Who has not completed required training?" → Assignment Status Report, filtered to Not Started and Expired states, scoped by Smart Segmentation to the required population. The most common compliance question.

"Who attended the workshop?" → Workshop Attendance Report, filtered to the Attended state. For certification programs, this is the report of record.

"How many partners are certified?" → Track Progress Report or Assignment Status Report, filtered to Completed state, scoped to the certification Track and the partner segment. Confirm the certification's completion criteria match the report's definition.

"Are customers in onboarding finishing on time?" → Assignment Status Report, scoped to the customer onboarding segment, with a time filter on completion events. Use Time-to-Completion as the secondary metric.

"Where is the program stuck?" → Assignment Status Report, grouped by content within the Track. The content piece with the lowest progression rate is your bottleneck.

"Who's at risk?" → Assignment Status Report, filtered to Expired or Overdue states, sorted by days overdue. For external audiences, pair with a partner-manager or CSM rollup so the right human can intervene.

"How is my team progressing?" → Manager / Team Report, scoped by direct reports, showing assignment status and workshop attendance for the manager's team.

"Are we engaging the customer admin community?" → Engagement Report, scoped to the customer admin segment, showing logins, content views, and search behavior over time. Pair with completion data for a fuller picture.

"What does the compliance auditor need to see?" → Assignment History Report, scoped to the regulated population, with all state transitions visible. Auditors typically want to see the full trail, not just the current state.

"Did the launch reach everyone we intended?" → Assignment Status Report scoped to the launch segment, filtered to assignment creation events within the launch window. Confirm that everyone in the segment received the assignment before evaluating completion.

"Which content is being used and which is collecting dust?" → Content Engagement Report, sorted by views, completions, or time spent. Useful for retiring stale content and doubling down on what's working.

If your question is not on this list, decompose it. Most reporting questions are some combination of these.


How Metrics Are Calculated

Numbers without definitions mislead. The most common reporting failures come from admins reading metrics whose calculation they don't fully understand. Here's what's underneath the metrics you see most often.

Completion Rate. The percentage of assignments in a Completed state, scoped to a defined population. The formula is simple: completed assignments ÷ total assignments × 100. The hard part is the population. "Tier 1 partners" Completion Rate and "All partners" Completion Rate are very different numbers. Always check the scope before reacting to the number.
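
A minimal sketch of the arithmetic, assuming the population has already been scoped and Removed assignments excluded (the function and field names are illustrative, not Continu's data model):

    def completion_rate(states):
        # states: one assignment state per learner in the scoped population,
        # e.g. ["Completed", "In Progress", "Not Started"]
        total = len(states)
        completed = sum(1 for s in states if s == "Completed")
        return completed / total * 100 if total else 0.0

    # completion_rate(["Completed", "Completed", "In Progress", "Not Started"]) -> 50.0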

Attendance Rate. The percentage of registered learners who actually attended a workshop. Calculated as Attended ÷ (Attended + No-Show). Note that this excludes cancelled registrations — a learner who cancelled in advance does not count against attendance. Some teams prefer Attended ÷ All Registered, which is a stricter metric. Confirm which definition your reports use.

No-Show Rate. The complement of the attendance rate: No-Show ÷ (Attended + No-Show). A no-show rate above 30% usually indicates a reminder problem, a scheduling problem, or a value problem. Diagnose before optimizing.
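
Both attendance definitions and the no-show rate, sketched side by side with illustrative names:

    def attendance_metrics(registrations):
        # registrations: one workshop state per registration,
        # e.g. ["Attended", "No-Show", "Cancelled", "Attended"]
        attended = registrations.count("Attended")
        no_show = registrations.count("No-Show")
        held = attended + no_show            # excludes cancelled registrations
        registered = len(registrations)      # includes cancelled registrations
        return {
            "attendance_rate": attended / held * 100 if held else 0.0,
            "strict_attendance_rate": attended / registered * 100 if registered else 0.0,
            "no_show_rate": no_show / held * 100 if held else 0.0,
        }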

Engagement Rate. Activity-based, and the most variable metric across organizations. May be calculated as logins per period, content views per active learner, time spent in the platform, or some weighted combination. Always confirm the formula your team uses before drawing conclusions. Engagement is the easiest metric to gamify and the hardest to interpret.
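
Purely as an illustration of what one weighted combination could look like (the inputs and weights are invented for the example and are not Continu's formula):

    def engagement_score(logins, content_views, minutes_spent,
                         w_logins=1.0, w_views=0.5, w_minutes=0.1):
        # Invented weights, for illustration only. Agree on a definition
        # with your team before reporting this number.
        return w_logins * logins + w_views * content_views + w_minutes * minutes_spent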

SCORM Completion. SCORM packages report completion based on logic embedded in the package itself. A SCORM module may be marked Complete when the learner reaches the final slide, when they pass an embedded assessment, when they've spent a minimum time, or when they trigger a specific completion call. Continu records what SCORM tells it. If completion is showing up wrong, the issue is usually inside the SCORM package, not Continu.

Time-to-Completion. Average days from assignment creation to the assignment moving to a Completed state. A useful metric for diagnosing program friction — long times-to-completion suggest content is too long, too hard, or insufficiently prioritized. Watch for outliers; a few learners who completed months late can skew the average dramatically.
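
A quick sketch of why the median is worth reporting alongside the mean (the durations are invented for the example):

    from statistics import mean, median

    # Days from assignment creation to Completed, one value per learner.
    # Most learners finished within about a week; two finished months late.
    durations = [5, 6, 7, 7, 8, 9, 120, 150]
    print(round(mean(durations), 1))   # 39.0 - skewed by the two late completions
    print(median(durations))           # 7.5  - closer to the typical experience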

Overdue Rate. Percentage of active assignments past their due date and not yet Completed. Calculated as (Expired + Past-Due-Not-Started + Past-Due-In-Progress) ÷ (All Active Assignments). High overdue rates signal due-date design problems or audience-fit problems.
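
The same formula as a sketch, assuming Removed assignments are already excluded (state and field names are illustrative):

    from datetime import date

    def overdue_rate(active_assignments, today=None):
        # active_assignments: (state, due_date) pairs for the scoped population
        today = today or date.today()
        overdue = sum(
            1 for state, due in active_assignments
            if state == "Expired"
            or (due < today and state in ("Not Started", "In Progress"))
        )
        return overdue / len(active_assignments) * 100 if active_assignments else 0.0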

Pass / Fail Rate (for assessments). Percentage of attempts that met the passing threshold. Note: an assessment with multiple attempts allowed will show different rates depending on whether you measure first attempt, best attempt, or last attempt. Different definitions, different numbers.
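
A sketch of how the three definitions diverge from the same attempt data (names and structure are illustrative):

    def pass_rate(attempts_by_learner, passing_score, which="best"):
        # attempts_by_learner: {learner_id: [score, score, ...]} in attempt order
        # which: "first", "best", or "last" - each produces a different number
        pick = {"first": lambda s: s[0], "best": max, "last": lambda s: s[-1]}[which]
        finals = [pick(scores) for scores in attempts_by_learner.values() if scores]
        passed = sum(1 for score in finals if score >= passing_score)
        return passed / len(finals) * 100 if finals else 0.0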

If a metric you're looking at doesn't appear here, find its definition before sharing it. A metric without a definition is a guess in a confidence costume.


The Anatomy of a Report

Every report has four moving parts. Get them right and the report tells the truth. Get them wrong and it lies.

  • Population. Who is in scope. Almost always defined by a Smart Segmentation rule. Wrong population, wrong number — every time.
  • Time filter. What window the report covers. The key nuance: time filters apply to events (state changes, attendance, logins), not to assignment creation dates. A "completed in last 30 days" filter looks at when the state changed to Completed, not when the assignment was created. A "created in last 30 days" filter does the opposite.
  • State filter. Which assignment or workshop states count. The most common reporting bug is forgetting to exclude Removed assignments — they show up in some default views and inflate denominators.
  • Aggregation. How the data is grouped. By user, by content, by manager, by location, by time period. Different aggregations expose different stories from the same underlying data.

A well-formed report reads as: "[metric] for [population] over [time filter] in [state filter] grouped by [aggregation]." If you can't say it cleanly, you can't read it cleanly.
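
Here is the same anatomy as a minimal sketch. The field names, segment, and grouping are assumptions for the example, not Continu's data model; the point is the order of operations: population, then event-time window, then states, then grouping.

    from collections import Counter

    def run_report(assignments, population_ids, window_start, window_end,
                   states, group_by):
        # assignments: dicts with illustrative fields such as
        # user_id, region, state, state_changed_on
        rows = [
            a for a in assignments
            if a["user_id"] in population_ids                         # population
            and window_start <= a["state_changed_on"] <= window_end   # event time
            and a["state"] in states                                  # state filter
        ]
        return Counter(a[group_by] for a in rows)                     # aggregation

    # "Completions for the Tier 1 partner segment in Q3, grouped by region" would
    # pass that segment's user IDs, the Q3 window, {"Completed"}, and "region".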


Best Practices

Habits worth internalizing for every reporting workflow:

Define the question first. Open a notebook, not a report. Write the question in plain English. "How many Tier 1 partners completed advanced certification in Q3?" Once the question is sharp, picking the report becomes obvious.

Match the population to the question. A report scoped to "all users" rarely answers a real business question. Almost every meaningful report is scoped to a Smart Segmentation rule that defines the population precisely.

Understand state vs. event time filters. Read every time filter twice before trusting the result. Was the assignment created in this window, or did the state change in this window? They are not the same.

Don't compare apples to oranges across reports. Two reports with similar names but different scopes, time filters, or state definitions will produce different numbers. Both can be correct. Both can mislead.

Save and document custom reports. When you build a report that answers a question well, save it. Add a description that names the question, the scope, and the metric definition. Future-you and your team will thank you.

Read trends, not snapshots. A single completion rate is a number. A completion rate over six months is a story. Programs improve and decline over time; trends show direction in a way snapshots cannot.

Pair quantitative with qualitative. Completion rate alone is a weak signal. Pair it with at least one qualitative input — a survey, a manager check-in, a sample of learners describing what they took away. Numbers tell you what happened; humans tell you why.

Calibrate stakeholder expectations. Different stakeholders want different reports. Executives want rollups and trends. Managers want team-level visibility. Compliance wants audit trails. Build each report for the audience that needs it, rather than one report for everyone.

Schedule a reporting review cadence. Once a quarter, review your standing reports. Are they still answering the question they were built for? Are the populations still scoped correctly? Stale reports make stale decisions.


Anti-Patterns to Avoid

The mistakes we see most often:

  • Picking the report with the most filters and trying to make it answer everything. Generic reports with a hundred filters tend to produce numbers no one trusts. Build narrow reports that answer specific questions.
  • Confusing Completed with Completed-On-Time. A 95% completion rate sounds great until you realize half of it was completed past the due date. Completion and timeliness are different metrics (see the sketch after this list).
  • Misunderstanding "Active" in context. "Active learners" can mean logged-in-this-month, has-an-open-assignment, or is-in-an-active-segment. The label is the same. The numbers are not.
  • Reporting on bad segments and trusting the numbers. A report is downstream of its population. If your Smart Segmentation rule is wrong, your report is wrong — no matter how clean the visualization.
  • Reading completion rate as effectiveness. Completion is a proxy. It does not tell you the learner can do the thing the program was meant to teach. Pair it with outcome metrics where you have them.
  • Comparing reports with different time windows. "Q3 completion rate" and "Last 90 days completion rate" sound interchangeable. They are not, especially around quarter boundaries.
  • Sharing reports without context. A number with no definition is a Rorschach test. Always include the question the report was built to answer, the population, and the metric definition.
  • Treating reports as static documents. A report built six months ago for a population that has since changed will mislead. Reports rot. Review and refresh.
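
As referenced above, a minimal sketch of the Completed versus Completed-On-Time distinction (field names are illustrative):

    def completion_vs_on_time(assignments):
        # assignments: dicts with illustrative fields state, due_date, completed_on
        total = len(assignments)
        completed = [a for a in assignments if a["state"] == "Completed"]
        on_time = [a for a in completed if a["completed_on"] <= a["due_date"]]
        return {
            "completion_rate": len(completed) / total * 100 if total else 0.0,
            "on_time_rate": len(on_time) / total * 100 if total else 0.0,
        }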


Reporting Across the Continu Architecture

Reporting is not a layer on top of the platform. It is a different view of the same data the platform is already managing.

Reports read state from Assignments. Completion, overdue, in-progress — every assignment-state metric is a count or percentage of states across a defined population.

Reports read attendance from Workshops. Workshop reports have their own state machine — Registered, Waitlisted, Attended, No-Show, Cancelled — and the report metrics are derived from it.

Reports use Smart Segmentation for scope. The "who" of every meaningful report is a Smart Segmentation rule. Bad segments produce bad reports.

Reports are different from dashboards. A report is an answer to a specific question. A dashboard is a curated view of multiple answers, usually scoped to a role (manager, partner manager, program owner). Build reports for analysis. Build dashboards for monitoring.

Manager and partner-manager views are role-scoped reports. A manager dashboard is the same data as the global completion report — just scoped to that manager's team. The underlying logic is identical.

A well-designed reporting practice is sharp questions + clean Smart Segmentation + correct metric definitions + appropriate aggregation + a regular review cadence. The platform produces the numbers. Your job is to ask the right questions.


External Audience Patterns

External audiences typically need four reporting lenses:

Partner certification reporting. Map certification status across the partner population by tier and region. Use Track Progress Reports filtered to the certification Track. Pair with Workshop Attendance Reports for live-component certifications. Channel managers should have role-scoped versions of these reports for their own books of business.

Customer education reporting. Map onboarding completion and engagement across the customer admin population, scoped by lifecycle stage (onboarding, adoption, renewal). Coordinate with the customer success team — your reports should align with their lifecycle definitions, or your two teams will report different numbers for the same accounts.

Channel program reporting. Channel programs typically need multi-tier rollups (master distributor → reseller → end-customer). Build reports that respect the hierarchy and let the channel team see their own scope without being overwhelmed by the full data set.

Franchise operations reporting. Franchise programs typically need compliance-style reporting — proof that operators completed required training within required windows. The Assignment History Report is your friend here.

A note on data hygiene for external reporting. Every external report is downstream of a system you don't control (PRM, CRM, partner portal, franchise system). Audit the underlying segments quarterly. What was true on day one is not always true on day ninety.


Internal Audience Patterns

Internal reports tend to be more straightforward because the HRIS is usually a single source of truth.

  • Compliance dashboards — Assignment Status Reports scoped to regulated populations, sorted by overdue.
  • New hire onboarding completion — Assignment Status Reports scoped to "hires within last 90 days," grouped by department or function.
  • Manager rollups — role-scoped reports showing direct-report completion and attendance.
  • Department-level capability — completion and certification metrics scoped by department, useful for L&D budget conversations.
  • Annual review prep — historical reports showing learning completed and certifications earned over the year, scoped per employee.

Internal reporting is more forgiving than external because the underlying data is cleaner. Spend less time on defense, more time on the business question.


Known Behaviors and Limits

A few things worth knowing in advance:

  • Time filters apply to events, not to assignment creation. A "completed in last 30 days" filter looks at the timestamp of the state change, not when the assignment was created.
  • Removed assignments persist in reporting history. A removed assignment still shows up in historical reports. This is usually intentional — it preserves the audit trail — but worth knowing if your numbers look ghostly.
  • SCORM completion can lag. SCORM packages may report completion seconds, minutes, or in rare cases hours after the learner finishes. If a report doesn't show a completion you expected, refresh after a few minutes.
  • Workshop attendance may sync with a delay. Zoom, Teams, and Google Meet integrations sync attendance after the session ends. Allow time for the state to finalize before pulling the report.
  • Bulk reports may take time to generate. A report scoped to 50,000 users with a complex aggregation does not render instantly. Plan for processing time on large rollups.
  • Permissions affect what shows up. Different roles see different data. A manager sees their team. An admin sees everything. A partner manager sees their book of business. If a report looks empty, check the role of the person running it.
  • Date and time zones can cause confusion. Different reports may interpret time filters in UTC, in the user's local time, or in the admin's time zone. Confirm the time zone treatment for any high-stakes report.
  • Custom fields take time to populate after segment changes. When you add a new attribute and reorganize segments, reports may take a brief period to reflect the new state. Sanity-check before reading.


Where to Go Next

Suggested next reads:

  • How Continu Works — the foundational architecture article
  • Smart Segmentation: Designing Populations That Maintain Themselves
  • Designing Assignments: Direct vs. Automated
  • Workshop Strategy: When and How to Use Live Learning
  • Automation Design Best Practices

If you take only one thing from this guide, take this:

Define the question first. Pick the report second. Understand the metric. Then read the number. The number that survives that sequence is the only number worth acting on.

Ask sharper questions. Build cleaner segments. Trust the numbers that earn it. Ignore the ones that don't.
