How to design rules that scale partner enablement, customer education, certification renewals, and onboarding programs across years of organizational change — without firing ten thousand assignments to the wrong audience.


Why Automations Exist

Manual administration scales to about a hundred learners. After that, it breaks.

A new partner activates on Tuesday. Someone has to remember to assign them onboarding. Tier 1 customers should get advanced training; that's a recurring list someone has to maintain. Certifications expire on rolling schedules; without a system, half of them expire silently. New hires arrive every Monday; one Monday, an admin goes on vacation, and a cohort starts without enablement.

Automations exist because organizations change faster than humans can keep up.

An automation is a rule that turns an event in your business — a hire, an activation, a contract signing, a tier change, a date — into a learning action, automatically, every time, without an admin in the loop.

Done well, automations turn one admin into a system that runs for years.

Done poorly, they fire ten thousand assignments to the wrong audience and require three days to clean up.

This guide is about doing them well.


What an Automation Actually Is

In Continu, an automation is a rule with three components:

  • A trigger. The event that causes the rule to evaluate. Most commonly, a change to a user's data — a new user added, an attribute updated, a certification expired.
  • A condition. The filter that determines whether the trigger applies. In practice, this is almost always a Smart Segmentation rule that defines the population the automation should serve.
  • An action. What Continu does in response. Typically the creation of an assignment, but automations can also remove assignments, send notifications, or update user attributes.

When the trigger fires and the condition matches, the action runs. That is the entire mental model.

Get the trigger right. Get the segmentation right. Choose the action deliberately. The platform does the rest.
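The trigger, condition, action model can be sketched in code. The names below are hypothetical (Continu automations are configured in the product UI, not through code); the sketch only illustrates how the three parts compose:

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Automation:
    trigger: str                       # event the rule listens for
    condition: Callable[[dict], bool]  # Smart Segmentation-style filter
    action: Callable[[dict], str]      # what happens on a match

    def handle(self, event: str, user: dict) -> Optional[str]:
        # Wrong event: the rule does not evaluate at all.
        if event != self.trigger:
            return None
        # Trigger fired but the user is outside the segment: no action.
        if not self.condition(user):
            return None
        # Trigger fired and the condition matched: the action runs.
        return self.action(user)

rule = Automation(
    trigger="user_added",
    condition=lambda u: u.get("tier") == "Tier 1",
    action=lambda u: f"assign:advanced-training:{u['id']}",
)

rule.handle("user_added", {"id": "u1", "tier": "Tier 1"})  # action runs
rule.handle("user_added", {"id": "u2", "tier": "Tier 2"})  # filtered out
```

Note that the condition is evaluated per user at fire time; the segment is the reusable part, and the automation is just the thin rule wrapped around it.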


Trigger Types and When to Use Each

The trigger is the most consequential decision in the rule. Pick the wrong one and the automation either misses people who should be served or hits people who should not.

User Added. Fires once, when a user first enters the segment that the automation watches. Use it when the program is meant to greet a learner — onboarding, activation, first-time qualification. It will not re-fire for the same user later, even if that user temporarily leaves the segment and returns.

User Updated. Fires whenever an attribute change moves a user into a segment they did not previously match. Use it when the program is keyed to a transition — promotion, tier change, role change, certification status change. The same user can trigger this automation multiple times if their attributes move them in and out of the segment.

The User Added vs. User Updated distinction is the most common source of automation bugs. If you mean "when this person first qualifies, give them this program," use User Added. If you mean "every time this person's situation changes and they qualify again, give them this program," use User Updated. The two are not interchangeable.
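The difference is easiest to see as code. This is a hypothetical sketch of the two firing semantics described above, not Continu's implementation:

```python
# User Added: fires once per user, the first time they enter the segment.
already_served: set[str] = set()

def user_added_fires(user_id: str, in_segment: bool) -> bool:
    if in_segment and user_id not in already_served:
        already_served.add(user_id)
        return True
    return False

# User Updated: fires on every transition from outside the segment to inside.
def user_updated_fires(was_in_segment: bool, now_in_segment: bool) -> bool:
    return now_in_segment and not was_in_segment

# A learner who leaves the segment and returns: User Added stays quiet
# the second time, User Updated fires again on every re-entry.
user_added_fires("partner-17", True)   # True: first qualification
user_added_fires("partner-17", True)   # False: already served
user_updated_fires(False, True)        # True: entered the segment
user_updated_fires(True, True)         # False: no transition
```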

Event-based. Fires on specific platform or integration events — a workshop completing, a Track ending, a course failing, a content piece being added. Use it for follow-on programs that depend on a prior outcome.

The strategic question: what is the real-world moment the automation should respond to? Pick the trigger that maps cleanly to that moment, not the trigger that's easiest to configure.


The Anatomy of a Well-Designed Automation

Every well-designed automation has six attributes:

  • A clear trigger that maps to a real event in the business.
  • A sharp segmentation rule that names the population precisely — neither too broad nor too narrow.
  • A single, well-defined action that the rule produces.
  • An idempotent design — if the trigger fires twice for the same user, the result should be predictable, not chaotic.
  • A documented owner so future-you and your team know who to ask when the rule needs changing.
  • A version label so you can compare and roll back without breaking active programs.

Automations that are missing any of these six tend to drift, produce surprise outcomes, or quietly break programs that were running fine yesterday.


Best Practices

Principles worth internalizing before you activate a single rule:

Build segmentation first, automations second. An automation is only as good as the population it targets. If your Smart Segmentation rule is wrong, no amount of automation cleverness will save you. Get the segment right. Confirm it produces the population you expect. Then build the automation on top.

Design for idempotency. Idempotency means a trigger firing twice produces the same result as firing once. In practice, this means thinking about: what happens if the same user triggers this rule multiple times? Will they receive duplicate assignments? Will reports show double counts? Will the learner get notification fatigue? Build the rule so the answer is "no" before activating.
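One common idempotency technique is to key each action on a (user, content) pair and skip anything already created. A hypothetical sketch:

```python
# Deduplicate on a stable key so a double-fire creates nothing new.
existing_assignments: set[tuple[str, str]] = set()

def assign_once(user_id: str, track_id: str) -> bool:
    """Create the assignment only if this user/track pair has none yet."""
    key = (user_id, track_id)
    if key in existing_assignments:
        # Second fire: same end state, no duplicate, no extra notification.
        return False
    existing_assignments.add(key)
    return True

assign_once("u1", "onboarding-track")  # True: assignment created
assign_once("u1", "onboarding-track")  # False: fired again, nothing duplicated
```

The same key discipline keeps reports clean: one assignment per user per program, no matter how many times the trigger fires.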

Use clear naming conventions. A name like "New Partner Onboarding — Tier 2 — North America — v3" tells you the trigger, the audience, the geography, and the version at a glance. A name like "Test 4" tells you nothing. Future-you and your team will need to find this rule a year from now. Make that easy.

Version, don't overwrite. When you change the conditions of an active automation, version it. Keep the old rule disabled and tagged so you can roll back. Compare the populations the two versions produce before retiring the original. Quiet breakage hides in undocumented edits.

Plan for retroactive evaluation. When you activate a new automation, the platform may evaluate it against your existing user base and fire the action for everyone who already matches the segment. This is sometimes what you want; sometimes it surprises an audience that did not expect the program. Always confirm before activating.

Limit one automation per program goal. It is tempting to bundle multiple actions into a single rule. Don't. One automation, one outcome. If the program needs three things to happen, build three rules and chain them deliberately. Bundled rules become unmaintainable.

Set a review cadence. Automations rot. Source data shifts, segments drift, content gets updated, programs retire. Review every active automation at least annually. Confirm it still produces the right population, still produces the right outcome, and still has an owner.


The Pre-Activation Checklist

Before you flip an automation from inactive to active, walk through this checklist:

  • Have I confirmed the segmentation produces the population I expect? If no, fix the segment first.
  • Do I know what happens if this rule fires twice for the same user? If no, design idempotency.
  • Do I know what happens if a user moves out of the segment after the automation has run? Plan offboarding deliberately.
  • Is the content the rule will assign actually finalized? Drafts in production teach learners that "draft" doesn't matter.
  • Are the downstream notifications appropriate for the audience? Especially for external audiences who may not tolerate excessive nudges.
  • Have I documented the rule's owner and purpose? If you got hit by a bus tomorrow, would your team know what to do with this rule?
  • Have I considered retroactive impact? When I activate this, who currently matches the segment, and is that intended?
  • Is the rule named so a stranger can find it in six months? Naming is documentation.
  • Have I scheduled a review date? Six months, twelve months — pick one and put it on the calendar.

A rule that passes all nine of these is ready to ship. A rule that fails any of them is not.


Anti-Patterns to Avoid

The mistakes we see most often:

  • Stacking too many actions on one trigger. When one rule creates an assignment, sends three notifications, removes a different assignment, and updates a custom field, debugging becomes nightmarish. One outcome per rule.
  • Choosing User Updated when User Added is right. A new-hire onboarding rule built on User Updated will re-fire every time the employee changes department for the rest of their career.
  • Choosing User Added when User Updated is right. A tier-change capability path built on User Added will only ever fire once per user — meaning a partner who moves Tier 1 → Tier 2 → Tier 3 only gets the first program.
  • No idempotency consideration. Rules that fire twice and create duplicate assignments are common, embarrassing, and a leading cause of "why did everyone get this assignment again" tickets.
  • Hard-coding values that change. A rule that filters on "Plan equals Pro" breaks the day Marketing renames the plan to "Professional." Use stable identifiers when you can.
  • Automations without owners. Orphan automations accumulate over years. They fire reliably, but no one knows what they were built for or whether they are still correct.
  • Automations that outlive their programs. A rule built for a 2024 launch that no one remembers to disable will fire for new users in 2027. Set retirement dates.
  • Skipping the test step. "I'll just activate it and watch it" is how ten thousand wrong assignments happen. Test segments are cheap. Misfires are expensive.
  • Treating activation as final. Activation is the start of the rule's life, not the end. Review, revise, retire on a cadence.
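The "hard-coding values that change" anti-pattern is worth one concrete illustration. Field names here are hypothetical:

```python
# The same user record, after Marketing renames the "Pro" plan.
user = {"plan_id": "plan_2481", "plan_name": "Professional"}

# A rule filtering on the display name silently stops matching...
matches_by_name = (user["plan_name"] == "Pro")    # False after the rename

# ...while a rule filtering on the stable identifier keeps working.
matches_by_id = (user["plan_id"] == "plan_2481")  # still True
```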


Common Automation Patterns

The patterns you'll reach for again and again:

New partner activation. Trigger: User Added when partner contact is activated in PRM. Condition: Smart Segmentation by tier and region. Action: assign the appropriate Reseller Onboarding Track.

Customer onboarding kickoff. Trigger: User Added when customer admin first appears in the segment "Customers in onboarding." Action: assign the customer education kickoff Track and register the admin for the next available kickoff workshop.

Certification renewal sequence. Three rules chained on date-based triggers — 60 days before expiry, send the renewal Track; 30 days before, send a reminder; 7 days before, escalate to the partner manager. All scoped by Smart Segmentation to the right tier and region.
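The renewal sequence is three date-offset rules against one expiry date. A purely illustrative sketch of the offsets:

```python
from datetime import date, timedelta

def renewal_milestones(expiry: date) -> dict[str, date]:
    """The three chained trigger dates for one certification."""
    return {
        "send_renewal_track":  expiry - timedelta(days=60),
        "send_reminder":       expiry - timedelta(days=30),
        "escalate_to_manager": expiry - timedelta(days=7),
    }

# A certification expiring December 1, 2025 fires its renewal Track
# on October 2, its reminder November 1, and its escalation November 24.
renewal_milestones(date(2025, 12, 1))
```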

Role-change capability path. Trigger: User Updated when role attribute changes. Condition: Smart Segmentation by new role. Action: assign the role-specific Track. Pair with an offboarding rule that retires the old role's Track if the change should remove the prior program.

Recurring compliance. Trigger: date-based, annually. Condition: Smart Segmentation by regulated population. Action: assign the compliance Track with a fixed due date.

Re-engagement nudges. Trigger: User Updated when "last activity" attribute crosses a threshold (e.g., 30 days inactive). Condition: Smart Segmentation by program enrollment. Action: send a re-engagement notification, optionally manager-cc'd.
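The inactivity threshold behind a nudge like this is a simple date comparison. A hypothetical sketch:

```python
from datetime import date, timedelta

def needs_nudge(last_activity: date, today: date,
                threshold_days: int = 30) -> bool:
    """True once the learner has been inactive at least threshold_days."""
    return (today - last_activity) >= timedelta(days=threshold_days)

needs_nudge(date(2025, 1, 1), date(2025, 2, 15))  # 45 days idle: nudge
needs_nudge(date(2025, 2, 1), date(2025, 2, 15))  # 14 days idle: leave alone
```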

Cohort enrollment. Trigger: User Added when a learner enters a "Cohort October 2025" segment. Action: assign the cohort Track and register for cohort workshops. Retires when the cohort completes.

Offboarding cleanup. Trigger: User Updated when status moves to inactive or terminated. Action: remove active assignments and revoke content access. Often paired with a manager notification.


Automations in the Continu Architecture

Automations are the engine that connects every other object in the platform.

Smart Segmentation defines who the rule applies to. Without good segmentation underneath, the automation either misses people or hits the wrong ones.

Assignments are the most common output. Most automations exist to create the right assignment for the right person at the right moment. Direct assignment is fine for one-off pushes; automation is what makes recurring programs possible.

Notifications are often paired with automations. A new assignment notification, a workshop registration confirmation, a renewal reminder — many of these are produced by automation rules.

Reporting reads the state changes that automations produce. When a rule creates an assignment, that assignment shows up in reports. When a rule removes one, the report reflects the removal. Automations write the data that reports interpret.

Content sits underneath the action. Every rule that creates an assignment is pointing at a piece of content. The cleaner the content library, the cleaner the automation outputs.

A well-designed automation is a clear trigger + a sharp Smart Segmentation rule + a single deliberate action + idempotent design + a clear owner + a clear name + a documented purpose + a review cadence. The platform handles the firing. Your job is to design the rule.


External Audience Patterns

External audiences are where automation pays the biggest dividends — because the audience is large, fast-changing, and not in your direct control.

Partner programs. The most reliable trigger source is your PRM. Build automations on PRM-driven user creation, partner activation events, and tier-change events. Always confirm the PRM's data quality before designing rules — your automation is only as accurate as the source.

Customer education. Build automations on CRM-driven events: customer signs, customer plan changes, customer admin user added. Coordinate with the customer success team — your automation triggers should align with their lifecycle definitions, or your reports will tell different stories.

Channel certification. Build automations on date-driven and tier-change triggers. Channel programs often have multi-tier hierarchies (master distributor → reseller → end-customer), and automations should respect the hierarchy — content and timing flow differently at each level.

Franchise programs. Build automations on franchise-system-of-record events: ownership changes, location openings, regional rollouts. Franchise data tends to change less frequently than partner data, but the changes are higher-stakes (a new operator running a location).

A note on data hygiene for external automations. Every external automation depends on a system you do not control. Build defensively: add fallback conditions, watch for null values, audit segment populations on a recurring cadence. What is true on day one is not always true on day ninety.
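"Watch for null values" in practice means failing closed when the source system sends incomplete records. A defensive segment check might look like this (field names hypothetical):

```python
def in_tier1_na_segment(user: dict) -> bool:
    """Match Tier 1 / North America, excluding records with missing data."""
    tier = user.get("tier")
    region = user.get("region")
    if tier is None or region is None:
        # Incomplete source record: fail closed rather than guessing,
        # and surface it in the next segment audit.
        return False
    return tier == "Tier 1" and region == "NA"

in_tier1_na_segment({"tier": "Tier 1", "region": "NA"})  # True
in_tier1_na_segment({"tier": "Tier 1"})                  # False: region missing
```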


Internal Audience Patterns

Internal automations are easier because the HRIS is usually a single source of truth.

  • HRIS-driven triggers — hire date, role change, manager change, location change, employment status change. The HRIS is your most reliable data source.
  • Manager-direct-report triggers — fires when an employee gets their first direct report (assigning manager training).
  • Compliance recurrence — date-based, annually or per regulatory cycle.
  • Reorg-driven triggers — fires on department or organizational unit change. Useful but high-volume during reorg events; consider rate-limiting if your rules fan out.

Internal automations carry less data risk than external ones, but the principles are identical. Test, name, version, document.


Known Behaviors and Limits

A few things worth knowing in advance:

  • Trigger evaluation timing varies by trigger type. User Added and User Updated are typically near-real-time. Date-based triggers fire on the schedule you set. Event-based triggers depend on the source event's latency.
  • Activating an automation evaluates it against existing users. A new "User Added" rule may fire retroactively for everyone who currently matches the segment. Always test before activating in production.
  • Simultaneous triggers can produce ordering surprises. If two rules both fire on the same user at the same moment, the order they execute may not be deterministic. Avoid building rules that depend on a specific ordering.
  • Bulk actions take time to process. A rule that creates assignments for 10,000 users does not complete instantly. Plan rollouts accordingly.
  • Disabled automations stop firing for new users but do not retroactively undo prior actions. If you need to undo, you have to remove the affected assignments manually.


Where to Go Next

Suggested next reads:

  • How Continu Works — the foundational architecture article
  • Smart Segmentation: Designing Populations That Maintain Themselves
  • Designing Assignments: Direct vs. Automated
  • Workshop Strategy: When and How to Use Live Learning
  • Reporting: Which Report Should I Use?

If you take only one thing from this guide, take this:

Automations are leverage. Build them like you'll come back in three years — because someone will. Test before activating. Name them so a stranger can find them. Version them so you can roll back. Review them so they don't rot.

Design the rule like the rule will outlive you. Because the good ones do.
