How to tag content so the right people find it. This is the practical guide — what tags do, how to choose them, and the patterns that keep a tag library useful instead of bloated.

What tags do

A tag is a label attached to a piece of content (Article, Video, Workshop, Track, Journey, Assessment). Tags drive three things in Continu:

  1. Discoverability in Explore. Tags surface as filters and topics so learners can browse by subject without knowing what to search.
  2. Search ranking. When a learner searches, tag matches are weighted heavily — content tagged with the search term ranks above content that only mentions it in the body.
  3. Recommendations. Eddy and the recommendation engine use tags as one of the strongest signals for what to surface next.

What tags do not do: tags do not control who can see content (visibility is set separately on each item), and tags are not how Smart Segmentation works. Smart Segmentation operates on user attributes — Department, Location, Role, custom fields — to define dynamic groups of users. Tags live on content; Smart Segmentation lives on users. They are different layers.

Where tags live in Continu

Two different things are called “tags” in Continu. They are not interchangeable.

Content tags

These are the everyday tags applied to a piece of content. Set them in the editor when you create or edit any content type:

  • Articles — Tags field in the article editor sidebar
  • Videos — Tags field in the video editor sidebar
  • Workshops — Tags field in Workshop settings
  • Tracks and Journeys — Tags field in the Track or Journey settings
  • Assessments — Tags field in the Assessment settings

A piece of content can carry as many content tags as you assign. The practical sweet spot is three to seven.

Skills (capability tags)

Skills are a separate concept. They are capability tags attached to user profiles, not to content. A user has the Skill “Forklift Operator”; an Article does not. Skills power competency tracking, Skill-gap reporting, and certain assignment patterns.

Don’t try to use one type to do the other type’s job. If you find yourself adding the same tag to every piece of content and every user, you have a Skill — model it as a Skill.

How to add a tag

  1. Open the content item in the editor (Article, Video, Workshop, Track, Journey, or Assessment)
  2. In the right-hand settings panel, find the Tags field
  3. Type a tag name. If a matching tag already exists, it appears in the dropdown — select it. If nothing matches, press Enter to create a new tag.
  4. Repeat until the item has the tags it needs
  5. Save

New tags are created instantly and immediately available across the org. Anyone with content editing rights can create a new tag. This is what makes a tag library helpful and also what makes it sprawl, which is why the governance section below matters.

Let AI suggest tags for you

Continu can suggest tags automatically based on a content item’s title, description, and body. This is a fast way to seed a piece of content with relevant tags pulled from your existing library — especially useful when an author is unfamiliar with the taxonomy or when you’re back-tagging older content in bulk.

How to use AI tag suggestions:

  1. Open the content item in the editor
  2. In the Tags field, trigger the suggestion (the exact control depends on your Continu version — look for a wand, sparkle, or “Suggest tags” affordance on the Tags field, or ask Eddy)
  3. Continu analyzes the content and proposes a set of tags drawn from your existing library
  4. Review each suggestion. Accept the ones that fit, decline the ones that don’t.
  5. Add any additional tags by hand

Use AI suggestions to save time, not to replace taxonomy judgment. A few things worth keeping in mind:

  • AI suggestions reflect the content, not your organization’s vocabulary. If your convention is manager-essentials and the AI proposes Manager Essentials, decline the suggestion and apply the correct form.
  • Suggestions can over-tag. The model is biased toward finding more matches; it does not know your three-to-seven sweet spot.
  • AI does not know which surfaces depend on which tags. If a tag has an Explore page or a collection riding on it, only the taxonomy owner should change that tag — AI suggestions shouldn’t silently remove or rename it.
  • Review the suggested tags against your canonical taxonomy list before saving.

The pattern that works well: an author drafts content, runs AI tag suggestions to get a starting set, reviews against the taxonomy, and then publishes. The taxonomy owner spot-checks new content during the quarterly audit.
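
The review step in that pattern can be written out as a small filter: normalize each AI suggestion and keep only the ones that match your canonical list. This is a sketch of the manual review logic, not a Continu API; the CANONICAL set and the suggestion strings are hypothetical examples.

```python
# Hypothetical canonical tag list; replace with your own taxonomy.
CANONICAL = {"manager-essentials", "onboarding", "sales", "compliance"}

def review_suggestions(suggested, canonical=CANONICAL):
    """Normalize each AI suggestion and keep only canonical forms.

    Returns (accepted, flagged): flagged tags need a human decision,
    either mapping to an existing tag or proposing a new one.
    """
    accepted, flagged = [], []
    for tag in suggested:
        norm = tag.strip().lower().replace(" ", "-").replace("_", "-")
        if norm in canonical:
            accepted.append(norm)
        else:
            flagged.append(tag)
    return accepted, flagged

accepted, flagged = review_suggestions(["Manager Essentials", "leadership tips"])
# "Manager Essentials" normalizes to the canonical manager-essentials and is
# accepted; "leadership tips" has no canonical match and is flagged for review.
```

The point of the flagged list is that a human, not the script, decides whether a non-matching suggestion becomes a new tag.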

Best practices

Decide your tag taxonomy before you start adding content

The single highest-leverage thing you can do is define your tag vocabulary upfront, before content starts piling up. Spend an hour with your CS lead and your content owners. List out:

  • The topics your content covers — for example, sales, customer-service, product-knowledge, compliance
  • The audiences content is for — for example, new-hire, manager, individual-contributor
  • The format / use if it matters to you — for example, reference, quickstart, policy, how-to
  • The lifecycle / status if you track it — for example, evergreen, seasonal, deprecating

Write the taxonomy down. Share it with everyone who can edit content. Revisit it once a quarter.
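
Writing it down can be as lightweight as a shared doc, but a structured form makes the list usable by validation scripts later. A minimal sketch, using the example tags from this guide (the dimension names and structure are illustrative, not a Continu feature):

```python
# The canonical tag taxonomy, written down as a reviewable artifact.
# Dimensions and tags are the examples from this guide; yours will differ.
TAXONOMY = {
    "topic": ["sales", "customer-service", "product-knowledge", "compliance"],
    "audience": ["new-hire", "manager", "individual-contributor"],
    "format": ["reference", "quickstart", "policy", "how-to"],
    "lifecycle": ["evergreen", "seasonal", "deprecating"],
}

# A flat set of every approved tag, handy for spot-checking new content:
APPROVED = {tag for tags in TAXONOMY.values() for tag in tags}
```

Keeping one flat APPROVED set alongside the dimensions means any script (or any author) can answer "is this an approved tag?" in one lookup.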

Use a consistent naming convention

Pick one format and stick to it across the whole library. Recommended:

  • Lowercase with hyphens for multi-word tags: manager-essentials, not Manager Essentials or manager_essentials
  • Singular nouns rather than plural: policy, not policies
  • No prefixes unless they’re load-bearing: sales is better than topic-sales

Inconsistency creates duplicate tags that Continu treats as separate. Manager Essentials, manager-essentials, and Manager-Essentials are three different tags. Search and Explore will only match what they’re given, so the same content ends up split across three “tags” that should be one.
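
One way to keep variants from proliferating is to normalize tag strings before creating them. A sketch of the convention (lowercase, hyphen-separated; singularization is deliberately left out because it needs human judgment):

```python
import re

def normalize_tag(raw: str) -> str:
    """Normalize a tag to lowercase-with-hyphens form."""
    tag = raw.strip().lower()
    tag = re.sub(r"[\s_]+", "-", tag)  # spaces and underscores become hyphens
    return re.sub(r"-{2,}", "-", tag)  # collapse runs of hyphens

# All three variants of the same tag collapse to one canonical form:
variants = ["Manager Essentials", "manager_essentials", "Manager-Essentials"]
assert {normalize_tag(v) for v in variants} == {"manager-essentials"}
```

Continu itself doesn't enforce this; the function is just the convention written down so a bulk-tagging or audit script can apply it consistently.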

Tag for the searcher, not the author

When you’re picking tags, ask: what would a learner type or click if they needed this content? Use those words. Internal codenames, project names, and editorial categorizations are bad tags because nobody outside the team searches for them.

If your customer-facing term for a process is “onboarding,” the tag is onboarding, even if your internal name for the process is “Project Wavelength.”

Pair one broad topic tag with one or two specific tags

A useful pattern: every piece of content carries one broad topic tag and one or two narrow ones.

  • Broad: sales, compliance, product-knowledge, leadership
  • Specific: discovery-calls, gdpr, feature-x-launch, coaching-conversations

This way the same content surfaces both when someone browses the topic (sales) and when someone searches the specific concept (discovery-calls).

Don’t tag the obvious format

If the content type is already a Video, don’t tag it video; if it’s a Workshop, don’t tag it workshop. Format is already structured data — tags should add information the system doesn’t already have.

The exception: format tags can be useful when they signal something the content type doesn’t, like microlearning for a short Video versus a long one, or live-session for a Workshop that requires attendance versus an on-demand recording.

Keep the count between three and seven

Fewer than three and the content is hard to find unless someone already knows it exists. More than seven and you’re tagging for completeness rather than for findability — and tag-stuffed content trains the search ranker that the tags are noise, which weakens every tag’s signal.

If you need more than seven, you probably have two pieces of content stitched together. Consider splitting it.

Audit the tag library quarterly

Tags accumulate. Once a quarter, an admin should:

  1. Pull the list of all tags in use (Admin > Admin Utilities > Tags)
  2. Identify duplicates (different casing, plurals, typos) and merge them — pick the canonical form, retag the content that uses the wrong form, then delete the orphan tag
  3. Identify single-use tags — tags applied to exactly one piece of content. These are usually mistakes, abandoned experiments, or candidates for a broader existing tag.
  4. Identify tags that no longer match what the org talks about (renamed teams, sunset products, deprecated processes) and either rename them or sunset the tag with the content.

A 90-minute audit per quarter keeps the library tight. Skipping it for a year costs a multi-day cleanup.
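
Steps 2 and 3 of the audit lend themselves to a script. A sketch, assuming you can export the tag list as name-to-usage-count pairs (the export shape is an assumption; the Admin > Admin Utilities > Tags list is the real source):

```python
import re
from collections import defaultdict

def audit(tag_counts):
    """Find likely duplicate groups and single-use tags.

    tag_counts: {tag_name: number_of_content_items_using_it}
    """
    groups = defaultdict(list)
    for tag in tag_counts:
        norm = tag.lower().replace(" ", "-").replace("_", "-")
        norm = re.sub(r"s$", "", norm)  # crude plural check; misses -ies forms
        groups[norm].append(tag)
    duplicates = {k: v for k, v in groups.items() if len(v) > 1}
    single_use = [t for t, n in tag_counts.items() if n == 1]
    return duplicates, single_use

dupes, singles = audit({
    "manager-essentials": 12,
    "Manager Essentials": 3,
    "workshop": 9,
    "workshops": 2,
    "q3-2024": 1,
})
# dupes groups the manager-essentials variants and workshop/workshops;
# q3-2024 is flagged as single-use.
```

Merging and retagging still happen in the product; the script only shortens the find phase of the audit.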

Anti-patterns to avoid

These are the tagging habits that consistently break search and Explore. Watch for them.

Personal tags

john-favorites, for-katie, our-team-content. These are bookmarks, not tags. Use Groups or assignments to give specific people access to specific content.

Date-stamped tags

q3-2024, fy25-kickoff, january-launch. These age out and become noise. If a piece of content is genuinely tied to a date, set its date metadata; don’t encode it as a tag.

Mood / editorial tags

important, must-read, critical. Every author thinks their own content is important. The tag carries no signal.

Verb tags

learn-about-x, how-to-do-y. Tags should be nouns. The content type and title carry the verb.

Synonym sprawl

workshop, class, session, live-training, training. Pick one term, use it consistently. Synonyms split the signal across tags that should be one.

Tag-as-folder

team-engineering-onboarding-week-one. This is a folder path masquerading as a tag. Decompose: engineering, onboarding, week-one.

Tags and search

The Continu search engine treats tag matches as a strong signal. Specifically:

  • An exact tag match ranks above a title match for that term
  • An exact tag match ranks far above a body-text match
  • A partial tag match (the search term appears inside a longer tag) ranks similarly to a title match

This is why getting the tag string right matters more than getting the title perfect. If your most important manager content carries the tag manager-essentials, every search for that phrase will surface it first.
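
Those rules can be illustrated with a toy scorer. The weights are made up; only the ordering (exact tag above title, title above body, partial tag roughly equal to title) comes from the rules above. Continu’s real ranker is not public.

```python
def score(query: str, item: dict) -> int:
    """Toy relevance score reflecting the tag-weighting rules above."""
    q = query.lower()
    tags = [t.lower() for t in item.get("tags", [])]
    s = 0
    if q in tags:
        s += 100                        # exact tag match: strongest signal
    elif any(q in t for t in tags):
        s += 40                         # partial tag match, about a title match
    if q in item.get("title", "").lower():
        s += 40                         # title match
    if q in item.get("body", "").lower():
        s += 10                         # body-text mention: weakest signal
    return s

tagged = {"title": "Coaching basics", "tags": ["manager-essentials"], "body": ""}
titled = {"title": "Manager-essentials roundup", "tags": [], "body": ""}
body_only = {"title": "Intro", "tags": [], "body": "covers manager-essentials"}
# The tagged item outranks the title match, which outranks the body mention.
```

The takeaway the sketch makes concrete: a correct tag string moves an item further up the results than any amount of body-text repetition.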

If a piece of content isn’t ranking where you’d expect, the first thing to check is whether the tag is applied and spelled the way searchers would type it.

Governance

For a tag library to stay useful at scale, three things need a clear owner:

Taxonomy owner

One person (usually a CS lead or content operations lead) owns the canonical tag list. They make the call when there’s ambiguity (“is this coaching or feedback?”) and they run the quarterly audit.

Content authors

In practice, everyone with editing rights can create new tags. Best practice is to default to existing tags from the canonical list and create a new tag only when nothing fits — and when that happens, to let the taxonomy owner know.

Owners of tag-dependent surfaces

Anyone building an Explore page, in-app collection, or recommendation surface that depends on a specific tag needs to know what tag they’re depending on. Document the dependency somewhere central so the taxonomy owner doesn’t break things during a cleanup audit.

Quick reference

  • Do: lowercase, hyphenated, singular. Don’t: Title Case With Spaces or plurals.
  • Do: tag for the searcher. Don’t: tag for the author.
  • Do: 3–7 tags per content item. Don’t: 0–2 (invisible) or 10+ (noise).
  • Do: one broad tag plus one or two specific. Don’t: all broad, all specific, or all the same level.
  • Do: nouns. Don’t: verbs.
  • Do: audit the library quarterly. Don’t: let tags accumulate for years.
  • Do: use AI suggestions, then review. Don’t: accept AI suggestions without checking the taxonomy.
  • Do: document where each tag is depended on. Don’t: hard-code a tag into a surface without recording it.
