Setting up Grading Criteria for Video Assessments

How to decide between a single score and a rubric — and how to design rubric criteria that make grading consistent.


Video and screen-recording assessments capture skill demonstrations — pitches, walkthroughs, scenario responses. Unlike text assessments, there's no single right answer. A human grader has to decide whether what they saw meets the standard, and you need to define what the standard is.

Continu gives you two ways to define that standard: one overall score, or a rubric. The choice affects how consistently different graders score the same submission, how actionable the feedback is for learners, and how well a score holds up when it's questioned.

For the wider frame on when a video assessment is the right format, see Assessments: Designing Knowledge Checks That Earn Their Cost.


The Two Options

Option One: Question Point Value. One overall score for the entire assessment. The grader watches the video and assigns a single number.

Used for low-stakes practice, quick capability checks, and assessments where directional feedback is sufficient ("they got it" or "they didn't"). Not the right fit when you need defensible scoring, multi-grader consistency, or learner-facing feedback that points to specific skills.

[Screenshot: Question Point Value option for video assessment grading]

Option Two: Grading Criteria (Rubric). Multiple criteria, each scored individually, with optional weighting. The grader scores each criterion and the platform combines them.

Used for any assessment where multiple skills are being demonstrated at once — a sales pitch (opening, discovery, value prop, close), a customer-service scenario (empathy, accuracy, resolution, escalation handling), a technical walkthrough (clarity, depth, accuracy, time). Also the right choice anytime grading consistency across graders matters.

[Screenshot: Grading Criteria rubric setup]

Each video assessment can have its own rubric — they're set per-assessment, not org-wide. Minimum two criteria, maximum fifteen.
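
The platform combines the criterion scores for you, so you never compute this by hand, but seeing the arithmetic a rubric implies helps when choosing weights. A minimal sketch, assuming a simple weighted average; the criterion names, weights, and 0–10 scale below are illustrative, not Continu's actual scoring model:

```python
# Illustrative only: a weighted-average model of how criterion scores
# could roll up into one overall grade. Criterion names, weights, and
# the 0-10 scale are hypothetical, not Continu's actual internals.

def combined_score(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted average of criterion scores, normalized by total weight."""
    total_weight = sum(weights.values())
    return sum(scores[name] * weights[name] for name in weights) / total_weight

weights = {"Discovery": 3, "Value prop": 3, "Opening": 2, "Close": 2}
scores = {"Discovery": 7, "Value prop": 9, "Opening": 6, "Close": 8}

print(round(combined_score(scores, weights), 1))  # 7.6
```

With these weights, Discovery and Value prop together drive 60% of the grade. That kind of deliberate imbalance is the point of weighting, as the next section covers.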


Designing a Good Rubric

The settings panel will let you add criteria, but it won't tell you whether they're useful. A few principles:

Criteria should be observable, not inferred. "Demonstrates empathy" is hard to score consistently. "Restates the customer's concern in their own words" is observable — either it happened or it didn't. Aim for the second version.

Three to five criteria is the sweet spot. Fewer than three and a rubric isn't doing much work over a single score. More than seven and graders tend to fatigue on the long list, which reduces scoring quality. The cap is fifteen, but most rubrics work better with fewer.

Weight criteria based on what matters to the outcome. If "tone" and "accuracy" both get equal weight on a technical assessment, you're saying they're equally important. Often they aren't. Give the most weight to the criteria closest to the actual job outcome.

Use short, concrete criterion names. Long, abstract names slow graders down. Short, concrete names ("Identified the root cause," "Asked clarifying questions") let graders score quickly and consistently.
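
The settings panel won't check any of this for you. If you maintain several rubrics, the structural principles are easy to encode as a pre-publish check. A hypothetical sketch, with thresholds taken from the guidance above rather than any platform rule:

```python
# Hypothetical pre-publish check that encodes the structural principles
# above. The thresholds are editorial guidance, not platform rules.

def rubric_warnings(criteria: list[tuple[str, float]]) -> list[str]:
    """Return warnings for a rubric given as (criterion name, weight) pairs."""
    warnings = []
    if not 3 <= len(criteria) <= 5:
        warnings.append(f"{len(criteria)} criteria; three to five is the sweet spot.")
    for name, _ in criteria:
        if len(name.split()) > 6:
            warnings.append(f"'{name}' is long; short names grade faster.")
    if len({weight for _, weight in criteria}) == 1:
        warnings.append("All weights are equal; was that deliberate?")
    return warnings

print(rubric_warnings([("Tone", 1), ("Accuracy", 1), ("Clarity", 1)]))
# ['All weights are equal; was that deliberate?']
```

"Observable, not inferred" is the one principle no script can check; that one still needs a human read before publishing.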


Configuration Pitfalls

Subjective Criterion Language. "Was engaging," "had good energy," "showed leadership" — these produce inconsistent scores across graders. Replace with observable behaviors before publishing.

Too Many Criteria. Twelve criteria might feel thorough, but they spread grader attention thin and lengthen grading time per submission. Three to five focused criteria typically work better than a long checklist.

Equal Weighting by Default. If you don't set weights deliberately, every criterion contributes the same share: on a four-criterion rubric, tone alone drives 25% of the grade. That's often not the intent; accuracy on a compliance pitch typically matters more than tone. Set weights intentionally.

Rubric Without Grader Calibration. Even a well-designed rubric produces inconsistent scores across graders who haven't seen the rubric applied to sample responses. Run a 15-minute calibration where graders score the same 2–3 sample videos, then discuss the differences before going live; a sketch of the kind of score tally to review follows these pitfalls.

Rubric That No Longer Fits the Skill. The skill you wrote the rubric against may not match what the program is actually developing now. Review the rubric with someone close to the work before publishing.
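
On the calibration point above: a quick numeric pass over the sample scores makes the discussion concrete. A sketch of the kind of tally to run, assuming each grader scored the same sample video per criterion on a 0–10 scale; the graders, criteria, and scores here are made up:

```python
# Hypothetical calibration tally: for each criterion, how far apart were
# graders on the same sample video? Wide spreads flag criteria whose
# wording graders read differently. All names and scores are made up.

from statistics import mean

# grader -> {criterion: score} for one shared sample video
sample_scores = {
    "Grader A": {"Restated the concern": 8, "Identified the root cause": 6},
    "Grader B": {"Restated the concern": 7, "Identified the root cause": 9},
    "Grader C": {"Restated the concern": 8, "Identified the root cause": 4},
}

for criterion in next(iter(sample_scores.values())):
    scores = [per_grader[criterion] for per_grader in sample_scores.values()]
    spread = max(scores) - min(scores)
    flag = "  <- discuss before going live" if spread >= 3 else ""
    print(f"{criterion}: mean {mean(scores):.1f}, spread {spread}{flag}")
```

A spread of one point is normal judgment variance; a spread of five means the criterion's wording, not the graders, is the problem.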


Where This Fits

You're here because you're setting up the grading side of a video or screen-recording assessment. The rubric is what makes grading consistent across eligible graders. Combine a well-designed rubric with the grader decisions in Assessment Grader Settings — together they shape grading quality.


A rubric works when its criteria are observable, weighted intentionally, and calibrated across graders.
