AI Evaluation Complete Overview and Setup Guide

Overview of the full AI Evaluation setup, including factors, question mapping, scoring labels, calibration, and automatic scoring.

What is AI Evaluation?

AI Evaluation automatically scores candidate responses using artificial intelligence. It combines evaluation factors, question mapping, scoring labels, and automatic evaluation to produce instant, consistent assessments, candidate rankings, feedback, and scorecards.

Use it when you want a structured interview review process that follows the same criteria across candidates. The setup flow defines what to evaluate, how each question maps to those criteria, and how results should be scored.

AI Evaluation Complete Process

Follow the full setup flow in order so each part of AI Evaluation has the information it needs.

1. Configure Position Level and Strictness Level

Set the position level and strictness level first so AI Evaluation is calibrated for the role before you add the scoring setup.

Use these settings to align the evaluation with the role seniority and the level of judgment you want applied during scoring.

2. Create Evaluation Factors

Create the factors you want AI to use when assessing candidate responses. Factors define the criteria behind the evaluation.

Start in the job settings, then open Settings > Customisation > AI Evaluation > Add Factors.

3. Map Factors to Questions

After you create the factors, open Settings > Customisation > AI Evaluation > Question Mapping.

Map each evaluation factor to the relevant interview question so AI knows which criteria apply to which response.
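Conceptually, the mapping works like a lookup from each question to the factors scored against it. The sketch below is an illustration only; the question and factor names are hypothetical, not identifiers from the product.

```python
# Hypothetical sketch of factor-to-question mapping.
# Question and factor names are illustrative, not product identifiers.
question_factors = {
    "Q1: Describe a recent project": ["Communication", "Technical Depth"],
    "Q2: How do you handle conflict?": ["Communication", "Teamwork"],
}

def factors_for(question: str) -> list[str]:
    """Return the evaluation factors that apply to a given question."""
    return question_factors.get(question, [])
```

A question with no mapped factors simply receives no factor-based scoring, which is why every question you want evaluated needs at least one mapping.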

4. Set Up Scoring Labels

Define the performance labels that AI uses when turning evaluation output into readable scores.

In the same AI Evaluation area, open Scoring Labels and add the labels you want to use.
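One way to think about scoring labels is as bands over a numeric score. The thresholds and label names below are assumptions for illustration, not values from the product.

```python
# Hypothetical label bands over a 0-100 score; thresholds and names are
# illustrative, not the product's actual configuration.
SCORING_LABELS = [
    (85, "Excellent"),
    (70, "Good"),
    (50, "Average"),
    (0, "Needs Improvement"),
]

def label_for(score: float) -> str:
    """Map a score to the first band whose threshold it meets."""
    for threshold, label in SCORING_LABELS:
        if score >= threshold:
            return label
    return SCORING_LABELS[-1][1]
```

Keeping the number of bands small, as the best practices below suggest, makes the resulting labels faster to read across candidates.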

5. Enable Automatic AI Evaluation

Turn on automatic evaluation so the system scores candidate responses after the interview is complete.

Automatic AI Evaluation uses your factors, question mapping, and scoring labels to generate scores and feedback without manual review.
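As a rough mental model (a sketch under assumed names, not the product's implementation), automatic scoring aggregates per-factor scores into an overall result for the scorecard:

```python
def build_scorecard(factor_scores: dict[str, float]) -> dict:
    """Combine per-factor scores (0-100) into a simple scorecard.

    Hypothetical sketch: the real system's aggregation logic,
    weighting, and output format may differ.
    """
    overall = sum(factor_scores.values()) / len(factor_scores)
    return {"factors": factor_scores, "overall": round(overall, 1)}
```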

6. Review and Monitor Results

Review the generated scores, feedback, candidate rankings, and scorecards to confirm the evaluation matches your hiring criteria.

Use the results to compare candidates consistently and adjust your setup if the scoring does not reflect the expectations for the role.

AI Evaluation Key Components

AI Evaluation works best when each setup piece has a clear role.

Evaluation factors

Evaluation factors define the criteria AI uses when judging candidate answers. They are the foundation of the scoring model.

Factor-to-question mapping

Question mapping connects each factor to the questions it applies to. This keeps scoring relevant to the content of each response.

Scoring labels

Scoring labels convert evaluation output into readable performance categories. They make results easier to interpret across candidates and reviewers.

Position level and strictness level calibration

Position level and strictness level help tune evaluation to the role. They control how the system interprets responses for different job requirements.
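One way to picture calibration (the strictness values and the adjustment rule here are assumptions for illustration) is as a shift applied to the bar a response must clear:

```python
# Hypothetical: stricter settings raise the threshold a response must meet.
# Offset values are illustrative, not the product's actual calibration.
STRICTNESS_OFFSET = {"lenient": -5, "standard": 0, "strict": 10}

def effective_threshold(base: float, strictness: str) -> float:
    """Return the adjusted score threshold for a strictness level."""
    return base + STRICTNESS_OFFSET[strictness]
```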

Automatic scoring

Automatic scoring runs after the interview is complete and returns scores without manual input.

Scorecards

Scorecards present the final evaluation output in a format that is easier to review, compare, and share.

Complete setup navigation

Use this workflow to move through the setup in the right order.

Configure calibration settings

Adjust position level and strictness level for the job.

Create evaluation factors

Open the AI Evaluation area and define the criteria you want AI to assess.

Link factors to questions

Map each factor to the interview questions it should evaluate.

Set up scoring labels

Add the labels that describe performance levels in your evaluation output.

Enable automatic evaluation

Turn on automatic AI Evaluation for the job.

Review results

Check scores, feedback, candidate rankings, and scorecards after interviews complete.

Best practices for complete AI evaluation setup

  • Define factors before you map questions so the criteria stay consistent.
  • Keep scoring labels clear and limited so reviewers can read results quickly.
  • Calibrate strictness to match the seniority and expectations of the role.
  • Review scorecards after setup to confirm the output matches your hiring process.
  • Revisit factor mapping when you change interview questions or job requirements.

Individual component pages

Separate pages cover each part of the AI Evaluation workflow in more detail.

Result

When the full setup is in place, AI Evaluation scores candidate responses consistently and gives you structured results you can review with confidence.