Core Grading¶
Fundamental types for rubric-based evaluation: criteria, rubrics, verdicts, and evaluation reports.
Overview¶
The core grading module provides the foundational types for defining evaluation criteria and receiving grading results. A Rubric contains multiple Criterion objects, each with a weight and requirement. Grading produces an EvaluationReport with per-criterion verdicts and explanations.
Quick Example¶
```python
from autorubric import Rubric, Criterion, CriterionVerdict, LLMConfig
from autorubric.graders import CriterionGrader

# Define criteria
rubric = Rubric([
    Criterion(name="accuracy", weight=10.0, requirement="States the correct answer"),
    Criterion(name="clarity", weight=5.0, requirement="Explains reasoning clearly"),
    Criterion(weight=-15.0, requirement="Contains factual errors"),  # name optional
])

# Or from dict/file
rubric = Rubric.from_dict([
    {"weight": 10.0, "requirement": "States the correct answer"},
    {"requirement": "Explains reasoning clearly"},  # weight defaults to 10.0
])
rubric = Rubric.from_file("rubric.yaml")

# Grade
grader = CriterionGrader(llm_config=LLMConfig(model="openai/gpt-4.1-mini"))
result = await rubric.grade(to_grade="...", grader=grader)
print(f"Score: {result.score:.2f}")
for cr in result.report:
    print(f"  [{cr.final_verdict}] {cr.criterion.requirement}")
```
Score Calculation¶
For each criterion \(i\) with weight \(w_i\), the contribution \(c_i\) is:

- If verdict = MET, contribution \(c_i = w_i\)
- If verdict = UNMET, contribution \(c_i = 0\)

Final score:

\[
\text{raw\_score} = \sum_i c_i
\]

When the grader normalizes (the default), `score` is scaled into the 0-1 range; with `normalize=False` it equals the raw weighted sum (see EvaluationReport below).
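For the quick-example rubric above: if "accuracy" (weight 10) and "clarity" (weight 5) are MET while the penalty criterion (weight -15) is UNMET, then \(\text{raw\_score} = 10 + 5 + 0 = 15\).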
Criterion¶
A single evaluation criterion with weight and requirement.
Criterion¶
Bases: BaseModel
A single evaluation criterion with a weight and requirement description.
Supports both binary (MET/UNMET) and multi-choice criteria. If options is None,
the criterion is binary. If options is provided, the criterion is multi-choice.
| ATTRIBUTE | DESCRIPTION |
|---|---|
| `weight` | Scoring weight. Positive for desired traits, negative for errors/penalties. Defaults to 10.0 for uniform weighting when not specified. |
| `requirement` | Description of what the criterion evaluates. |
| `name` | Optional short identifier for the criterion (e.g., "clarity", "accuracy"). Useful for referencing criteria in reports and debugging. |
| `options` | List of options for multi-choice criteria. If None, the criterion is binary. |
| `scale_type` | For multi-choice, indicates whether options are ordinal (ordered) or nominal (unordered categories). Affects aggregation strategy selection. |
| `aggregation` | Per-criterion aggregation strategy override. If None, uses the grader default. |
Example
Binary criterion (existing behavior)¶
```python
binary = Criterion(
    name="accuracy",
    weight=10.0,
    requirement="The response is factually accurate",
)
```
Multi-choice ordinal criterion¶
```python
ordinal = Criterion(
    name="satisfaction",
    weight=10.0,
    requirement="How satisfied would you be?",
    options=[
        CriterionOption(label="1", value=0.0),
        CriterionOption(label="2", value=0.33),
        CriterionOption(label="3", value=0.67),
        CriterionOption(label="4", value=1.0),
    ],
    scale_type="ordinal",
)
```
get_option_value¶
Get the score value for an option by index.
| PARAMETER | DESCRIPTION |
|---|---|
| `index` | Zero-based index of the option. |

| RETURNS | DESCRIPTION |
|---|---|
| `float` | The score value for the option. |

| RAISES | DESCRIPTION |
|---|---|
| `ValueError` | If this is a binary criterion or the index is out of range. |
Source code in src/autorubric/types.py
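Continuing the `ordinal` criterion from the example above, a minimal sketch of looking up an option's value by index:

```python
# Third option ("3") of the ordinal satisfaction criterion; indices are zero-based
value = ordinal.get_option_value(2)
print(value)  # 0.67

# A binary criterion (options=None) raises ValueError instead
# binary.get_option_value(0)  # -> ValueError
```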
find_option_by_label¶
Find option index by label (case-insensitive, whitespace-normalized).
Used for resolving ground truth labels to indices for metrics computation.
| PARAMETER | DESCRIPTION |
|---|---|
| `label` | The label to search for. |

| RETURNS | DESCRIPTION |
|---|---|
| `int` | Zero-based index of the matching option. |

| RAISES | DESCRIPTION |
|---|---|
| `ValueError` | If this is a binary criterion or the label is not found. |
Source code in src/autorubric/types.py
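A short sketch of resolving a ground-truth label to an option index, again using the `ordinal` criterion from above. Matching is case-insensitive and whitespace-normalized:

```python
# " 3 " normalizes to "3" and matches the third option
index = ordinal.find_option_by_label(" 3 ")
print(index)  # 2
print(ordinal.get_option_value(index))  # 0.67
```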
validate_options¶
validate_options() -> Criterion
Validate multi-choice options if present.
Source code in src/autorubric/types.py
CriterionVerdict¶
Enum representing the verdict for a criterion.
CriterionVerdict¶
Bases: str, Enum
Status of a criterion evaluation.
- MET: The criterion is satisfied by the submission
- UNMET: The criterion is not satisfied by the submission
- CANNOT_ASSESS: Insufficient evidence to make a determination
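A small illustration of checking verdicts on a grading result (assumes the binary-criteria setup from the Quick Example, where `result` is an EvaluationReport):

```python
from autorubric import CriterionVerdict

for cr in result.report:
    # For binary criteria, `verdict` holds a CriterionVerdict member
    if cr.verdict == CriterionVerdict.CANNOT_ASSESS:
        print(f"Could not assess: {cr.reason}")
```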
CriterionReport¶
Per-criterion result with verdict and explanation.
CriterionReport¶
Bases: Criterion
A criterion with its evaluation result.
Supports both binary (MET/UNMET/CANNOT_ASSESS) and multi-choice verdicts.
For binary criteria, use verdict. For multi-choice, use multi_choice_verdict.
| ATTRIBUTE | DESCRIPTION |
|---|---|
| `verdict` | Binary verdict (MET/UNMET/CANNOT_ASSESS). None for multi-choice criteria. |
| `multi_choice_verdict` | Multi-choice verdict with the selected option. None for binary criteria. |
| `reason` | Explanation for the verdict from the LLM judge. |
score_value (property)¶
Get the score contribution (0-1) for this criterion.
For binary criteria: 1.0 if MET, 0.0 otherwise. For multi-choice: the value of the selected option.
is_na (property)¶
Check if this criterion was marked NA or CANNOT_ASSESS.
Returns True for:

- Binary criteria with a CANNOT_ASSESS verdict
- Multi-choice criteria with an NA option selected
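A minimal sketch of inspecting per-criterion contributions on a grading result, skipping criteria that could not be assessed (`result` as in the Quick Example):

```python
for cr in result.report:
    if cr.is_na:
        continue  # skip CANNOT_ASSESS / NA criteria
    # score_value is the 0-1 contribution described above
    print(f"{cr.requirement}: score_value={cr.score_value:.2f}, weight={cr.weight}")
```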
CriterionJudgment¶
Structured output from LLM judge for a single criterion.
CriterionJudgment¶
Bases: BaseModel
Structured LLM output for single criterion evaluation.
Used with LiteLLM's response_format parameter to ensure type-safe, validated responses from the judge LLM.
Note: This is separate from CriterionReport because:

- CriterionReport includes 'weight' and 'requirement' fields that come from the rubric, not from the LLM
- The LLM only outputs the judgment (status + explanation)
Rubric¶
Collection of criteria for evaluation.
Rubric¶
Rubric(rubric: list[Criterion])
A rubric is a list of criteria used to evaluate text outputs.
Each criterion has a weight and requirement. Use the grade() method to evaluate text against this rubric using a grader.
Source code in src/autorubric/rubric.py
grade (async)¶
grade(to_grade: ToGradeInput, grader: Grader, query: str | None = None, reference_submission: str | None = None) -> EvaluationReport
Grade text against this rubric using a grader.
| PARAMETER | DESCRIPTION |
|---|---|
| `to_grade` | The text to evaluate. Either a plain string or a ThinkingOutputDict with separate `thinking` and `output` sections. |
| `grader` | The grader to use. REQUIRED - must be provided. Configure length_penalty and normalize on the grader if needed. |
| `query` | Optional input/query that prompted the response. |
| `reference_submission` | Optional exemplar response for grading context. When present, provides calibration for the grader. |

| RAISES | DESCRIPTION |
|---|---|
| `TypeError` | If grader is not provided. |
Source code in src/autorubric/rubric.py
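A sketch of a graded call that supplies the optional context arguments; the query and reference strings here are placeholder values:

```python
result = await rubric.grade(
    to_grade=model_response,  # the string (or thinking/output dict) being evaluated
    grader=grader,
    query="What is 2 + 2?",                  # the prompt that produced the response
    reference_submission="2 + 2 equals 4.",  # exemplar answer used for calibration
)
```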
validate_and_create_criteria (staticmethod)¶
validate_and_create_criteria(data: list[dict[str, Any]] | dict[str, Any]) -> list[Criterion]
Validate and create Criterion objects from raw data.
Supports multiple formats:

- Flat list of criteria
- List of sections with criteria
- Dict with 'sections' key containing a list of sections
- Dict with 'rubric' key containing sections
Source code in src/autorubric/rubric.py
from_yaml (classmethod)¶
from_yaml(yaml_string: str) -> Rubric
Parse rubric from a YAML string.
Source code in src/autorubric/rubric.py
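A minimal sketch, assuming the YAML mirrors the flat-list format accepted by `Rubric.from_dict` in the Quick Example:

```python
yaml_text = """
- weight: 10.0
  requirement: States the correct answer
- weight: 5.0
  requirement: Explains reasoning clearly
- weight: -15.0
  requirement: Contains factual errors
"""
rubric = Rubric.from_yaml(yaml_text)
```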
from_json (classmethod)¶
from_json(json_string: str) -> Rubric
Parse rubric from a JSON string.
Source code in src/autorubric/rubric.py
from_file (classmethod)¶
from_file(source: str | Any) -> Rubric
Load rubric from a file path or file-like object, auto-detecting format.
Source code in src/autorubric/rubric.py
EvaluationReport¶
Complete grading result with score and per-criterion reports.
EvaluationReport¶
Bases: BaseModel
Final evaluation result with score and per-criterion reports.
For training use cases, set normalize=False in the grader to get raw weighted sums instead of normalized 0-1 scores.
| ATTRIBUTE | DESCRIPTION |
|---|---|
| `score` | The final score (0-1 if normalized, raw weighted sum otherwise). |
| `raw_score` | The unnormalized weighted sum. |
| `llm_raw_score` | The original score returned by the LLM (same as raw_score). |
| `report` | Per-criterion breakdown with verdicts and explanations. |
| `cannot_assess_count` | Number of criteria with a CANNOT_ASSESS verdict. |
| `error` | Optional error message if grading failed (e.g., JSON parse error). When set, score defaults to 0.0. Training pipelines should filter these out. |
| `token_usage` | Aggregated token usage across all LLM calls made during grading. For CriterionGrader, this is the sum across all criterion evaluations. |
| `completion_cost` | Total cost in USD for all LLM calls made during grading. Calculated using LiteLLM's completion_cost() function. |
Example
```python
result = await rubric.grade(to_grade=response, grader=grader)
print(f"Score: {result.score:.2f}")
if result.cannot_assess_count:
    print(f"Could not assess {result.cannot_assess_count} criteria")
if result.token_usage:
    print(f"Tokens: {result.token_usage.total_tokens}")
if result.completion_cost:
    print(f"Cost: ${result.completion_cost:.6f}")
```
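For training pipelines that want raw weighted sums instead of normalized scores, a minimal sketch, assuming CriterionGrader accepts the `normalize` flag mentioned in the grade() documentation:

```python
# Disable normalization on the grader (exact parameter placement is an assumption)
raw_grader = CriterionGrader(
    llm_config=LLMConfig(model="openai/gpt-4.1-mini"),
    normalize=False,
)
result = await rubric.grade(to_grade=response, grader=raw_grader)
print(result.score, result.raw_score)  # equal when normalization is disabled
```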
TokenUsage¶
Token usage tracking for LLM calls.
TokenUsage (dataclass)¶
TokenUsage(prompt_tokens: int = 0, completion_tokens: int = 0, total_tokens: int = 0, cache_creation_input_tokens: int = 0, cache_read_input_tokens: int = 0)
Token usage statistics from LLM API calls.
| ATTRIBUTE | DESCRIPTION |
|---|---|
| `prompt_tokens` | Number of tokens in the prompt/input. |
| `completion_tokens` | Number of tokens in the completion/output. |
| `total_tokens` | Total tokens (prompt + completion). |
| `cache_creation_input_tokens` | Tokens used to create cache entries (Anthropic). |
| `cache_read_input_tokens` | Tokens read from cache (Anthropic). |
Example
```python
usage = TokenUsage(prompt_tokens=100, completion_tokens=50, total_tokens=150)
print(f"Total tokens: {usage.total_tokens}")
# Total tokens: 150
```
ToGradeInput¶
Type alias for the input format accepted by rubric.grade().
ToGradeInput (module-attribute)¶
ToGradeInput = str | ThinkingOutputDict
Union type for to_grade parameter.
Accepts either a plain string or a dict with thinking/output keys.
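Both accepted forms, shown as a short sketch (the strings are placeholders):

```python
# Plain string submission
await rubric.grade(to_grade="The answer is 4.", grader=grader)

# Dict with separate thinking and output sections (ThinkingOutputDict)
await rubric.grade(
    to_grade={
        "thinking": "2 + 2 is basic arithmetic...",
        "output": "The answer is 4.",
    },
    grader=grader,
)
```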
ThinkingOutputDict¶
TypedDict for responses with separate thinking and output sections.
ThinkingOutputDict¶
Dict format for submissions with separate thinking and output sections.
Both fields are optional to allow partial submissions or gradual construction. When used with length penalty, missing fields are treated as empty strings.
ScaleType¶
Enum for criterion scale types (binary, ordinal, nominal).
ScaleType (module-attribute)¶
Scale type for multi-choice criteria.
- ordinal: Options have inherent order (e.g., 1-4 satisfaction scale)
- nominal: Options are unordered categories (e.g., "too few", "too many", "just right")
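A sketch of a nominal multi-choice criterion using the categories above; the requirement text, option values, and the import location of CriterionOption are assumptions for illustration:

```python
from autorubric import Criterion, CriterionOption  # import path assumed

portion = Criterion(
    name="portion_size",
    weight=10.0,
    requirement="Is the amount of detail appropriate?",
    options=[
        CriterionOption(label="too few", value=0.0),    # illustrative values
        CriterionOption(label="just right", value=1.0),
        CriterionOption(label="too many", value=0.0),
    ],
    scale_type="nominal",
)
```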