Model Evaluation Support
Module Overview
This module provides base classes and utilities for model evaluation, including performance metric calculation, ground truth comparison, and standardized result reporting across a range of evaluation scenarios and metrics.
Key Features
Base Evaluator Class: Abstract base class for model evaluators
YAML Configuration: Support for evaluator configuration via YAML files
Flexible Evaluation: Support for different evaluation metrics and scenarios
Result Reporting: Standardized evaluation result reporting
Ground Truth Integration: Support for comparing model outputs with ground truth
Typical Usage
Create a custom evaluator by subclassing ModelEvaluatorBase (see the sketch after this list)
Configure the evaluator using YAML or constructor parameters
Run evaluation on test datasets
Analyze and report evaluation results
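A minimal sketch tying these steps together. It assumes ModelEvaluatorBase is importable from degirum_tools (the import path is an assumption); the class name FolderAccuracyEvaluator, its score_threshold attribute, and the metric logic are illustrative placeholders, not part of the documented API:

```python
# Hedged sketch of a custom evaluator built on ModelEvaluatorBase.
from degirum_tools import ModelEvaluatorBase  # import path is an assumption


class FolderAccuracyEvaluator(ModelEvaluatorBase):
    # Hypothetical evaluator parameter; a matching key passed via **kwargs
    # is assigned to the instance by the base-class __init__.
    score_threshold = 0.5

    def evaluate(self, image_folder_path, ground_truth_annotations_path, max_images=0):
        # Placeholder logic: iterate over images, run self.model on each one,
        # compare predictions with the COCO-format ground truth, and
        # accumulate algorithm-specific statistics.
        stats = []
        return stats
```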
Integration Notes
Works with DeGirum PySDK models (see the loading sketch after this list)
Supports standard evaluation metrics
Handles various input formats
Provides extensible evaluation framework
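For context, a hedged sketch of loading a PySDK model to hand to an evaluator. degirum.load_model, the "@local" host address, and the degirum/public zoo are standard PySDK usage; the model name is a placeholder:

```python
import degirum as dg

# Load a model from the public DeGirum model zoo for local inference.
model = dg.load_model(
    model_name="yolov8n_coco--640x640_float_openvino_cpu_1",  # hypothetical name
    inference_host_address="@local",  # run inference on the local machine
    zoo_url="degirum/public",         # public DeGirum model zoo
)
```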
Key Classes
ModelEvaluatorBase: Base class for model evaluators
Configuration Options
Model parameters
Evaluation metrics
Dataset paths
Ground truth format
Classes
ModelEvaluatorBase
Bases: ABC
Base class for model evaluators.
This abstract class initializes a model object, loads configuration parameters and defines the interface for performing evaluation.
Parameters:
model (Model): Model instance to evaluate. Required.
**kwargs (Any): Arbitrary model or evaluator parameters. Keys matching model attributes are applied directly to the model; remaining keys are assigned to the evaluator instance if such attributes exist. Default: {}.
Attributes:
model (Model): The model being evaluated.
Functions
__init__(model, **kwargs)
Initialize the evaluator.
Parameters:
model (Model): PySDK model object. Required.
**kwargs (Any): Arbitrary model or evaluator parameters. Keys must either match model attributes or attributes of ModelEvaluatorBase. Default: {}.
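A hedged example of this kwargs contract, reusing the hypothetical FolderAccuracyEvaluator from the earlier sketch. output_confidence_threshold is assumed to be a PySDK model attribute; score_threshold is the hypothetical evaluator attribute:

```python
# Mixed keyword arguments: keys matching model attributes are applied to
# the model, keys matching evaluator attributes are set on the evaluator.
evaluator = FolderAccuracyEvaluator(
    model,
    output_confidence_threshold=0.3,  # forwarded to the model (assumed attribute)
    score_threshold=0.7,              # set on the evaluator (hypothetical attribute)
)
```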
evaluate(image_folder_path, ground_truth_annotations_path, max_images=0)
abstractmethod
Evaluate the model on a dataset.
Parameters:
image_folder_path (str): Directory containing evaluation images. Required.
ground_truth_annotations_path (str): Path to the ground truth JSON file in COCO format. Required.
max_images (int): Maximum number of images to process; 0 uses all images. Defaults to 0.
Returns:
list: Evaluation statistics (algorithm specific).
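An illustrative call, reusing the evaluator from the sketches above; both dataset paths and the max_images value are placeholders:

```python
# Run evaluation over a local COCO-style dataset (paths are hypothetical).
stats = evaluator.evaluate(
    image_folder_path="datasets/val2017",                          # placeholder path
    ground_truth_annotations_path="datasets/instances_val2017.json",  # placeholder path
    max_images=100,  # evaluate at most 100 images; 0 would use all
)
print(stats)  # algorithm-specific statistics list
```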
init_from_yaml(model, config_yaml)
classmethod
Construct an evaluator from a YAML file.
Parameters:
model (Model): PySDK model object. Required.
config_yaml (Union[str, TextIOBase]): Path or open stream with evaluator configuration in YAML format. Required.
Returns:
Self: Instantiated evaluator object.
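Since config_yaml also accepts an open text stream, a configuration can be supplied from memory. A minimal sketch, assuming a flat key layout (the real configuration schema may differ) and reusing the hypothetical evaluator class from above:

```python
import io

# In-memory YAML configuration; keys must name model or evaluator
# attributes. Both keys below are illustrative assumptions.
config = io.StringIO(
    "output_confidence_threshold: 0.3\n"  # model parameter (assumed attribute)
    "score_threshold: 0.7\n"              # evaluator parameter (hypothetical)
)
evaluator = FolderAccuracyEvaluator.init_from_yaml(model, config)
```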