Object Selector

DeGirum Tools API Reference Guide. Select relevant objects while running inference.

This API Reference is based on DeGirum Tools version 0.18.0.

Object Selector Analyzer Module Overview

This module provides an analyzer (ObjectSelector) for selecting the top-K detections from object detection results based on various strategies and optional tracking. It enables intelligent filtering of detection results to focus on the most relevant objects.

Key Features

  • Selection Strategies: Select by highest confidence score, largest bounding-box area, or a custom metric

  • Tracking Integration: Uses track_id fields to persist selections across frames with configurable timeout

  • Top-K Selection: Configurable number of objects to select per frame

  • Visual Overlay: Draws bounding boxes for selected objects on images

  • Selection Persistence: Maintains selection state across frames when tracking is enabled

  • Timeout Control: Configurable frame count before removing lost objects from selection
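The core selection step can be pictured with plain Python. The sketch below assumes each detection is a dictionary with `bbox` ([x1, y1, x2, y2]) and `score` fields; the field names and helper functions are illustrative, not the library's internals:

```python
# Minimal sketch of top-K selection by score or bounding-box area.
def bbox_area(det):
    x1, y1, x2, y2 = det["bbox"]
    return max(0.0, x2 - x1) * max(0.0, y2 - y1)

def select_top_k(detections, k=1, strategy="highest_score"):
    # Rank detections by the chosen metric and keep the K best.
    key = {"highest_score": lambda d: d["score"],
           "largest_area": bbox_area}[strategy]
    return sorted(detections, key=key, reverse=True)[:k]

dets = [
    {"bbox": [0, 0, 10, 10], "score": 0.9},
    {"bbox": [0, 0, 50, 50], "score": 0.6},
]
print(select_top_k(dets, k=1, strategy="largest_area")[0]["score"])  # -> 0.6
```

With `strategy="highest_score"` the same call would instead return the 0.9-score detection, which is the distinction the two built-in strategies make.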

Typical Usage

  1. Create an ObjectSelector instance with desired selection parameters

  2. Process each frame's detection results through the selector

  3. Access selected objects from the augmented results

  4. Optionally visualize selected objects using the annotate method

  5. Use selected objects in downstream analyzers for focused processing
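The steps above follow the usual analyze-then-read flow. The sketch below uses a hypothetical stand-in result class to show the shape of that flow; the real result object comes from the DeGirum SDK, and the real analyzer is `ObjectSelector`:

```python
# Hypothetical stand-in for an inference result (the real class is
# provided by the DeGirum SDK).
class FakeResult:
    def __init__(self, results):
        self.results = results  # list of detection dicts

class TopKSelector:
    """Illustrative analyzer that keeps only the top-K detections."""
    def __init__(self, top_k=1):
        self.top_k = top_k

    def analyze(self, result):
        # Modify the result in place, keeping the K highest-score detections.
        result.results = sorted(result.results,
                                key=lambda d: d["score"],
                                reverse=True)[:self.top_k]

res = FakeResult([{"label": "car", "score": 0.7},
                  {"label": "person", "score": 0.95}])
TopKSelector(top_k=1).analyze(res)
print([d["label"] for d in res.results])  # -> ['person']
```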

Integration Notes

  • Works with any detection results containing bounding boxes and confidence scores

  • Optional integration with ObjectTracker for persistent selection across frames

  • Selected objects are marked in the result object for downstream processing

  • Supports both frame-based and tracking-based selection modes

Key Classes

  • ObjectSelector: Main analyzer class that processes detections and maintains selections

  • ObjectSelectionStrategies: Enumeration of available selection strategies

Configuration Options

  • top_k: Number of objects to select per frame

  • selection_strategy: Strategy for ranking objects (by highest confidence score, by largest bounding box area, or by custom metric)

  • use_tracking: Enable/disable tracking-based selection persistence

  • tracking_timeout: Frames to wait before removing lost objects from selection

  • show_overlay: Enable/disable visual annotations

  • annotation_color: Customize overlay appearance

Classes

ObjectSelectionStrategies

Bases: Enum

Enumeration of object selection strategies.

Members

  • CUSTOM_METRIC (int): Selects objects with the highest custom metric value.

  • HIGHEST_SCORE (int): Selects objects with the highest confidence scores.

  • LARGEST_AREA (int): Selects objects with the largest bounding-box area.
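The enumeration can be mirrored in plain Python as follows; the member names match those documented above, but the integer values here are illustrative (the library assigns its own):

```python
from enum import Enum, auto

class ObjectSelectionStrategies(Enum):
    # Members mirror the documented strategies; values are illustrative.
    HIGHEST_SCORE = auto()
    LARGEST_AREA = auto()
    CUSTOM_METRIC = auto()

strategy = ObjectSelectionStrategies.HIGHEST_SCORE
print(strategy.name)  # -> HIGHEST_SCORE
```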

ObjectSelector

Bases: ResultAnalyzerBase

Selects the top-K detected objects per frame based on a specified strategy.

This analyzer examines the detection results for each frame and retains only the top-K detections according to the chosen ObjectSelectionStrategies (e.g., highest confidence score or largest bounding-box area).

When tracking is enabled, it uses object track_id information to continue selecting the same objects across successive frames, removing an object from the selection if it has not appeared for a certain number of frames (the tracking timeout).
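The persistence-with-timeout behavior can be sketched as bookkeeping over `track_id` values. This is a simplified model, not the library's implementation; it assumes each detection carries a `track_id` field produced by an upstream tracker:

```python
# Sketch of tracking-based selection persistence with a frame timeout.
class TrackedSelection:
    def __init__(self, timeout=30):
        self.timeout = timeout
        self.missing = {}  # track_id -> consecutive frames without a match

    def update(self, selected_ids, detections):
        seen = {d["track_id"] for d in detections}
        kept = set()
        for tid in selected_ids:
            if tid in seen:
                self.missing[tid] = 0          # object re-appeared
                kept.add(tid)
            else:
                self.missing[tid] = self.missing.get(tid, 0) + 1
                if self.missing[tid] < self.timeout:
                    kept.add(tid)              # still within the timeout
        return kept

sel = TrackedSelection(timeout=2)
ids = sel.update({1}, [])  # frame 1: track 1 missing, but within timeout
ids = sel.update(ids, [])  # frame 2: timeout reached, track 1 is dropped
print(ids)
```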

Functions

__init__(*, top_k=1, metric_threshold=0.0, selection_strategy=ObjectSelectionStrategies.HIGHEST_SCORE, custom_metric=None, use_tracking=True, tracking_timeout=30, show_overlay=True, annotation_color=None)

Constructor.

Parameters:

  • top_k (int, default 1): Number of objects with the highest metric value to select. When 0, metric_threshold is used instead.

  • metric_threshold (float, default 0.0): Metric value threshold: when top_k is 0, objects whose metric value exceeds this threshold are selected.

  • selection_strategy (ObjectSelectionStrategies, default HIGHEST_SCORE): Strategy for ranking objects.

  • custom_metric (callable, default None): Custom metric function for ranking. Takes a detection dictionary and the inference result and returns a numeric score.

  • use_tracking (bool, default True): Whether to enable tracking-based selection. If True, only objects with a track_id field are selected (requires an ObjectTracker to precede this analyzer in the pipeline).

  • tracking_timeout (int, default 30): Number of frames to wait before removing an object from the selection when it is no longer detected.

  • show_overlay (bool, default True): Whether to draw bounding boxes around selected objects on the output image. If False, the image is passed through unchanged.

  • annotation_color (tuple, default None): RGB color for annotation boxes. When None, the complement of the result overlay color is used.

Raises:

  • ValueError: If an unsupported selection strategy is provided.
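To illustrate the custom_metric contract described above (a function taking a detection dictionary and the inference result and returning a numeric score), here is a hypothetical metric that prefers detections closest to the image center; the heuristic and the hard-coded frame size are purely illustrative:

```python
import math

FRAME_W, FRAME_H = 640, 480  # assumed frame size for this sketch

def center_metric(det, result):
    # Higher score = closer to the frame center.
    x1, y1, x2, y2 = det["bbox"]
    cx, cy = (x1 + x2) / 2, (y1 + y2) / 2
    dist = math.hypot(cx - FRAME_W / 2, cy - FRAME_H / 2)
    return -dist

centered = {"bbox": [300, 220, 340, 260]}   # box centered in the frame
corner = {"bbox": [0, 0, 10, 10]}           # box in the top-left corner
print(center_metric(centered, None) > center_metric(corner, None))  # -> True
```

A function like this would be passed as `custom_metric` together with `selection_strategy=ObjectSelectionStrategies.CUSTOM_METRIC`.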

analyze(result)

Select the top-K objects based on the configured strategy, updating the result.

Uses tracking IDs to update selected objects when tracking is enabled. All other objects not selected are removed from results.

Parameters:

  • result: Model result with detection information. Required.

Returns:

  • None: The result object is modified in-place.

annotate(result, image)

Draw bounding boxes for the selected objects on the image.

Parameters:

  • result: The result containing selected objects. Required.

  • image (ndarray): Image to annotate, shape (H, W, 3) in RGB format. Required.

Returns:

  • ndarray: Annotated image, shape (H, W, 3) in RGB format.
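The kind of annotation annotate() performs can be sketched with a direct 1-pixel rectangle outline on an RGB array; the real method uses the toolkit's drawing utilities, so this helper is illustrative only:

```python
import numpy as np

def draw_box(image, bbox, color=(255, 0, 0)):
    # Draw a 1-pixel rectangle outline on an (H, W, 3) RGB image, in place.
    x1, y1, x2, y2 = (int(v) for v in bbox)
    image[y1:y2 + 1, x1] = color   # left edge
    image[y1:y2 + 1, x2] = color   # right edge
    image[y1, x1:x2 + 1] = color   # top edge
    image[y2, x1:x2 + 1] = color   # bottom edge
    return image

img = np.zeros((100, 100, 3), dtype=np.uint8)
draw_box(img, [10, 10, 40, 40])
print(img[10, 10])  # corner pixel now carries the annotation color
```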
