© 2025 DeGirum Corp.

Object Selector

This API Reference is based on DeGirum Tools version 0.16.6.

Object Selector Analyzer Module Overview

This module provides an analyzer (ObjectSelector) for selecting the top-K detections from object detection results based on various strategies and optional tracking. It enables intelligent filtering of detection results to focus on the most relevant objects.

Key Features

  • Selection Strategies: Supports selecting by highest confidence score or largest bounding-box area

  • Tracking Integration: Uses track_id fields to persist selections across frames with configurable timeout

  • Top-K Selection: Configurable number of objects to select per frame

  • Visual Overlay: Draws bounding boxes for selected objects on images

  • Selection Persistence: Maintains selection state across frames when tracking is enabled

  • Timeout Control: Configurable frame count before removing lost objects from selection

Typical Usage

  1. Create an ObjectSelector instance with desired selection parameters

  2. Process each frame's detection results through the selector

  3. Access selected objects from the augmented results

  4. Optionally visualize selected objects using the annotate method

  5. Use selected objects in downstream analyzers for focused processing
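
The steps above can be sketched with a stand-in for the analyzer (the real class is `degirum_tools.ObjectSelector`, whose constructor and `analyze()` method are documented on this page; the frame results below are hypothetical fixtures shaped as lists of detection dictionaries):

```python
class TopKByScore:
    """Stand-in selector: keeps only the top_k highest-scoring detections."""

    def __init__(self, top_k=1):
        self.top_k = top_k

    def analyze(self, result):
        # Rank detections by confidence and drop everything past top_k.
        result["results"] = sorted(
            result["results"], key=lambda d: d["score"], reverse=True
        )[: self.top_k]


selector = TopKByScore(top_k=1)  # step 1: create with desired parameters
frames = [
    {"results": [{"label": "cat", "score": 0.40}, {"label": "dog", "score": 0.90}]},
    {"results": [{"label": "car", "score": 0.70}]},
]
for result in frames:  # step 2: process each frame's results
    selector.analyze(result)
# step 3: each frame now holds only its single best detection
```

With the real analyzer, the surviving detections stay on the result object, so downstream analyzers (step 5) see only the selected objects.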

Integration Notes

  • Works with any detection results containing bounding boxes and confidence scores

  • Optional integration with ObjectTracker for persistent selection across frames

  • Selected objects are marked in the result object for downstream processing

  • Supports both frame-based and tracking-based selection modes

Key Classes

  • ObjectSelector: Main analyzer class that processes detections and maintains selections

  • ObjectSelectionStrategies: Enumeration of available selection strategies

Configuration Options

  • top_k: Number of objects to select per frame

  • selection_strategy: Strategy for ranking objects (by highest confidence score or by largest bounding box area)

  • use_tracking: Enable/disable tracking-based selection persistence

  • tracking_timeout: Frames to wait before removing lost objects from selection

  • show_overlay: Enable/disable visual annotations

  • annotation_color: Customize overlay appearance

Classes

ObjectSelectionStrategies

Bases: Enum

Enumeration of object selection strategies.

Members

  • HIGHEST_SCORE (int): Selects objects with the highest confidence scores.

  • LARGEST_AREA (int): Selects objects with the largest bounding-box areas.
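
The two members can be read as ranking keys. A minimal sketch of that idea (the member values and the `ranking_key` helper are illustrative, not the shipped implementation; only the member names come from the reference above):

```python
from enum import Enum


class ObjectSelectionStrategies(Enum):
    # Values are illustrative; only the member names are documented.
    HIGHEST_SCORE = 1
    LARGEST_AREA = 2


def ranking_key(detection, strategy):
    """Return the value used to rank a detection under the given strategy."""
    if strategy is ObjectSelectionStrategies.HIGHEST_SCORE:
        return detection["score"]
    # LARGEST_AREA: rank by bounding-box area, assuming (x1, y1, x2, y2) boxes
    x1, y1, x2, y2 = detection["bbox"]
    return (x2 - x1) * (y2 - y1)


det = {"score": 0.8, "bbox": (10, 10, 30, 50)}
```

Ranking by a single scalar key is what makes top-K selection a plain `sorted(..., reverse=True)[:k]` over the detection list.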

ObjectSelector

Bases: ResultAnalyzerBase

Selects the top-K detected objects per frame based on a specified strategy.

This analyzer examines the detection results for each frame and retains only the top-K detections according to the chosen ObjectSelectionStrategies (e.g., highest confidence score or largest bounding-box area).

When tracking is enabled, it uses object track_id information to continue selecting the same objects across successive frames, removing an object from the selection if it has not appeared for a certain number of frames (the tracking timeout).
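
The timeout bookkeeping described above can be sketched as follows. This is an assumption-level re-creation, not the shipped code: selected objects are keyed by track_id and carry a missed-frame counter:

```python
def update_selection(selected, detections, tracking_timeout):
    """selected maps track_id -> frames missed; prune objects lost too long."""
    seen = {d["track_id"] for d in detections}
    for tid in list(selected):
        if tid in seen:
            selected[tid] = 0  # object reappeared: reset its counter
        else:
            selected[tid] += 1  # missed another frame
            if selected[tid] > tracking_timeout:
                del selected[tid]  # exceeded the timeout: deselect


selected = {7: 0}  # object with track_id 7 is currently selected
update_selection(selected, [], tracking_timeout=1)  # missed once: kept
update_selection(selected, [], tracking_timeout=1)  # missed twice: removed
```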

Classes

_SelectedObject

dataclass

Selected object data structure.

Attributes:

  • detection (dict): The detection result dictionary.

  • counter (int): Frames since the object was last seen, used to decide removal.

Functions

__init__(*, ...)

__init__(*, top_k=1, selection_strategy=ObjectSelectionStrategies.HIGHEST_SCORE, use_tracking=True, tracking_timeout=30, show_overlay=True, annotation_color=None)

Constructor.

Parameters:

  • top_k (int): Number of objects to select. Default: 1.

  • selection_strategy (ObjectSelectionStrategies): Strategy for ranking objects. Default: ObjectSelectionStrategies.HIGHEST_SCORE.

  • use_tracking (bool): Whether to enable tracking-based selection. If True, only objects with a track_id field are selected (requires an ObjectTracker to precede this analyzer in the pipeline). Default: True.

  • tracking_timeout (int): Number of frames to wait before removing an object from selection if it is not detected. Default: 30.

  • show_overlay (bool): Whether to draw bounding boxes around selected objects on the output image. If False, the image is passed through unchanged. Default: True.

  • annotation_color (tuple): RGB color for annotation boxes. Default: None (uses the complement of the result overlay color).

Raises:

  • ValueError: If an unsupported selection strategy is provided.

analyze(result)

Select the top-K objects based on the configured strategy, updating the result.

Uses tracking IDs to update selected objects when tracking is enabled; objects that are not selected are removed from the results.

Parameters:

  • result (InferenceResults): Model result with detection information. Required.

Returns:

  • None: The result object is modified in place.
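
As a sketch of the in-place contract in the non-tracking case, the detection list inside the result might be pruned like this (the list-of-dicts shape is an assumption about how detections are stored on the result object):

```python
def select_top_k_inplace(detections, top_k, key):
    """Prune detections in place, keeping the top_k best in original order."""
    keep = sorted(detections, key=key, reverse=True)[:top_k]
    detections[:] = [d for d in detections if d in keep]  # mutate, don't rebind


dets = [
    {"label": "car", "score": 0.55},
    {"label": "person", "score": 0.91},
    {"label": "dog", "score": 0.78},
]
select_top_k_inplace(dets, 2, key=lambda d: d["score"])
# dets now holds only the two highest-scoring detections, in original order
```

Mutating the list via slice assignment (rather than rebinding the name) is what lets callers holding a reference to the same result see the pruned selection.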
