Object Tracker


This API Reference is based on DeGirum Tools version 0.16.6.

Object Tracker Analyzer Module Overview

Implements multi-object tracking using the BYTETrack algorithm.

Key Features

  • Persistent Object Identity: Maintains consistent track IDs across frames

  • Class Filtering: Optionally tracks only specified object classes

  • Track Lifecycle Management: Handles track creation, updating, and removal

  • Trail Visualization: Records and displays object movement history

  • Track Retention: Configurable buffer for handling temporary object disappearances

  • Visual Overlay: Displays track IDs and optional trails on frames

  • Integration Support: Provides track IDs for downstream analyzers (e.g., zone counting, line crossing)

Typical Usage

  1. Create an ObjectTracker instance with desired tracking parameters

  2. Process each frame's detection results through the tracker

  3. Access track IDs and trails from the augmented results

  4. Optionally visualize tracking results using the annotate method

  5. Use track IDs in downstream analyzers for advanced analytics
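The steps above can be sketched as a per-frame loop. This is an illustrative outline only: the `track_video` helper, the `frames` iterable, and the `detect` callback are hypothetical stand-ins, and only the `analyze`/`annotate` calls correspond to the API documented on this page.

```python
# Hypothetical helper sketching the typical per-frame tracking flow.
# `detect` stands in for model inference and is not part of DeGirum Tools.
def track_video(tracker, frames, detect):
    for frame in frames:
        result = detect(frame)                       # detections with boxes and scores
        tracker.analyze(result)                      # augments detections with track IDs
        annotated = tracker.annotate(result, frame)  # optional visual overlay
        yield result, annotated
```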

Integration Notes

  • Requires detection results with bounding boxes and confidence scores

  • Track IDs are added to detection results as track_id field

  • Trail information is stored in trails and trail_classes dictionaries

  • Works effectively with zone counting and line crossing analyzers

  • Supports both frame-based and time-based track retention

Key Classes

  • STrack: Internal class representing a single tracked object with state

  • ObjectTracker: Main analyzer class that processes detections and maintains tracks

Configuration Options

  • class_list: Filter tracking to specific object classes

  • track_thresh: Confidence threshold for initiating new tracks

  • track_buffer: Frames to retain tracks after object disappearance

  • match_thresh: IoU threshold for matching detections to existing tracks

  • trail_depth: Number of recent positions to keep for trail visualization

  • show_overlay: Enable/disable visual annotations

  • annotation_color: Customize overlay appearance
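To make `match_thresh` concrete, the sketch below computes the IoU between a track's predicted box and a new detection and applies the threshold. It is an illustrative helper only, not the tracker's internal matching code (which solves an assignment problem over all track/detection pairs).

```python
def iou(a, b):
    """IoU of two boxes in (x_min, y_min, x_max, y_max) format."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def matches(track_box, det_box, match_thresh=0.8):
    """A detection below the IoU threshold starts a new track instead."""
    return iou(track_box, det_box) >= match_thresh
```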

Classes

STrack

Represents a single tracked object in the multi-object tracking system.

Each STrack holds the object's bounding box state, unique track identifier, detection confidence score, and tracking status (e.g., new, tracked, lost, removed). A Kalman filter is used internally to predict and update the object's state across frames.

Tracks are created when new objects are detected, updated when detections are matched to existing tracks, and can be reactivated if a lost track matches a new detection. This class provides methods to manage the lifecycle of a track (activation, update, reactivation) and utility functions for bounding box format conversion.
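This lifecycle can be pictured as a small state machine. The sketch below is a simplified illustration: the four states and the `track_buffer` expiry rule mirror this page, but the `step` function and its signature are hypothetical, not part of STrack.

```python
from enum import Enum

class TrackState(Enum):
    NEW = 0
    TRACKED = 1
    LOST = 2
    REMOVED = 3

def step(matched, frames_lost, track_buffer=30):
    """Advance one frame; returns (next_state, updated_frames_lost)."""
    if matched:
        return TrackState.TRACKED, 0  # activation, update, or reactivation
    frames_lost += 1                  # unmatched: consume retention budget
    if frames_lost > track_buffer:
        return TrackState.REMOVED, frames_lost
    return TrackState.LOST, frames_lost
```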

Attributes:

| Name | Type | Description |
|------|------|-------------|
| track_id | int | Unique ID for this track. |
| is_activated | bool | Whether the track has been activated (confirmed) at least once. |
| state | _TrackState | Current state of the track (New, Tracked, Lost, or Removed). |
| start_frame | int | Frame index when this track was first activated. |
| frame_id | int | Frame index of the last update for this track (last seen frame). |
| tracklet_len | int | Number of frames this track has been in the tracked state. |
| score | float | Detection confidence score for the most recent observation of this track. |
| obj_idx | int | Index of this object's detection in the frame's results list (used for internal bookkeeping). |

Attributes

tlbr (property)

Returns the track's bounding box in corner format (x_min, y_min, x_max, y_max).

Returns:

| Type | Description |
|------|-------------|
| ndarray | np.ndarray: Bounding box in (x_min, y_min, x_max, y_max) format. |

tlwh (property)

Returns the track's current bounding box in (x, y, w, h) format.

Returns:

| Type | Description |
|------|-------------|
| ndarray | np.ndarray: Bounding box where (x, y) is the top-left corner. |

Functions

__init__(tlwh, score, obj_idx, id_counter)

Constructor.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| tlwh | ndarray | Initial bounding box in (x, y, w, h) format, where (x, y) is the top-left corner. | required |
| score | float | Detection confidence score for this object. | required |
| obj_idx | int | Index of this object's detection in the current frame's results list. | required |
| id_counter | _IDCounter | Shared counter used to generate globally unique track_id values. | required |

activate(kalman_filter, frame_id)

Activates this track with an initial detection.

Initializes the track's state using the provided Kalman filter, assigns a new track ID, and sets the track status to "Tracked".

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| kalman_filter | _KalmanFilter | Kalman filter to associate with this track. | required |
| frame_id | int | Frame index at which the track is initialized. | required |

re_activate(new_track, frame_id, new_id=False)

Reactivates a track that was previously lost, using a new detection.

Updates the track's state with the new detection's information and sets the state to "Tracked". If new_id is True, a new track ID is assigned; otherwise, it retains the original ID.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| new_track | STrack | New track (detection) to merge into this lost track. | required |
| frame_id | int | Current frame index at which the track is reactivated. | required |
| new_id | bool | Whether to assign a new ID to the track. Defaults to False. | False |

tlbr_to_tlwh(tlbr) (staticmethod)

Converts a bounding box from (top-left, bottom-right) corner format to (top-left, width, height) format.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| tlbr | ndarray | Bounding box in (x1, y1, x2, y2) format. | required |

Returns:

| Type | Description |
|------|-------------|
| ndarray | np.ndarray: Bounding box in (x, y, w, h) format. |

tlwh_to_xyah(tlwh) (staticmethod)

Converts a bounding box from (top-left x, y, width, height) to (center x, center y, aspect ratio, height).

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| tlwh | ndarray | Bounding box in (x, y, w, h) format. | required |

Returns:

| Type | Description |
|------|-------------|
| ndarray | np.ndarray: Bounding box in (center x, center y, aspect ratio, height) format. |
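Both conversions follow directly from the documented formats. A minimal NumPy sketch (assuming the common w/h convention for the aspect ratio, which this page does not spell out):

```python
import numpy as np

def tlbr_to_tlwh(tlbr):
    """(x1, y1, x2, y2) -> (x, y, w, h) with (x, y) the top-left corner."""
    x1, y1, x2, y2 = np.asarray(tlbr, dtype=float)
    return np.array([x1, y1, x2 - x1, y2 - y1])

def tlwh_to_xyah(tlwh):
    """(x, y, w, h) -> (center x, center y, aspect ratio w/h, h)."""
    x, y, w, h = np.asarray(tlwh, dtype=float)
    return np.array([x + w / 2, y + h / 2, w / h, h])
```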

update(new_track, frame_id)

Updates this track with a new matched detection.

Incorporates the detection's bounding box and score into this track's state, updates the Kalman filter prediction, and increments the track length. The track state is set to "Tracked".

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| new_track | STrack | The new detection track that matched this track. | required |
| frame_id | int | Current frame index for the update. | required |

ObjectTracker

Bases: ResultAnalyzerBase

Analyzer that tracks objects across frames in a video stream.

This analyzer assigns persistent IDs to detected objects, allowing them to be tracked from frame to frame. It uses the BYTETrack multi-object tracking algorithm to match current detections with existing tracks and manage track life cycles (creation of new tracks, updating of existing ones, and removal of lost tracks). Optionally, tracking can be restricted to specific object classes via the class_list parameter.

After each call to analyze(), the input result's detections are augmented with a "track_id" field for object identity. If a trail length is specified (non-zero trail_depth), the result will also contain trails and trail_classes dictionaries: trails maps each track ID to a list of recent bounding box coordinates (the object's trail), and trail_classes maps each track ID to the object's class label. These facilitate drawing object paths and labeling them.
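The trails bookkeeping can be pictured as a bounded history per track ID. The sketch below is illustrative only, not the analyzer's internal code; `make_trails` and `record` are hypothetical helpers:

```python
from collections import defaultdict, deque

def make_trails(trail_depth):
    """Map each track ID to at most trail_depth recent positions."""
    return defaultdict(lambda: deque(maxlen=trail_depth))

def record(trails, trail_classes, track_id, box, label):
    """Append a new position and remember the track's class label."""
    trails[track_id].append(box)
    trail_classes[track_id] = label
```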

Functionality

  • Unique ID assignment: Provides a unique ID for each object and maintains that ID across frames.

  • Class filtering: Ignores detections whose class is not in the specified class_list.

  • Track retention buffer: Continues to track objects for track_buffer frames after they disappear.

  • Trajectory history: Keeps a history of each object's movement up to trail_depth frames long.

  • Overlay support: Can overlay track IDs and trails on frames for visualization.

Typical usage involves calling analyze() on each frame's detection results to update tracks, then annotate() to visualize or output the tracked results. For instance, in a video processing loop, use tracker.analyze(detections) followed by tracker.annotate(detections, frame) to maintain and display object tracks.

Functions

__init__(*, class_list=None, track_thresh=0.25, track_buffer=30, match_thresh=0.8, anchor_point=AnchorPoint.BOTTOM_CENTER, trail_depth=0, show_overlay=True, annotation_color=None)

Constructor.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| class_list | List[str] | List of object classes to track. If None, all detected classes are tracked. | None |
| track_thresh | float | Detection confidence threshold for initiating a new track. | 0.25 |
| track_buffer | int | Number of frames to keep a lost track before removing it. | 30 |
| match_thresh | float | Intersection-over-union (IoU) threshold for matching detections to existing tracks. | 0.8 |
| anchor_point | AnchorPoint | Anchor point on the bounding box used for trail visualization. | BOTTOM_CENTER |
| trail_depth | int | Number of recent positions to keep for each track's trail. Set 0 to disable trail tracking. | 0 |
| show_overlay | bool | If True, annotate the image; if False, return the original image. | True |
| annotation_color | Tuple[int, int, int] | RGB tuple to use for annotations. If None, a contrasting color is chosen automatically. | None |

analyze(result)

Analyzes a detection result and maintains object tracks across frames.

Matches the current frame's detections to existing tracks, assigns track IDs to each detection, and updates or creates tracks as necessary. If trail_depth was set, this method also updates each track's trail of past positions.

The input result is updated in-place. Each detection in result.results receives a "track_id" identifying its track. If trails are enabled, result.trails and result.trail_classes are updated to reflect the current active tracks.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| result | InferenceResults | Model inference result for the current frame, containing detected object bounding boxes and classes. | required |
