Object Tracker
Implements multi-object tracking using the BYTETrack algorithm.
Key Features
Persistent Object Identity: Maintains consistent track IDs across frames
Class Filtering: Optionally tracks only specified object classes
Track Lifecycle Management: Handles track creation, updating, and removal
Trail Visualization: Records and displays object movement history
Track Retention: Configurable buffer for handling temporary object disappearances
Visual Overlay: Displays track IDs and optional trails on frames
Integration Support: Provides track IDs for downstream analyzers (e.g., zone counting, line crossing)
Typical Usage
Create an ObjectTracker instance with the desired tracking parameters
Process each frame's detection results through the tracker
Access track IDs and trails from the augmented results
Optionally visualize tracking results using the annotate method
Use track IDs in downstream analyzers for advanced analytics (see the sketch after this list)
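A minimal end-to-end sketch of these steps is shown below. It assumes ObjectTracker is importable from degirum_tools and that a model inference loop yields per-frame detection results together with their frames; those surrounding pieces are placeholders, not part of this reference.

```python
# Usage sketch: import path, model, and frame source are assumptions.
from degirum_tools import ObjectTracker

tracker = ObjectTracker(
    class_list=["person"],  # track only this class (None would track everything)
    track_thresh=0.25,      # confidence needed to start a new track
    track_buffer=30,        # keep lost tracks for 30 frames before removal
    trail_depth=20,         # remember the last 20 positions per track
)

for frame, detections in frame_results:  # placeholder: (image, InferenceResults) pairs
    tracker.analyze(detections)                      # adds "track_id" to each detection
    annotated = tracker.annotate(detections, frame)  # overlays IDs and trails on the frame
    # display or save `annotated` here
```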
Integration Notes
Requires detection results with bounding boxes and confidence scores
Track IDs are added to detection results as a track_id field
Trail information is stored in the trails and trail_classes dictionaries (see the access sketch after this list)
Works effectively with zone counting and line crossing analyzers
Supports both frame-based and time-based track retention
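To make the fields above concrete, a short access sketch follows. It assumes result has already been processed by analyze(), that each detection is a dictionary exposing its class under "label", and that trails were enabled with a non-zero trail_depth.

```python
# Reading the tracker's augmented output (sketch; `result` was analyzed already).
for det in result.results:
    tid = det.get("track_id")          # persistent ID assigned by the tracker
    print(det.get("label"), "->", tid)

for tid, trail in result.trails.items():   # per-track position history
    label = result.trail_classes[tid]      # class label for this track
    print(f"track {tid} ({label}): {len(trail)} recent positions")
```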
Key Classes
STrack: Internal class representing a single tracked object with state
ObjectTracker: Main analyzer class that processes detections and maintains tracks
Configuration Options
class_list: Filter tracking to specific object classes
track_thresh: Confidence threshold for initiating new tracks
track_buffer: Frames to retain tracks after object disappearance
match_thresh: IoU threshold for matching detections to existing tracks
trail_depth: Number of recent positions to keep for trail visualization
show_overlay: Enable/disable visual annotations
annotation_color: Customize overlay appearance
STrack
Represents a single tracked object in the multi-object tracking system.
Each STrack holds the object's bounding box state, unique track identifier, detection confidence score, and tracking status (e.g., new, tracked, lost, removed). A Kalman filter is used internally to predict and update the object's state across frames.
Tracks are created when new objects are detected, updated when detections are matched to existing tracks, and can be reactivated if a lost track matches a new detection. This class provides methods to manage the lifecycle of a track (activation, update, reactivation) and utility functions for bounding box format conversion.
Attributes:
track_id (int): Unique ID for this track.
is_activated (bool): Whether the track has been activated (confirmed) at least once.
state (_TrackState): Current state of the track (New, Tracked, Lost, or Removed).
start_frame (int): Frame index when this track was first activated.
frame_id (int): Frame index of the last update for this track (last seen frame).
tracklet_len (int): Number of frames this track has been in the tracked state.
score (float): Detection confidence score for the most recent observation of this track.
obj_idx (int): Index of this object's detection in the frame's results list (used for internal bookkeeping).
tlbr (property)
Returns the track's bounding box in corner format (x_min, y_min, x_max, y_max).
Returns:
np.ndarray: Bounding box in (x_min, y_min, x_max, y_max) format.
tlwh (property)
Returns the track's current bounding box in (x, y, w, h) format.
Returns:
np.ndarray: Bounding box where (x, y) is the top-left corner.
__init__(tlwh, score, obj_idx, id_counter)
Constructor.
Parameters:
tlwh (ndarray, required): Initial bounding box in (x, y, w, h) format, where (x, y) is the top-left corner.
score (float, required): Detection confidence score for this object.
obj_idx (int, required): Index of this object's detection in the current frame's results list.
id_counter (_IDCounter, required): Shared counter used to generate globally unique track_id values.
activate(kalman_filter, frame_id)
Activates this track with an initial detection.
Initializes the track's state using the provided Kalman filter, assigns a new track ID, and sets the track status to "Tracked".
Parameters:
kalman_filter (_KalmanFilter, required): Kalman filter to associate with this track.
frame_id (int, required): Frame index at which the track is initialized.
re_activate(new_track, frame_id, new_id=False)
Reactivates a track that was previously lost, using a new detection.
Updates the track's state with the new detection's information and sets the state to "Tracked". If new_id is True, a new track ID is assigned; otherwise, it retains the original ID.
Parameters:
new_track (required): New track (detection) to merge into this lost track.
frame_id (int, required): Current frame index at which the track is reactivated.
new_id (bool, default False): Whether to assign a new ID to the track. Defaults to False.
tlbr_to_tlwh(tlbr) (staticmethod)
Converts bounding box from (top-left, bottom-right) to (top-left, width, height).
Parameters:
tlbr (ndarray, required): Bounding box in (x1, y1, x2, y2) format.
Returns:
np.ndarray: Bounding box in (x, y, w, h) format.
tlwh_to_xyah(tlwh) (staticmethod)
Converts bounding box from (top-left x, y, width, height) to (center x, y, aspect ratio, height).
Parameters:
tlwh (ndarray, required): Bounding box in (x, y, w, h) format.
Returns:
np.ndarray: Bounding box in (center x, y, aspect ratio, height) format.
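For reference, the two static conversions documented above reduce to simple coordinate arithmetic. The sketch below reproduces that arithmetic with NumPy; it is illustrative, not the library's source code.

```python
import numpy as np

def tlbr_to_tlwh(tlbr):
    """(x1, y1, x2, y2) -> (x, y, w, h): width/height are corner differences."""
    ret = np.asarray(tlbr, dtype=float).copy()
    ret[2:] -= ret[:2]
    return ret

def tlwh_to_xyah(tlwh):
    """(x, y, w, h) -> (center x, center y, aspect ratio w/h, height)."""
    ret = np.asarray(tlwh, dtype=float).copy()
    ret[:2] += ret[2:] / 2   # shift top-left corner to box center
    ret[2] /= ret[3]         # aspect ratio = width / height
    return ret

# A 50x100 box with its top-left corner at (10, 20):
print(tlbr_to_tlwh([10, 20, 60, 120]))   # -> [ 10.  20.  50. 100.]
print(tlwh_to_xyah([10, 20, 50, 100]))   # -> [ 35.  70.   0.5 100.]
```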
update(new_track, frame_id)
Updates this track with a new matched detection.
Incorporates the detection's bounding box and score into this track's state, updates the Kalman filter prediction, and increments the track length. The track state is set to "Tracked".
Parameters:
new_track (required): The new detection track that matched this track.
frame_id (int, required): Current frame index for the update.
ObjectTracker
Analyzer that tracks objects across frames in a video stream.
This analyzer assigns persistent IDs to detected objects, allowing them to be tracked from frame to frame. It uses the BYTETrack multi-object tracking algorithm to match current detections with existing tracks and manage track life cycles (creation of new tracks, updating of existing ones, and removal of lost tracks). Optionally, tracking can be restricted to specific object classes via the class_list parameter.
After each call to analyze(), the input result's detections are augmented with a "track_id" field for object identity. If a trail length is specified (non-zero trail_depth), the result will also contain trails and trail_classes dictionaries: trails maps each track ID to a list of recent bounding box coordinates (the object's trail), and trail_classes maps each track ID to the object's class label. These facilitate drawing object paths and labeling them.
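The annotate() method draws these trails for you; the sketch below only illustrates the data layout by drawing trails by hand with OpenCV. It assumes each trail entry reduces to an (x, y) point at the configured anchor_point; if entries are full boxes in your version, convert them to points first.

```python
import numpy as np
import cv2

# `result` has been processed by tracker.analyze(); `frame` is the image to draw on.
for tid, trail in result.trails.items():
    pts = np.asarray(trail, dtype=np.int32).reshape(-1, 1, 2)  # assumed (x, y) points
    cv2.polylines(frame, [pts], isClosed=False, color=(0, 255, 0), thickness=2)
    label = f"{result.trail_classes[tid]} #{tid}"
    cv2.putText(frame, label, tuple(int(v) for v in pts[-1, 0]),
                cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)
```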
Functionality
Unique ID assignment: Provides a unique ID for each object and maintains that ID across frames.
Class filtering: Ignores detections whose class is not in the specified class_list.
Track retention buffer: Continues to track objects for track_buffer frames after they disappear.
Trajectory history: Keeps a history of each object's movement up to trail_depth frames long.
Overlay support: Can overlay track IDs and trails on frames for visualization.
Typical usage involves calling analyze() on each frame's detection results to update tracks, then annotate() to visualize or output the tracked results. For instance, in a video processing loop, use tracker.analyze(detections) followed by tracker.annotate(detections, frame) to maintain and display object tracks.
__init__(*, class_list=None, track_thresh=0.25, track_buffer=30, match_thresh=0.8, anchor_point=AnchorPoint.BOTTOM_CENTER, trail_depth=0, show_overlay=True, annotation_color=None)
Constructor.
Parameters:
class_list (List[str], default None): List of object classes to track. If None, all detected classes are tracked.
track_thresh (float, default 0.25): Detection confidence threshold for initiating a new track.
track_buffer (int, default 30): Number of frames to keep a lost track before removing it.
match_thresh (float, default 0.8): Intersection-over-union (IoU) threshold for matching detections to existing tracks.
anchor_point (AnchorPoint, default BOTTOM_CENTER): Anchor point on the bounding box used for trail visualization.
trail_depth (int, default 0): Number of recent positions to keep for each track's trail. Set 0 to disable trail tracking.
show_overlay (bool, default True): If True, annotate the image; if False, return the original image.
annotation_color (Tuple[int, int, int], default None): RGB tuple to use for annotations. If None, a contrasting color is chosen automatically.
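As an illustration of the visualization-related options above, the call below overrides only those; the values are arbitrary, and AnchorPoint is assumed to be importable from the same package as ObjectTracker.

```python
from degirum_tools import AnchorPoint, ObjectTracker  # import path is an assumption

tracker = ObjectTracker(
    anchor_point=AnchorPoint.BOTTOM_CENTER,  # trail points anchored at the box bottom center
    trail_depth=30,                          # non-zero, so trails are recorded
    show_overlay=True,                       # annotate() will draw on the image
    annotation_color=(255, 0, 0),            # fixed RGB color instead of the automatic choice
)
```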
analyze(result)
Analyzes a detection result and maintains object tracks across frames.
Matches the current frame's detections to existing tracks, assigns track IDs to each detection, and updates or creates tracks as necessary. If trail_depth was set, this method also updates each track's trail of past positions.
The input result is updated in-place. Each detection in result.results receives a "track_id" identifying its track. If trails are enabled, result.trails and result.trail_classes are updated to reflect the current active tracks.
Parameters:
result (InferenceResults, required): Model inference result for the current frame, containing detected object bounding boxes and classes.