Object tracking
Learn how to track objects across video frames using degirum_tools.ObjectTracker. This guide explains how to assign persistent IDs to detections, reduce flicker, and extract motion-based analytics.
Estimated read time: 2 minutes
Object detection gives you bounding boxes per frame, but it doesn’t tell you which box in frame t corresponds to which one in frame t+1.
Tracking links detections over time, assigning stable IDs so you can reduce flicker, handle brief occlusions, and compute per-object metrics like entries/exits, speed, or dwell time.
It’s essential for analytics like lane counts, zone crossings, or any logic that relies on tracking the same object across frames.
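For intuition, the sketch below shows how stable IDs make a per-object metric such as dwell time straightforward: keep a counter keyed by track ID and convert frames to seconds. It is a minimal illustration, and it assumes each tracked detection is a dict carrying a track_id key (the exact key name depends on the tracker you use).

from collections import defaultdict

# Minimal sketch: accumulate dwell time per tracked object.
# Assumes each detection dict carries a "track_id" assigned by a tracker
# (the key name is an assumption, not a guaranteed API).
frames_seen = defaultdict(int)  # track_id -> frames observed so far

def update_dwell(detections, fps=30.0):
    """Update counters from one frame and return dwell time in seconds per ID."""
    for det in detections:
        tid = det.get("track_id")
        if tid is not None:
            frames_seen[tid] += 1
    return {tid: count / fps for tid, count in frames_seen.items()}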
Tracker and analyzers

Attach the built-in tracker to assign persistent IDs, smooth detections, and enable downstream analytics like entries, exits, or dwell time. Use the references below for deeper configuration guidance.
Analyzer overview (attachable processors, including the tracker)
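As a configuration reference ahead of the full example, here is the tracker construction on its own with a comment on what each parameter typically controls. The descriptions follow common BYTETrack-style conventions and are assumptions to verify against the analyzer overview.

import degirum_tools

tracker = degirum_tools.ObjectTracker(
    class_list=["Car"],   # only detections with these labels are tracked
    track_thresh=0.35,    # confidence needed to start or extend a track (assumed meaning)
    track_buffer=100,     # frames a lost track is retained before being dropped (assumed meaning)
    match_thresh=0.9999,  # association threshold when matching detections to existing tracks (assumed meaning)
    trail_depth=20,       # number of past anchor points kept for each object's trail
    anchor_point=degirum_tools.AnchorPoint.BOTTOM_CENTER,  # point on the box used for the trail
)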
Example
import degirum_tools
from degirum_tools import ModelSpec, Display, remote_assets

# Configure and load once
model_spec = ModelSpec(
    model_name="yolov8n_coco--640x640_quant_hailort_multidevice_1",
    zoo_url="degirum/hailo",
    inference_host_address="@local",
    model_properties={"device_type": ["HAILORT/HAILO8", "HAILORT/HAILO8L"]},
)
model = model_spec.load_model()

# Input source and classes to track
video_source = remote_assets.traffic
class_list = ["Car"]

# Attach a tracker analyzer
tracker = degirum_tools.ObjectTracker(
    class_list=class_list,
    track_thresh=0.35,
    track_buffer=100,
    match_thresh=0.9999,
    trail_depth=20,
    anchor_point=degirum_tools.AnchorPoint.BOTTOM_CENTER,
)
degirum_tools.attach_analyzers(model, [tracker])

# Run streaming inference and show overlay
with Display("AI Camera") as output_display:
    for res in degirum_tools.predict_stream(model, video_source):
        output_display.show(res.image_overlay)
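Once the tracker is attached, each tracked detection in a result typically carries its assigned ID alongside the usual label, bbox, and score fields. The snippet below reuses model and video_source from the example above and assumes the ID is exposed as a track_id key; verify the key name for your degirum_tools version.

# Sketch: read per-detection track IDs from streaming results.
# The "track_id" key name is an assumption; confirm it for your version.
for res in degirum_tools.predict_stream(model, video_source):
    for det in res.results:
        tid = det.get("track_id")
        if tid is not None:
            print(f"{det['label']} #{tid} at {det['bbox']}")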