# Object tracking

*Estimated read time: 2 minutes*

Object detection gives you bounding boxes per frame, but it doesn’t tell you which box in frame *t* corresponds to which one in frame *t+1*.

Tracking links detections over time, assigning stable IDs so you can reduce flicker, handle brief occlusions, and compute per-object metrics like entries/exits, speed, or dwell time.

It’s essential for analytics like lane counting, zone crossings, or any logic that depends on following the same object across frames.
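To make the linking step concrete, here is a minimal, framework-agnostic sketch of the core idea: greedily match boxes between consecutive frames by IoU and carry IDs forward. Production trackers (including the one attached below) add confidence thresholds, motion prediction, and track buffers on top of this, so treat it as an illustration only.

```python
def iou(a, b):
    # Boxes as (x1, y1, x2, y2)
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def link_frames(prev_tracks, detections, next_id, iou_thresh=0.3):
    """Greedily assign IDs from prev_tracks to detections by best IoU."""
    assignments = {}  # detection index -> track ID
    used = set()
    for i, det in enumerate(detections):
        best_id, best_iou = None, iou_thresh
        for tid, box in prev_tracks.items():
            if tid in used:
                continue
            score = iou(box, det)
            if score > best_iou:
                best_id, best_iou = tid, score
        if best_id is None:  # no existing track matched: start a new one
            best_id, next_id = next_id, next_id + 1
        used.add(best_id)
        assignments[i] = best_id
    new_tracks = {assignments[i]: det for i, det in enumerate(detections)}
    return assignments, new_tracks, next_id

# Frame t had two cars; in frame t+1 both moved slightly and a new car appears
tracks = {1: (0, 0, 10, 10), 2: (50, 50, 60, 60)}
dets = [(2, 0, 12, 10), (52, 50, 62, 60), (100, 100, 110, 110)]
ids, tracks, next_id = link_frames(tracks, dets, next_id=3)
print(ids)  # IDs 1 and 2 carried over; the unmatched car gets new ID 3
```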

## Tracker and analyzers

<figure><img src="https://387437463-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2Fw4TFcrlOvSs7ZfsEpUnx%2Fuploads%2Fgit-blob-f4c9a8c4bb966e961f21021de7f1056215be6f04%2Faxelera-cookbook--traffic--video-feed-of-traffic-with-an-object-tracker-analyzer-tracking-cars.gif?alt=media" alt="Video feed of traffic with an object tracker analyzer tracking cars."><figcaption><p>Video feed of traffic with an object tracker analyzer tracking cars.</p></figcaption></figure>

Attach the built-in tracker to assign persistent IDs, smooth detections, and enable downstream analytics like entries, exits, or dwell time. Use the references below for deeper configuration guidance.

* [ObjectTracker reference guide](https://docs.degirum.com/degirum-tools/analyzers/object_tracker)
* [Analyzer overview](https://docs.degirum.com/degirum-tools/analyzers) (attachable processors, including the tracker)

### Example

{% code overflow="wrap" %}

```python
import degirum_tools
from degirum_tools import ModelSpec, Display, remote_assets

# Configure and load once
model_spec = ModelSpec(
    model_name="yolov8n_coco--640x640_quant_axelera_metis_1",
    zoo_url="degirum/axelera",
    inference_host_address="@local",
    model_properties={"device_type": ["AXELERA/METIS"]},
)
model = model_spec.load_model()

# Input source and classes to track
video_source = remote_assets.traffic
class_list = ["car"]  # COCO class labels are lowercase

# Attach a tracker analyzer
tracker = degirum_tools.ObjectTracker(
    class_list=class_list,
    track_thresh=0.35,
    track_buffer=100,
    match_thresh=0.9999,
    trail_depth=20,
    anchor_point=degirum_tools.AnchorPoint.BOTTOM_CENTER,
)
degirum_tools.attach_analyzers(model, [tracker])

# Run streaming inference and show overlay
with Display("AI Camera") as output_display:
    for res in degirum_tools.predict_stream(model, video_source):
        output_display.show(res.image_overlay)
```

{% endcode %}

{% hint style="info" %}

* **Attaching the analyzer:** `degirum_tools.attach_analyzers(model, [tracker])` registers the tracker with the model. After this, each inference result is passed through the tracker to maintain IDs and draw trails.
* **`anchor_point`:** Controls where the trail connects to each detection box. Choose a point that best represents object motion; `BOTTOM_CENTER` is a good default for vehicles or people on the ground.
* **`class_list`:** Limits tracking to specific labels (e.g., `["car"]`). Set to `None` to track all classes, or provide multiple labels like `["car", "bus", "truck"]`.

{% endhint %}
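Once IDs are stable, per-object metrics reduce to grouping detections by ID. The sketch below illustrates the idea with dwell time; the detection dicts and the `track_id` field name here are illustrative assumptions, not the exact `degirum_tools` result schema, so check the ObjectTracker reference for the real field names.

```python
from collections import defaultdict

def dwell_times(frames, fps=30.0):
    """Seconds each track ID was visible, given per-frame lists of
    detection dicts carrying a 'track_id' field (illustrative schema)."""
    counts = defaultdict(int)
    for detections in frames:
        for det in detections:
            counts[det["track_id"]] += 1
    # Convert frame counts to seconds using the stream frame rate
    return {tid: n / fps for tid, n in counts.items()}

# Three frames: ID 1 visible in all three, ID 2 only in the last two
frames = [
    [{"track_id": 1}],
    [{"track_id": 1}, {"track_id": 2}],
    [{"track_id": 1}, {"track_id": 2}],
]
print(dwell_times(frames, fps=30.0))  # ID 1: 3 frames -> 0.1 s; ID 2: 2 frames
```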
