# Object Tracker

{% hint style="info" %}
This API Reference is based on DeGirum Tools version 1.2.0.
{% endhint %}

## Object Tracker Analyzer Module Overview <a href="#object-tracker-analyzer-module-overview" id="object-tracker-analyzer-module-overview"></a>

Implements multi-object tracking using the [BYTETrack algorithm](https://github.com/ifzhang/bytetrack).

Key Features

* **Persistent Object Identity**: Maintains consistent track IDs across frames
* **Class Filtering**: Optionally tracks only specified object classes
* **Track Lifecycle Management**: Handles track creation, updating, and removal
* **Trail Visualization**: Records and displays object movement history
* **Track Retention**: Configurable buffer for handling temporary object disappearances
* **Visual Overlay**: Displays track IDs and optional trails on frames
* **Integration Support**: Provides track IDs for downstream analyzers (e.g., zone counting, line crossing)

Typical Usage

1. Create an `ObjectTracker` instance with desired tracking parameters
2. Process each frame's detection results through the tracker
3. Access track IDs and trails from the augmented results
4. Optionally visualize tracking results using the annotate method
5. Use track IDs in downstream analyzers for advanced analytics

Integration Notes

* Requires detection results with bounding boxes and confidence scores
* Track IDs are added to detection results as `track_id` field
* Trail information is stored in `trails` and `trail_classes` dictionaries
* Works effectively with zone counting and line crossing analyzers
* Supports both frame-based and time-based track retention
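
The integration contract above can be illustrated with plain Python objects. The field names (`track_id`, `trails`, `trail_classes`) come from this page; the surrounding object and all values are hypothetical stand-ins for a PySDK `InferenceResults` instance, not output captured from the library:

```python
from types import SimpleNamespace

# Stand-in for an InferenceResults object after ObjectTracker.analyze() has
# run. Only the field names come from this page; the values are made up.
result = SimpleNamespace(
    results=[
        {"bbox": [10, 20, 50, 80], "label": "person", "score": 0.91, "track_id": 7},
        {"bbox": [60, 25, 95, 85], "label": "person", "score": 0.88, "track_id": 12},
    ],
    # trails: track ID -> recent position history (present when trail_depth > 0)
    trails={7: [[10, 20, 50, 80]], 12: [[58, 24, 93, 84], [60, 25, 95, 85]]},
    # trail_classes: track ID -> class label of the tracked object
    trail_classes={7: "person", 12: "person"},
)

# Downstream analyzers (zone counting, line crossing) key their state
# off the persistent track IDs attached to each detection:
ids = [det["track_id"] for det in result.results]
print(ids)  # -> [7, 12]
```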

Key Classes

* `STrack`: Internal class representing a single tracked object with state
* `ObjectTracker`: Main analyzer class that processes detections and maintains tracks

Configuration Options

* `class_list`: Filter tracking to specific object classes
* `track_thresh`: Confidence threshold for initiating new tracks
* `track_buffer`: Frames to retain tracks after object disappearance
* `match_thresh`: IoU threshold for matching detections to existing tracks
* `trail_depth`: Number of recent positions to keep for trail visualization
* `show_overlay`: Enable/disable visual annotations
* `annotation_color`: Customize overlay appearance
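
The role of `match_thresh` can be illustrated with a plain-Python IoU gate. This sketches the matching criterion only; BYTETrack's actual association also involves Kalman-predicted boxes and a two-stage match. The `0.8` value is the documented default:

```python
def iou(a, b):
    # Boxes in (x_min, y_min, x_max, y_max) format.
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

MATCH_THRESH = 0.8  # documented default for match_thresh

track_box = [100, 100, 200, 200]      # track's box from the previous frame
detection_box = [105, 102, 203, 201]  # slight movement in the current frame
print(iou(track_box, detection_box) >= MATCH_THRESH)  # -> True
```

A detection whose IoU with every existing track falls below the gate would instead be a candidate for starting a new track (subject to `track_thresh`).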

## Classes <a href="#classes" id="classes"></a>

## STrack <a href="#strack" id="strack"></a>

`STrack`

Represents a single tracked object in the multi-object tracking system.

Each STrack holds the object's bounding box state, unique track identifier, detection confidence score, and tracking status (e.g., new, tracked, lost, removed). A Kalman filter is used internally to predict and update the object's state across frames.

Tracks are created when new objects are detected, updated when detections are matched to existing tracks, and can be reactivated if a lost track matches a new detection. This class provides methods to manage the lifecycle of a track (activation, update, reactivation) and utility functions for bounding box format conversion.
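
The lifecycle these paragraphs describe can be pictured as a small state machine. The state names come from this page; the transition table below is an illustrative simplification, not the library's internal logic:

```python
from enum import Enum, auto

class TrackState(Enum):  # states named on this page (held in _TrackState internally)
    NEW = auto()
    TRACKED = auto()
    LOST = auto()
    REMOVED = auto()

# Illustrative transitions: (current state, event) -> next state
TRANSITIONS = {
    (TrackState.NEW, "activate"): TrackState.TRACKED,         # first detection confirmed
    (TrackState.TRACKED, "update"): TrackState.TRACKED,       # matched again this frame
    (TrackState.TRACKED, "miss"): TrackState.LOST,            # no matching detection
    (TrackState.LOST, "re_activate"): TrackState.TRACKED,     # lost track matched anew
    (TrackState.LOST, "buffer_expired"): TrackState.REMOVED,  # unseen past the buffer
}

state = TrackState.NEW
for event in ["activate", "update", "miss", "re_activate"]:
    state = TRANSITIONS[(state, event)]
print(state)  # -> TrackState.TRACKED
```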

Attributes:

| Name           | Type          | Description                                                                                   |
| -------------- | ------------- | --------------------------------------------------------------------------------------------- |
| `track_id`     | `int`         | Unique ID for this track.                                                                     |
| `is_activated` | `bool`        | Whether the track has been activated (confirmed) at least once.                               |
| `state`        | `_TrackState` | Current state of the track (New, Tracked, Lost, or Removed).                                  |
| `start_frame`  | `int`         | Frame index when this track was first activated.                                              |
| `frame_id`     | `int`         | Frame index of the last update for this track (last seen frame).                              |
| `tracklet_len` | `int`         | Number of frames this track has been in the tracked state.                                    |
| `score`        | `float`       | Detection confidence score for the most recent observation of this track.                     |
| `obj_idx`      | `int`         | Index of this object's detection in the frame's results list (used for internal bookkeeping). |

### Attributes <a href="#attributes" id="attributes"></a>

#### tlbr <a href="#tlbr-np.ndarray" id="tlbr-np.ndarray"></a>

`tlbr: np.ndarray`

`property`

Returns the track's bounding box in corner format (x\_min, y\_min, x\_max, y\_max).

Returns:

| Type      | Description                                                          |
| --------- | -------------------------------------------------------------------- |
| `ndarray` | np.ndarray: Bounding box in (x\_min, y\_min, x\_max, y\_max) format. |

#### tlwh <a href="#tlwh-np.ndarray" id="tlwh-np.ndarray"></a>

`tlwh: np.ndarray`

`property`

Returns the track's current bounding box in (x, y, w, h) format.

Returns:

| Type      | Description                                                   |
| --------- | ------------------------------------------------------------- |
| `ndarray` | np.ndarray: Bounding box where (x, y) is the top-left corner. |

### STrack Methods <a href="#strack-methods" id="strack-methods"></a>

#### \_\_init\_\_(tlwh, ...) <a href="#init" id="init"></a>

`__init__(tlwh, score, obj_idx, id_counter)`

Constructor.

Parameters:

| Name         | Type         | Description                                                                       | Default    |
| ------------ | ------------ | --------------------------------------------------------------------------------- | ---------- |
| `tlwh`       | `ndarray`    | Initial bounding box in (x, y, w, h) format, where (x, y) is the top-left corner. | *required* |
| `score`      | `float`      | Detection confidence score for this object.                                       | *required* |
| `obj_idx`    | `int`        | Index of this object's detection in the current frame's results list.             | *required* |
| `id_counter` | `_IDCounter` | Shared counter used to generate globally unique track\_id values.                 | *required* |

#### activate(kalman\_filter, ...) <a href="#activate" id="activate"></a>

`activate(kalman_filter, frame_id)`

Activates this track with an initial detection.

Initializes the track's state using the provided Kalman filter, assigns a new track ID, and sets the track status to "Tracked".

Parameters:

| Name            | Type            | Description                                    | Default    |
| --------------- | --------------- | ---------------------------------------------- | ---------- |
| `kalman_filter` | `_KalmanFilter` | Kalman filter to associate with this track.    | *required* |
| `frame_id`      | `int`           | Frame index at which the track is initialized. | *required* |

#### re\_activate(new\_track, ...) <a href="#re_activate" id="re_activate"></a>

`re_activate(new_track, frame_id, new_id=False)`

Reactivates a track that was previously lost, using a new detection.

Updates the track's state with the new detection's information and sets the state to "Tracked". If new\_id is True, a new track ID is assigned; otherwise, it retains the original ID.

Parameters:

| Name        | Type     | Description                                                 | Default    |
| ----------- | -------- | ----------------------------------------------------------- | ---------- |
| `new_track` | `STrack` | New track (detection) to merge into this lost track.        | *required* |
| `frame_id`  | `int`    | Current frame index at which the track is reactivated.      | *required* |
| `new_id`    | `bool`   | Whether to assign a new ID to the track. Defaults to False. | `False`    |

#### tlbr\_to\_tlwh(tlbr) <a href="#tlbr_to_tlwh" id="tlbr_to_tlwh"></a>

`tlbr_to_tlwh(tlbr)`

`staticmethod`

Converts bounding box from (top-left, bottom-right) to (top-left, width, height).

Parameters:

| Name   | Type      | Description                              | Default    |
| ------ | --------- | ---------------------------------------- | ---------- |
| `tlbr` | `ndarray` | Bounding box in (x1, y1, x2, y2) format. | *required* |

Returns:

| Type      | Description                                      |
| --------- | ------------------------------------------------ |
| `ndarray` | np.ndarray: Bounding box in (x, y, w, h) format. |
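
Per the table above, the conversion subtracts the top-left corner from the bottom-right to obtain width and height. A minimal NumPy sketch of the same mapping, independent of the library's own implementation:

```python
import numpy as np

def tlbr_to_tlwh(tlbr: np.ndarray) -> np.ndarray:
    # (x1, y1, x2, y2) -> (x, y, w, h): width/height are corner differences
    ret = np.asarray(tlbr, dtype=float).copy()
    ret[2:] -= ret[:2]
    return ret

print(tlbr_to_tlwh(np.array([10, 20, 50, 80])))  # -> [10. 20. 40. 60.]
```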

#### tlwh\_to\_xyah(tlwh) <a href="#tlwh_to_xyah" id="tlwh_to_xyah"></a>

`tlwh_to_xyah(tlwh)`

`staticmethod`

Converts bounding box from (top-left x, y, width, height) to (center x, y, aspect ratio, height).

Parameters:

| Name   | Type      | Description                          | Default    |
| ------ | --------- | ------------------------------------ | ---------- |
| `tlwh` | `ndarray` | Bounding box in (x, y, w, h) format. | *required* |

Returns:

| Type      | Description                                                             |
| --------- | ----------------------------------------------------------------------- |
| `ndarray` | np.ndarray: Bounding box in (center x, y, aspect ratio, height) format. |
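
A minimal NumPy sketch of this mapping, assuming the conventional width/height (w/h) aspect ratio used by BYTETrack-style trackers; the page does not spell out the ratio's orientation, so treat that as an assumption:

```python
import numpy as np

def tlwh_to_xyah(tlwh: np.ndarray) -> np.ndarray:
    # (x, y, w, h) -> (center x, center y, aspect ratio w/h, h)
    ret = np.asarray(tlwh, dtype=float).copy()
    ret[:2] += ret[2:] / 2  # top-left corner -> box center
    ret[2] /= ret[3]        # aspect ratio = width / height (assumed convention)
    return ret

print(tlwh_to_xyah(np.array([10, 20, 40, 60])))
```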

#### update(new\_track, ...) <a href="#update" id="update"></a>

`update(new_track, frame_id)`

Updates this track with a new matched detection.

Incorporates the detection's bounding box and score into this track's state, updates the Kalman filter prediction, and increments the track length. The track state is set to "Tracked".

Parameters:

| Name        | Type     | Description                                      | Default    |
| ----------- | -------- | ------------------------------------------------ | ---------- |
| `new_track` | `STrack` | The new detection track that matched this track. | *required* |
| `frame_id`  | `int`    | Current frame index for the update.              | *required* |

## ObjectTracker <a href="#objecttracker" id="objecttracker"></a>

`ObjectTracker`

Bases: `ResultAnalyzerBase`

Analyzer that tracks objects across frames in a video stream.

This analyzer assigns persistent IDs to detected objects, allowing them to be tracked from frame to frame. It uses the BYTETrack multi-object tracking algorithm to match current detections with existing tracks and manage the track lifecycle (creation of new tracks, updating of existing ones, and removal of lost tracks). Optionally, tracking can be restricted to specific object classes via the *class\_list* parameter.

After each call to `analyze()`, the input result's detections are augmented with a `"track_id"` field for object identity. If a trail length is specified (non-zero *trail\_depth*), the result will also contain `trails` and `trail_classes` dictionaries: `trails` maps each track ID to a list of recent bounding box coordinates (the object's trail), and `trail_classes` maps each track ID to the object's class label. These facilitate drawing object paths and labeling them.

Functionality

* Unique ID assignment: Provides a unique ID for each object and maintains that ID across frames.
* Class filtering: Ignores detections whose class is not in the specified *class\_list*.
* Track retention buffer: Continues to track objects for *track\_buffer* frames after they disappear.
* Trajectory history: Keeps a history of each object's movement up to *trail\_depth* frames long.
* Overlay support: Can overlay track IDs and trails on frames for visualization.
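
The trajectory history behaves like a bounded queue per track ID, and the track retention buffer is an analogous frame-count bound for lost tracks. A sketch of the bounded-history idea with *trail\_depth* as the queue length (an illustration, not the library's implementation):

```python
from collections import defaultdict, deque

TRAIL_DEPTH = 3  # keep only the last 3 positions per track (illustrative value)

# trails: track ID -> bounded history of anchor-point positions
trails = defaultdict(lambda: deque(maxlen=TRAIL_DEPTH))

# Feed five frames of positions for track 7; only the newest 3 survive.
for pos in [(0, 0), (1, 1), (2, 2), (3, 3), (4, 4)]:
    trails[7].append(pos)

print(list(trails[7]))  # -> [(2, 2), (3, 3), (4, 4)]
```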

Typical usage involves calling `analyze()` on each frame's detection results to update tracks, then `annotate()` to visualize or output the tracked results. For instance, in a video processing loop, use `tracker.analyze(detections)` followed by `tracker.annotate(detections, frame)` to maintain and display object tracks.

### ObjectTracker Methods <a href="#objecttracker-methods" id="objecttracker-methods"></a>

#### \_\_init\_\_(\*, ...) <a href="#init" id="init"></a>

`__init__(*, class_list=None, track_thresh=0.25, track_buffer=30, match_thresh=0.8, reset_at_scene_cut=False, anchor_point=AnchorPoint.BOTTOM_CENTER, trail_depth=0, show_overlay=True, annotation_color=None, show_only_track_ids=False)`

Constructor.

Parameters:

| Name                  | Type                   | Description                                                                                                                                                                                                                       | Default         |
| --------------------- | ---------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | --------------- |
| `class_list`          | `List[str]`            | List of object classes to track. If None, all detected classes are tracked.                                                                                                                                                       | `None`          |
| `track_thresh`        | `float`                | Detection confidence threshold for initiating a new track.                                                                                                                                                                        | `0.25`          |
| `track_buffer`        | `int`                  | Number of frames to keep a lost track before removing it.                                                                                                                                                                         | `30`            |
| `match_thresh`        | `float`                | Intersection-over-union (IoU) threshold for matching detections to existing tracks.                                                                                                                                               | `0.8`           |
| `reset_at_scene_cut`  | `bool`                 | If True, resets all tracks when a scene cut is detected. Requires the result to have a `scene_cut` attribute (set by SceneCutDetector). Use this to avoid tracking objects across scene transitions in videos with cuts or edits. | `False`         |
| `anchor_point`        | `AnchorPoint`          | Anchor point on the bounding box used for trail visualization.                                                                                                                                                                    | `BOTTOM_CENTER` |
| `trail_depth`         | `int`                  | Number of recent positions to keep for each track's trail. Set to 0 to disable trail tracking.                                                                                                                                    | `0`             |
| `show_overlay`        | `bool`                 | If True, annotate the image; if False, return the original image.                                                                                                                                                                 | `True`          |
| `annotation_color`    | `Tuple[int, int, int]` | RGB tuple to use for annotations. If None, a contrasting color is chosen automatically.                                                                                                                                           | `None`          |
| `show_only_track_ids` | `bool`                 | If True, only track IDs are shown in the annotations. If False, trails and labels are also shown when available.                                                                                                                  | `False`         |

#### analyze(result) <a href="#analyze" id="analyze"></a>

`analyze(result)`

Analyzes a detection result and maintains object tracks across frames.

Matches the current frame's detections to existing tracks, assigns track IDs to each detection, and updates or creates tracks as necessary. If trail\_depth was set, this method also updates each track's trail of past positions.

The input result is updated in-place. Each detection in result.results receives a "track\_id" identifying its track. If trails are enabled, result.trails and result.trail\_classes are updated to reflect the current active tracks.

Parameters:

| Name     | Type                                                                                                                             | Description                                                                                          | Default    |
| -------- | -------------------------------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------- | ---------- |
| `result` | [InferenceResults](https://docs.degirum.com/pysdk/user-guide-pysdk/api-ref/postprocessor#degirum.postprocessor.inferenceresults) | Model inference result for the current frame, containing detected object bounding boxes and classes. | *required* |

#### annotate(result, ...) <a href="#annotate" id="annotate"></a>

`annotate(result, image)`

Draws tracking annotations on an image.

If trails are not being used, writes each object's track ID at its bounding box location. If trails are enabled, draws each object's trajectory and labels the end with the object's class name and track ID.

Parameters:

| Name     | Type                                                                                                                             | Description                                                 | Default    |
| -------- | -------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------- | ---------- |
| `result` | [InferenceResults](https://docs.degirum.com/pysdk/user-guide-pysdk/api-ref/postprocessor#degirum.postprocessor.inferenceresults) | The inference result that was previously analyzed.          | *required* |
| `image`  | `ndarray`                                                                                                                        | The image (in BGR format) on which to draw the annotations. | *required* |

Returns:

| Type      | Description                                            |
| --------- | ------------------------------------------------------ |
| `ndarray` | np.ndarray: The image with tracking annotations drawn. |

