Overview

We provide the DeGirum Tools Python package to aid development of AI applications with PySDK. In this section, we outline the main concepts of DeGirum Tools and provide the API Reference Guide.

This overview was written for DeGirum Tools version 0.18.0.

Core Concepts

DeGirum Tools extends PySDK with a kit for building multi-threaded, low-latency media pipelines. Where PySDK focuses on running a single model well, DeGirum Tools focuses on everything around it: video ingest, pre- and post-processing, multi-model fusion, result annotation, stream routing, and more.

In one sentence:

DeGirum Tools is a flow-based mini-framework that lets you prototype complex AI applications in a few dozen lines of Python.

Streams

The flow-based design of DeGirum Tools is implemented by the Streams subsystem, which consists of three Python submodules: streams.py, streams_base.py, and streams_gizmos.py. The two most important concepts in Streams are gizmos and compositions.

Gizmos

A Gizmo is a worker that:

1. Consumes StreamData from one or more input streams.

2. Runs its custom run() loop (decode, resize, infer, etc.).

3. Pushes new StreamData to any number of output streams. StreamData is described in more detail in streams.py.

Because every gizmo lives in its own thread, pipelines scale across CPU cores with minimal user code.
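The thread-per-gizmo loop can be sketched with plain standard-library queues and threads. This is a simplified analogy of the consume/transform/push cycle, not the actual Gizmo API:

```python
import queue
import threading

def double_gizmo(inp: queue.Queue, out: queue.Queue) -> None:
    """Toy 'gizmo' loop: consume from one input stream, transform,
    push to one output stream (stdlib sketch, not the real Gizmo class)."""
    while True:
        item = inp.get()
        if item is None:          # poison pill: propagate and exit
            out.put(None)
            return
        out.put(item * 2)         # stand-in for decode/resize/infer work

inp: queue.Queue = queue.Queue()
out: queue.Queue = queue.Queue()
threading.Thread(target=double_gizmo, args=(inp, out)).start()

for frame in (1, 2, 3):
    inp.put(frame)
inp.put(None)                     # shut the pipeline down

results = []
while (item := out.get()) is not None:
    results.append(item)
print(results)  # [2, 4, 6]
```

Chaining several such workers queue-to-queue is the essence of a gizmo pipeline: each stage runs concurrently on its own thread.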

Gizmo families built into DeGirum Tools include:

| Family | Typical Use |
| --- | --- |
| Video IO | Capture, live preview, archival |
| Transform | Pre-process frames (letterbox, crop, pad) |
| AI Inference | Run models, cascade detectors & classifiers |
| Post-fusion | Merge multi-crop or multi-model outputs |
| Utility | Collect results in the main thread |

Gizmos pass data around using the Stream class. A stream is an iterable queue that moves StreamData objects between threads. Each queue may be bounded (with an optional drop policy) to prevent bottlenecks, and it automatically propagates a poison-pill sentinel to shut the pipeline down cleanly.
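The bounded-queue idea can be illustrated with the standard-library queue module. This sketches one possible drop policy (discard the newest frame when full); the real Stream class may differ in its exact policy:

```python
import queue

# A bounded stream: at most 2 frames buffered; when full, the incoming
# frame is dropped so a fast producer never stalls the pipeline.
stream: queue.Queue = queue.Queue(maxsize=2)
dropped = 0
for frame in range(5):            # a fast producer emits 5 frames
    try:
        stream.put_nowait(frame)
    except queue.Full:
        dropped += 1              # frame discarded instead of blocking

kept = [stream.get_nowait() for _ in range(stream.qsize())]
print(kept, dropped)  # [0, 1] 3
```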

Compositions

A Composition collects any connected gizmos and controls their life-cycle:

  • start() – spawn threads

  • stop() – signal abort & join

  • wait() – block until completion

  • get_bottlenecks() – diagnose dropped-frame hotspots

Use it as a context-manager so everything shuts down even on exceptions.
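The lifecycle can be mimicked with a toy class. This is a simplified stand-in for illustration, not the real Composition:

```python
import threading
from typing import List

class ToyComposition:
    """Simplified stand-in for Composition: owns worker threads and
    offers start()/stop()/wait() plus context-manager cleanup."""

    def __init__(self) -> None:
        self._threads: List[threading.Thread] = []
        self._abort = threading.Event()  # real gizmos poll an abort signal
        self.results: List[int] = []

    def add_worker(self, n: int) -> None:
        self._threads.append(
            threading.Thread(target=lambda: self.results.append(n * n))
        )

    def start(self) -> None:  # spawn threads
        for t in self._threads:
            t.start()

    def wait(self) -> None:   # block until completion
        for t in self._threads:
            t.join()

    def stop(self) -> None:   # signal abort & join
        self._abort.set()
        self.wait()

    def __enter__(self) -> "ToyComposition":
        self.start()
        return self

    def __exit__(self, *exc) -> None:
        self.stop()  # runs even if the with-body raised an exception

comp = ToyComposition()
for n in (1, 2, 3):
    comp.add_worker(n)
with comp:
    pass  # pipeline runs while the with-block is active
print(sorted(comp.results))  # [1, 4, 9]
```

The context-manager form is why cleanup happens even on exceptions: __exit__ always runs, so every thread is signaled and joined.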

Compound Models

Compound models wrap two PySDK models into a single predict() / predict_batch() interface. Some of the compound model classes provided by DeGirum Tools include:

| Class | What it Does |
| --- | --- |
| CombiningCompoundModel | Runs two models in parallel on the same image and concatenates results. |
| CroppingAndClassifyingCompoundModel | Detector → crops → classifier (adds labels back). |
| CroppingAndDetectingCompoundModel | Detector → crops → refined detector (with optional NMS). |

Use compound models exactly as you would use normal models:

import degirum_tools

# detector and classifier are PySDK models loaded beforehand
compound = degirum_tools.CroppingAndClassifyingCompoundModel(detector, classifier)
for res in compound.predict_batch(my_images):
    ...

In addition to compound models, you may encounter PseudoModels.

A PseudoModel is a ModelLike object that behaves like a PySDK model but does not actually run inference. Instead, it generates bounding-box results according to predefined rules, such as dividing an image into a grid (TileExtractorPseudoModel) or returning fixed ROIs with optional motion filtering (RegionExtractionPseudoModel).

These pseudo‑models serve as drop‑in “detectors” within compound-model pipelines, allowing ROI extraction and tiling without relying on a real detection network, while still following the standard predict()/predict_batch() interface.
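For example, the grid tiling that a tile-extracting pseudo-model performs boils down to computing tile boxes over the image. This sketch shows the arithmetic only; the exact result format of the real class may differ:

```python
from typing import List, Tuple

def grid_tiles(width: int, height: int, rows: int, cols: int) -> List[Tuple[int, int, int, int]]:
    """Split an image into a rows x cols grid of (x1, y1, x2, y2) boxes --
    the kind of pseudo-detections a tile-extracting pseudo-model emits."""
    tile_w, tile_h = width // cols, height // rows
    return [
        (c * tile_w, r * tile_h, (c + 1) * tile_w, (r + 1) * tile_h)
        for r in range(rows)
        for c in range(cols)
    ]

tiles = grid_tiles(640, 480, 2, 2)
print(tiles)
# [(0, 0, 320, 240), (320, 0, 640, 240), (0, 240, 320, 480), (320, 240, 640, 480)]
```

Fed into a cropping compound model, each box becomes a "detection" whose crop is passed to the downstream model.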

Analyzers

Analyzers post-process inference results, each adding a specialized capability. The available analyzers include:

| Analyzer | Description |
| --- | --- |
| ClipSaver | Saves video clips of detected events |
| EventDetector | Detects and processes specific events in the video stream |
| LineCounter | Counts objects crossing defined lines in the scene |
| EventNotifier | Sends notifications for detected events and conditions |
| ObjectSelector | Filters and selects specific objects based on criteria |
| ObjectTracker | Tracks objects across frames with customizable tracking parameters |
| ZoneCounter | Tracks objects entering and exiting defined zones |

An Analyzer subclass provides two key capabilities:

  • analyze(result) – Process and modify inference results by adding custom fields or performing calculations

  • annotate(result, image) – Draw overlays on the image (bounding boxes, text labels, etc.)
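The two-method contract can be sketched with a minimal stand-alone class operating on plain dict results. This is illustrative only and does not derive from the real DeGirum Tools analyzer base class:

```python
class CountingAnalyzer:
    """Sketch of the analyzer contract on plain dict results
    (illustrative only -- no real base class or drawing code)."""

    def __init__(self) -> None:
        self.total = 0  # state accumulated across frames

    def analyze(self, result: dict) -> None:
        n = len(result["detections"])
        self.total += n
        result["num_objects"] = n  # add a custom field to the result

    def annotate(self, result: dict, image):
        return image  # a real analyzer would draw boxes and labels here

analyzer = CountingAnalyzer()
frames = [{"detections": ["car", "person"]}, {"detections": ["dog"]}]
for res in frames:
    analyzer.analyze(res)
print(frames[0]["num_objects"], analyzer.total)  # 2 3
```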

Analyzers can be attached to any model or compound model:

import degirum_tools
from typing import List, Tuple

# model and video_source are assumed to be set up beforehand with PySDK

# Define the line (x1, y1, x2, y2) that people must cross to be counted
lines: List[Tuple[int, int, int, int]] = [
    (634, 539, 950, 539),
]

# Create an ObjectTracker to track people
tracker = degirum_tools.ObjectTracker(
    trail_depth=20, anchor_point=degirum_tools.AnchorPoint.BOTTOM_CENTER
)

# Create a LineCounter to detect when people cross the pre-defined lines
counter = degirum_tools.LineCounter(lines)

# Attach ObjectTracker and LineCounter to the model
degirum_tools.attach_analyzers(model, [tracker, counter])

# Run predictions and print the line counts
for result in degirum_tools.predict_stream(model, video_source):
    if hasattr(result, "line_counts"):
        print([lc.to_dict() for lc in result.line_counts])

When used inside a gizmo pipeline, analyzers can filter or decorate results in-flight. They can also accumulate state across frames for multi-frame analysis, with cleanup handled in the finalize() method.

Inference Support Utilities

The inference_support helpers smooth the edges between PySDK and your application; they include functions such as attach_analyzers() and predict_stream(), used in the example above.

Support Modules

DeGirum Tools includes other Support Modules, including Math Support, UI Support, Video Support, and more. These modules are designed to reduce the amount of boilerplate code needed to run AI applications.
