Overview
We provide the DeGirum Tools Python package to aid development of AI applications with PySDK. In this group, we'll outline main concepts of DeGirum Tools and provide the API Reference Guide.
DeGirum Tools extends PySDK with a kit for building multi-threaded, low-latency media pipelines. Where PySDK focuses on running a single model well, DeGirum Tools focuses on everything around it: video ingest, pre-and post-processing, multi-model fusion, result annotation, stream routing, and more.
In one sentence:
DeGirum Tools is a flow-based mini-framework that lets you prototype complex AI applications in a few dozen lines of Python.
The flow behind DeGirum Tools is supported by the Streams subsystem, which spans three Python submodules. The two most important concepts to understand in this subsystem are gizmos and compositions.
A gizmo is a worker that:

- Consumes StreamData from one or more input streams.
- Runs its custom run() loop (decode, resize, infer, etc.).
- Pushes new StreamData to any number of output streams.

StreamData is described in more detail elsewhere in this guide.
Because every gizmo lives in its own thread, pipelines scale across CPU cores with almost no user code.
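To make the gizmo idea concrete, here is a minimal stdlib-only sketch of the pattern: a worker thread that consumes from an input queue, processes each item, and pushes results downstream. This is an illustration of the concept only, not the actual DeGirum Tools API; the names `ToyGizmo` and `_POISON` are invented for this example.

```python
import queue
import threading

_POISON = object()  # sentinel that tells downstream workers to stop


class ToyGizmo(threading.Thread):
    """A worker that consumes from one queue and pushes to another."""

    def __init__(self, inp, out, fn):
        super().__init__(daemon=True)
        self.inp, self.out, self.fn = inp, out, fn

    def run(self):
        while True:
            item = self.inp.get()
            if item is _POISON:          # propagate shutdown downstream
                self.out.put(_POISON)
                return
            self.out.put(self.fn(item))  # "process" the frame


# Wire two gizmos into a tiny pipeline: double the value, then stringify.
q1, q2, q3 = queue.Queue(), queue.Queue(), queue.Queue()
ToyGizmo(q1, q2, lambda x: x * 2).start()
ToyGizmo(q2, q3, lambda x: f"frame:{x}").start()

for i in range(3):
    q1.put(i)
q1.put(_POISON)

results = []
while (item := q3.get()) is not _POISON:
    results.append(item)
print(results)  # ['frame:0', 'frame:2', 'frame:4']
```

Because each worker runs in its own thread and communicates only through queues, stages execute concurrently with no shared mutable state, which is the property that lets real gizmo pipelines scale across CPU cores.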
Gizmo families built into DeGirum Tools include:

| Family | Purpose |
| --- | --- |
| Video IO | Capture, live preview, archival |
| Transform | Pre-process frames (letterbox, crop, pad) |
| AI Inference | Run models, cascade detectors & classifiers |
| Post-fusion | Merge multi-crop or multi-model outputs |
| Utility | Collect results in the main thread |
A Composition collects the connected gizmos and controls their life-cycle:

- start() – spawn threads
- stop() – signal abort & join
- wait() – block until completion
- get_bottlenecks() – diagnose dropped-frame hotspots

Use it as a context manager so everything shuts down even on exceptions.
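The life-cycle pattern above can be sketched with a toy stdlib-only class (illustrative only, not the real Composition API): start spawns the worker threads, wait joins them, and the context-manager protocol guarantees the join happens even if the body raises.

```python
import threading


class ToyComposition:
    """Toy life-cycle manager for a set of worker threads."""

    def __init__(self, *workers):
        self.workers = list(workers)  # threading.Thread objects

    def start(self):
        for w in self.workers:        # spawn all threads
            w.start()
        return self

    def wait(self):
        for w in self.workers:        # block until completion
            w.join()

    def __enter__(self):
        return self.start()

    def __exit__(self, *exc):
        self.wait()                   # shut down even on exceptions
        return False


log = []
with ToyComposition(threading.Thread(target=lambda: log.append("done"))):
    log.append("body")
print(sorted(log))  # ['body', 'done'] -- the thread is joined before exit
```

The context-manager form is the important design choice: cleanup is tied to scope, so a crash inside the `with` block cannot leave orphaned worker threads behind.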
Common compound-model patterns include:

- Parallel models – run two models on the same image and concatenate the results.
- Detector → crops → classifier – the classifier adds labels back to the detections.
- Detector → crops → refined detector – with optional NMS.

Use compound models exactly as you would use normal models.
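The "parallel models" pattern can be illustrated with a small sketch, assuming the two models are plain callables that each return a list of results (this is a toy illustration of the idea, not DeGirum's compound-model classes): both run on the same input, and the combined object exposes the same predict()/predict_batch() shape as a single model.

```python
class ToyParallelModel:
    """Run two 'models' on the same input and concatenate their results."""

    def __init__(self, model_a, model_b):
        self.model_a, self.model_b = model_a, model_b

    def predict(self, frame):
        # Run both models on the same image, merge their result lists.
        return self.model_a(frame) + self.model_b(frame)

    def predict_batch(self, frames):
        return [self.predict(f) for f in frames]


# Two stand-in "models" that each return one detection.
faces = lambda img: [("face", img)]
plates = lambda img: [("plate", img)]

combo = ToyParallelModel(faces, plates)
print(combo.predict("frame0"))
# [('face', 'frame0'), ('plate', 'frame0')]
```

Because the wrapper has the same interface as a single model, downstream code (loops, gizmos, analyzers) does not need to know it is talking to two models.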
Analyzers provide advanced processing of inference results with specialized functionality. The available analyzers include:

- Saves video clips of detected events
- Detects and processes specific events in the video stream
- Counts objects crossing defined lines in the scene
- Sends notifications for detected events and conditions
- Filters and selects specific objects based on criteria
- Tracks objects across frames with customizable tracking parameters
- Tracks objects entering and exiting defined zones
Every analyzer implements two key methods:

- analyze(result) – process and modify inference results by adding custom fields or performing calculations.
- annotate(result, image) – draw overlays on the image (bounding boxes, text labels, etc.).

Analyzers can be attached to any model or compound model. When used inside a gizmo pipeline, analyzers can filter or decorate results in-flight. They can also accumulate state across frames for multi-frame analysis, with cleanup handled in the finalize() method.
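A minimal sketch of the analyze/annotate/finalize contract described above (the class and field names here are illustrative, not the actual DeGirum Tools API): analyze mutates the result dict, annotate "draws" on the image, and finalize resets cross-frame state.

```python
class ToyAnalyzer:
    """Toy analyzer: adds a field per result, keeps cross-frame state."""

    def __init__(self):
        self.frames_seen = 0  # state accumulated across frames

    def analyze(self, result):
        # Add a custom field to the inference result.
        self.frames_seen += 1
        result["object_count"] = len(result.get("detections", []))

    def annotate(self, result, image):
        # "Draw" an overlay; here we just tag the image string.
        return f"{image}+count={result['object_count']}"

    def finalize(self):
        # Cleanup hook for multi-frame state.
        self.frames_seen = 0


a = ToyAnalyzer()
result = {"detections": [{"label": "car"}, {"label": "dog"}]}
a.analyze(result)
annotated = a.annotate(result, "frame0")
print(annotated)  # frame0+count=2
a.finalize()
```

Splitting computation (analyze) from drawing (annotate) is the useful part of this design: a pipeline can run analyze on every frame but call annotate only when a preview window is actually open.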
Gizmos pass data around using the Stream class. A stream is an iterable queue that moves StreamData objects between threads. Each queue may be bounded (with an optional drop policy) to prevent bottlenecks, and it automatically propagates a poison-pill sentinel to shut the pipeline down cleanly.
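The bounded-queue-with-drop-policy behavior can be sketched in a few lines (a toy illustration, not the real Stream class): when the queue is full, a dropping stream discards the new frame instead of blocking the producer.

```python
import queue


class ToyStream:
    """Bounded queue that optionally drops frames when full."""

    def __init__(self, maxsize, allow_drop=True):
        self.q = queue.Queue(maxsize=maxsize)
        self.allow_drop = allow_drop
        self.dropped = 0

    def put(self, item):
        try:
            self.q.put_nowait(item)
        except queue.Full:
            if self.allow_drop:
                self.dropped += 1  # drop the frame instead of blocking
            else:
                self.q.put(item)   # block until space frees up


s = ToyStream(maxsize=2)
for frame in range(5):
    s.put(frame)
print(s.q.qsize(), s.dropped)  # 2 3 -- two buffered, three dropped
```

Dropping at the queue is what keeps a slow consumer (e.g. a heavy model) from stalling a live camera feed; the dropped-frame counters are also exactly the signal a bottleneck diagnostic can report.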
Gizmos are wired together inside a Composition, which owns the worker threads and their overall life-cycle.
Compound models wrap two PySDK models into a single predict() / predict_batch() interface; DeGirum Tools provides several ready-made compound model classes.
Each analyzer subclass provides two key capabilities: analyzing results and annotating images.
The Inference Support helpers smooth the edges between PySDK and your application. They include:

- A one-liner to pick AI Hub, AI Server, or local inference.
- Quick video-loop helpers for when a full gizmo graph is overkill.
- A utility to benchmark a model in under 10 lines of code.