Streams Gizmos

This API Reference is based on DeGirum Tools version 0.16.6.

Classes

VideoSourceGizmo

VideoSourceGizmo

Bases: Gizmo

OpenCV-based video source gizmo.

Captures frames from a video source (camera, video file, etc.) and outputs them as StreamData into the pipeline.

Functions

__init__(video_source=None, ...)

__init__(video_source=None, *, stop_composition_on_end=False)

Constructor.

Parameters:

  • video_source (int or str): A cv2.VideoCapture-compatible video source (device index as int, or file path/URL as str). Defaults to None.
  • stop_composition_on_end (bool): If True, stop the Composition when the video source is over. Defaults to False.

run

run()

Run the video capture loop.

Continuously reads frames from the video source and sends each frame (with metadata) downstream until the source is exhausted or abort is signaled.

VideoDisplayGizmo

VideoDisplayGizmo

Bases: Gizmo

OpenCV-based video display gizmo.

Displays incoming frames in one or more OpenCV windows.

Functions

__init__(window_titles='Display', ...)

__init__(window_titles='Display', *, show_ai_overlay=False, show_fps=False, stream_depth=10, allow_drop=False, multiplex=False)

Constructor.

Parameters:

  • window_titles (str or List[str]): Title or list of titles for the display window(s). If a list is provided, multiple windows are opened (one per title). Defaults to "Display".
  • show_ai_overlay (bool): If True, overlay AI inference results on the displayed frame (when available). Defaults to False.
  • show_fps (bool): If True, show the FPS on the display window(s). Defaults to False.
  • stream_depth (int): Depth of the input frame queue. Defaults to 10.
  • allow_drop (bool): If True, allow dropping frames if the input queue is full. Defaults to False.
  • multiplex (bool): If True, use a single input stream and display frames in a round-robin across multiple windows; if False, each window corresponds to its own input stream. Defaults to False.

Raises:

  • Exception: If multiplex is True while allow_drop is also True (unsupported configuration).

run

run()

Run the video display loop.

Fetches frames from the input stream(s) and shows them in the window(s) (with optional overlays and FPS display) until all inputs are exhausted or aborted.
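
For orientation, here is a minimal capture-and-display sketch. It assumes the Composition class and the >> connection operator documented on the Streams Base page, and that the gizmos are importable from degirum_tools.streams; treat it as an illustration rather than a verbatim recipe.

    # Minimal sketch: show a webcam feed in an OpenCV window.
    # Assumption: Composition and the >> operator behave as described on the Streams Base page.
    import degirum_tools.streams as dgstreams

    source = dgstreams.VideoSourceGizmo(0)  # device index; a file path or RTSP URL also works
    display = dgstreams.VideoDisplayGizmo("Camera", show_fps=True)

    dgstreams.Composition(source >> display).start()  # runs until the source is exhausted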

VideoSaverGizmo

VideoSaverGizmo

Bases: Gizmo

OpenCV-based video saving gizmo.

Writes incoming frames to an output video file.

Functions

__init__(filename, ...)

__init__(filename, *, show_ai_overlay=False, stream_depth=10, allow_drop=False)

Constructor.

Parameters:

  • filename (str): Path to the output video file. Required.
  • show_ai_overlay (bool): If True, overlay AI inference results on frames before saving (when available). Defaults to False.
  • stream_depth (int): Depth of the input frame queue. Defaults to 10.
  • allow_drop (bool): If True, allow dropping frames if the input queue is full. Defaults to False.

run

run()

Run the video saving loop.

Reads frames from the input stream and writes them to the output file until the stream is exhausted or aborted.
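
A minimal sketch of saving a stream to disk follows; the input and output file names are placeholders, and the Composition/>> connection API is assumed from the Streams Base page.

    # Minimal sketch: copy a video file frame by frame into a new file.
    # "input.mp4" and "output.mp4" are placeholder paths.
    import degirum_tools.streams as dgstreams

    source = dgstreams.VideoSourceGizmo("input.mp4")
    saver = dgstreams.VideoSaverGizmo("output.mp4")

    dgstreams.Composition(source >> saver).start()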

ResizingGizmo

ResizingGizmo

Bases: Gizmo

OpenCV-based image resizing/padding gizmo.

Resizes incoming images to a specified width and height, using the chosen padding or cropping method.

Functions

__init__(w, ...)

__init__(w, h, pad_method='letterbox', resize_method='bilinear', stream_depth=10, allow_drop=False)

Constructor.

Parameters:

  • w (int): Target width for output images. Required.
  • h (int): Target height for output images. Required.
  • pad_method (str): Padding method to use ("stretch", "letterbox", "crop-first", "crop-last"). Defaults to "letterbox".
  • resize_method (str): Resampling method to use ("nearest", "bilinear", "area", "bicubic", "lanczos"). Defaults to "bilinear".
  • stream_depth (int): Depth of the input frame queue. Defaults to 10.
  • allow_drop (bool): If True, allow dropping frames if the input queue is full. Defaults to False.

run

run()

Run the resizing loop.

Resizes each input image according to the configured width, height, padding, and resizing method, then sends the result with updated metadata downstream.
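
The sketch below letterboxes camera frames to a fixed size before display; it assumes the same Composition/>> connection API as above.

    # Minimal sketch: letterbox frames to 640x640 before showing them.
    import degirum_tools.streams as dgstreams

    source = dgstreams.VideoSourceGizmo(0)
    resizer = dgstreams.ResizingGizmo(640, 640, pad_method="letterbox", resize_method="bilinear")
    display = dgstreams.VideoDisplayGizmo("Resized")

    dgstreams.Composition(source >> resizer >> display).start()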

AiGizmoBase

AiGizmoBase

Bases: Gizmo

Base class for AI model inference gizmos.

Handles loading the model and iterating over input data for inference in a background thread.

Functions

__init__(model, ...)

__init__(model, *, stream_depth=10, allow_drop=False, inp_cnt=1, **kwargs)

Constructor.

Parameters:

  • model (Model or str): A DeGirum model object or model name string to load. If a string is provided, the model will be loaded via degirum.load_model() using the given kwargs. Required.
  • stream_depth (int): Depth of the input stream queue. Defaults to 10.
  • allow_drop (bool): If True, allow dropping frames on input overflow. Defaults to False.
  • inp_cnt (int): Number of input streams (for models requiring multiple inputs). Defaults to 1.
  • **kwargs (any): Additional parameters to pass to degirum.load_model() when loading the model (if model is given as a name).

on_result(result)

on_result(result)

abstractmethod

Handle a single inference result (to be implemented by subclasses).

Parameters:

  • result (InferenceResults): The inference result object from the model. Required.

run

run()

Run the model inference loop.

Internally feeds data from the input stream(s) into the model and yields results, invoking on_result for each inference result.
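
Concrete gizmos derive from AiGizmoBase and implement on_result(); a minimal, hypothetical subclass that only counts results might look like this.

    # Minimal sketch: a hypothetical AiGizmoBase subclass that counts inference results.
    # Forwarding data downstream (as AiSimpleGizmo does) is omitted for brevity.
    import degirum_tools.streams as dgstreams

    class CountingAiGizmo(dgstreams.AiGizmoBase):
        def __init__(self, model, **kwargs):
            super().__init__(model, **kwargs)
            self.result_count = 0

        def on_result(self, result):
            # called by run() once per inference result
            self.result_count += 1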

AiSimpleGizmo

AiSimpleGizmo

Bases: AiGizmoBase

AI inference gizmo with no custom result processing.

Passes through input frames and attaches the raw inference results to each frame's metadata.

Functions

on_result(result)

on_result(result)

Append the inference result to the input frame's metadata and send it downstream.

Parameters:

  • result (InferenceResults): The inference result for the current frame. Required.
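
In a pipeline, AiSimpleGizmo typically sits between a video source and a display or saver. The sketch below uses placeholder model and zoo details and assumes the Composition/>> connection API from the Streams Base page.

    # Minimal sketch: detect objects on a webcam stream and display the overlays.
    import degirum as dg
    import degirum_tools.streams as dgstreams

    model = dg.load_model(
        model_name="<detection model name>",  # placeholder
        inference_host_address="@cloud",      # placeholder: cloud inference
        zoo_url="<model zoo URL>",            # placeholder
        token="<cloud API token>",            # placeholder
    )

    source = dgstreams.VideoSourceGizmo(0)
    detector = dgstreams.AiSimpleGizmo(model)
    display = dgstreams.VideoDisplayGizmo("Detections", show_ai_overlay=True, show_fps=True)

    dgstreams.Composition(source >> detector >> display).start()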

AiObjectDetectionCroppingGizmo

AiObjectDetectionCroppingGizmo

Bases: Gizmo

Gizmo that crops detected objects from frames of an object detection model.

Each input frame with object detection results yields one or more cropped images as output.

Output

  • Image: The cropped portion of the original image corresponding to a detected object.

  • Meta-info: A dictionary containing:

    • original_result: Reference to the original detection result (InferenceResults) for the frame.

    • cropped_result: The detection result entry for this specific crop.

    • cropped_index: The index of this object in the original results list.

    • is_last_crop: True if this crop is the last one for the frame.

Note

cropped_index and is_last_crop are only present if at least one object is detected in the frame.

The validate_bbox() method can be overridden in subclasses to filter out undesirable detections.

Functions

__init__(labels, ...)

__init__(labels, *, send_original_on_no_objects=True, crop_extent=0.0, crop_extent_option=CropExtentOptions.ASPECT_RATIO_NO_ADJUSTMENT, crop_aspect_ratio=1.0, stream_depth=10, allow_drop=False)

Constructor.

Parameters:

  • labels (List[str]): List of class labels to process. Only objects whose class is in this list will be cropped. Required.
  • send_original_on_no_objects (bool): If True, when no objects are detected in a frame, the original frame is sent through. Defaults to True.
  • crop_extent (float): Extra padding around the bounding box, as a percentage of the bbox size. Defaults to 0.0.
  • crop_extent_option (CropExtentOptions): Method for applying the crop extent (e.g., aspect ratio adjustment). Defaults to CropExtentOptions.ASPECT_RATIO_NO_ADJUSTMENT.
  • crop_aspect_ratio (float): Desired aspect ratio (W/H) for the cropped images. Defaults to 1.0.
  • stream_depth (int): Depth of the input frame queue. Defaults to 10.
  • allow_drop (bool): If True, allow dropping frames on overflow. Defaults to False.

run

run()

Run the object cropping loop.

For each input frame, finds all detected objects (matching the specified labels and passing validation) and sends out a cropped image for each. If no objects are detected and send_original_on_no_objects is True, the original frame is forwarded.

validate_bbox(result, ...)

validate_bbox(result, idx)

Decide whether a detected object should be cropped (can be overridden in subclasses).

Parameters:

  • result (InferenceResults): The detection result for the frame. Required.
  • idx (int): The index of the object in result.results to validate. Required.

Returns:

  • bool: True if the object should be cropped; False if it should be skipped.
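
For example, a hypothetical subclass could reject detections below a minimum pixel size; the bbox layout ([x1, y1, x2, y2] in each entry of result.results) follows the usual PySDK detection format.

    # Minimal sketch: skip detections whose bounding box is smaller than min_side pixels.
    import degirum_tools.streams as dgstreams

    class MinSizeCroppingGizmo(dgstreams.AiObjectDetectionCroppingGizmo):
        def __init__(self, labels, *, min_side=32, **kwargs):
            super().__init__(labels, **kwargs)
            self._min_side = min_side

        def validate_bbox(self, result, idx):
            x1, y1, x2, y2 = result.results[idx]["bbox"]
            return (x2 - x1) >= self._min_side and (y2 - y1) >= self._min_side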

CropCombiningGizmo

CropCombiningGizmo

Bases: Gizmo

Gizmo to combine original frames with their after-crop results.

Expects N+1 inputs: one input stream of original frames (index 0), and N input streams of inference results from cropped images. This gizmo synchronizes and attaches the after-crop inference results back to each original frame's metadata.

Functions

__init__(crop_inputs_num=1, ...)

__init__(crop_inputs_num=1, *, stream_depth=10)

Constructor.

Parameters:

  • crop_inputs_num (int): Number of crop result input streams (excluding the original frame stream). Defaults to 1.
  • stream_depth (int): Depth for each crop input stream's queue. Defaults to 10.

_adjust_results(orig_result, ...)

_adjust_results(orig_result, bbox_idx, cropped_results)

Adjust inference results from a crop to the original image's coordinate space.

This converts the coordinates (e.g., bounding boxes, landmarks) of inference results obtained on a cropped image back to the coordinate system of the original image.

Parameters:

  • orig_result (InferenceResults): The original detection result (InferenceResults) for the full frame. Required.
  • bbox_idx (int): The index of the object in the original result list. Required.
  • cropped_results (list): A list of InferenceResults from the cropped image's inference. Required.

Returns:

  • list: A list of adjusted InferenceResults corresponding to the original image coordinates.

_clone_result(result)

_clone_result(result)

Clone an inference result, deep-copying its _inference_results list.

Parameters:

  • result (InferenceResults): The inference result to clone. Required.

Returns:

  • InferenceResults: A cloned inference result with a deep-copied results list.

run

run()

Run the crop combining loop.

Synchronizes original frames with their corresponding after-crop result streams, merges the inference results from crops back into the original frame's metadata, and sends the updated frame downstream.
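
A typical use is a two-stage pipeline: detect objects, crop them, run a second model on each crop, and merge the per-crop results back into the original frames. The wiring below (fanning the source out to the combiner's input 0 and addressing inputs as combiner[0] and combiner[1]) is an assumption based on the N+1 input description; the authoritative connection API is on the Streams Base page, and the model names are placeholders.

    # Minimal sketch: detect -> crop -> classify -> combine -> display.
    # Assumptions: combiner[i] addresses input i; one gizmo's output may feed several inputs.
    import degirum as dg
    import degirum_tools.streams as dgstreams

    detect_model = dg.load_model(model_name="<detection model>", inference_host_address="@cloud")        # placeholders
    classify_model = dg.load_model(model_name="<classification model>", inference_host_address="@cloud")  # placeholders

    source = dgstreams.VideoSourceGizmo(0)
    detector = dgstreams.AiSimpleGizmo(detect_model)
    cropper = dgstreams.AiObjectDetectionCroppingGizmo(["face"], crop_extent=20.0)  # "face" is a placeholder label
    classifier = dgstreams.AiSimpleGizmo(classify_model)
    combiner = dgstreams.CropCombiningGizmo(crop_inputs_num=1)
    display = dgstreams.VideoDisplayGizmo("Two-stage", show_ai_overlay=True)

    dgstreams.Composition(
        source >> detector >> cropper >> classifier >> combiner[1],  # crop results into input 1
        source >> combiner[0],                                       # original frames into input 0
        combiner >> display,
    ).start()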

AiResultCombiningGizmo

AiResultCombiningGizmo

Bases: Gizmo

Gizmo to combine inference results from multiple AI gizmos of the same type.

Functions

__init__(inp_cnt, ...)

__init__(inp_cnt, *, stream_depth=10)

Constructor.

Parameters:

  • inp_cnt (int): Number of input result streams to combine. Required.
  • stream_depth (int): Depth of each input stream's queue. Defaults to 10.

run

run()

Run the result combining loop.

Collects inference results from all input streams, merges their results into a single combined result, and sends it downstream.
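
For instance, two detection models can run on the same frames with their results merged into one overlay; as in the previous sketch, the fan-out and the combiner[i] input addressing are assumptions, and the model names are placeholders.

    # Minimal sketch: merge results from two detectors running on the same frames.
    import degirum as dg
    import degirum_tools.streams as dgstreams

    model_a = dg.load_model(model_name="<model A>", inference_host_address="@cloud")  # placeholder
    model_b = dg.load_model(model_name="<model B>", inference_host_address="@cloud")  # placeholder

    source = dgstreams.VideoSourceGizmo(0)
    det_a = dgstreams.AiSimpleGizmo(model_a)
    det_b = dgstreams.AiSimpleGizmo(model_b)
    combiner = dgstreams.AiResultCombiningGizmo(2)
    display = dgstreams.VideoDisplayGizmo("Combined", show_ai_overlay=True)

    dgstreams.Composition(
        source >> det_a >> combiner[0],
        source >> det_b >> combiner[1],
        combiner >> display,
    ).start()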

AiPreprocessGizmo

AiPreprocessGizmo

Bases: Gizmo

Preprocessing gizmo that applies a model's preprocessor to input images.

It generates preprocessed image data to be fed into the model.

Output

  • Data: Preprocessed image bytes ready for model input.

  • Meta-info: Dictionary including:

    • image_input: The original input image.

    • converter: A function to convert coordinates from model output back to the original image.

    • image_result: The preprocessed image (present only if the model is configured to provide it).

Attributes:

  • key_image_input (str): Metadata key for the original input image.
  • key_converter (str): Metadata key for the coordinate conversion function.
  • key_image_result (str): Metadata key for the preprocessed image.

Functions

__init__(model, ...)

__init__(model, *, stream_depth=10, allow_drop=False)

Constructor.

Parameters:

  • model (Model): The model object (PySDK model) whose preprocessor will be used. Required.
  • stream_depth (int): Depth of the input frame queue. Defaults to 10.
  • allow_drop (bool): If True, allow dropping frames on overflow. Defaults to False.

run

run()

Run the preprocessing loop.

Applies the model's preprocessor to each input frame and sends the resulting data (and updated meta-info) downstream.

AiAnalyzerGizmo

AiAnalyzerGizmo

Bases: Gizmo

Gizmo to apply a chain of analyzers to an inference result, with optional filtering.

Each analyzer (e.g., EventDetector, EventNotifier) processes the inference result and may add events or notifications. If filters are provided, only results that contain at least one of the specified events/notifications are passed through.

Functions

__init__(analyzers, ...)

__init__(analyzers, *, filters=None, stream_depth=10, allow_drop=False)

Constructor.

Parameters:

  • analyzers (List): List of analyzer objects to apply (e.g., EventDetector, EventNotifier instances). Required.
  • filters (set): A set of event names or notification names to filter results. Only results that have at least one of these events or notifications will be forwarded (others are dropped). Defaults to None (no filtering).
  • stream_depth (int): Depth of the input frame queue. Defaults to 10.
  • allow_drop (bool): If True, allow dropping frames on overflow. Defaults to False.

run

run()

Run the analyzer processing loop.

For each input frame, clones its inference result and runs all analyzers on it (which may add events/notifications). If filters are specified, the result is dropped unless it contains at least one of the specified events or notifications. The possibly modified inference result is appended to the frame's metadata and sent downstream. After processing all frames, all analyzers are finalized.
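
The sketch below attaches a small custom analyzer to a detection stream. It assumes the ResultAnalyzerBase interface (an analyze(result) hook) referenced by the Analyzers pages, and uses placeholder model details; the attribute added to the result is purely illustrative.

    # Minimal sketch: annotate each result with a person count via a custom analyzer.
    import degirum as dg
    import degirum_tools
    import degirum_tools.streams as dgstreams

    class PersonCounter(degirum_tools.ResultAnalyzerBase):
        def analyze(self, result):
            # hypothetical: count detections labeled "person" and stash the number on the result
            result.person_count = sum(1 for r in result.results if r.get("label") == "person")

    model = dg.load_model(model_name="<detection model>", inference_host_address="@cloud")  # placeholders

    source = dgstreams.VideoSourceGizmo(0)
    detector = dgstreams.AiSimpleGizmo(model)
    analyzer = dgstreams.AiAnalyzerGizmo([PersonCounter()])
    display = dgstreams.VideoDisplayGizmo("Analyzed", show_ai_overlay=True)

    dgstreams.Composition(source >> detector >> analyzer >> display).start()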

SinkGizmo

SinkGizmo

Bases: Gizmo

Sink gizmo that receives results and accumulates them in an internal queue.

This gizmo does not send data further down the pipeline. Instead, it stores all incoming results so they can be retrieved (for example, by iterating over the gizmo's output in the main thread).

Functions

__call__

__call__()

Retrieve the internal queue for iteration.

Returns:

  • Stream: The input Stream (queue) of this sink gizmo, which can be iterated to get collected results.

__init__(*, ...)

__init__(*, stream_depth=10, allow_drop=False)

Constructor.

Parameters:

  • stream_depth (int): Depth of the input queue. Defaults to 10.
  • allow_drop (bool): If True, allow dropping frames on overflow. Defaults to False.

run

run()

Run gizmo (no operation).

Immediately returns, as the sink simply collects incoming data without processing.
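
The sketch below drains a pipeline's results in the main thread instead of displaying them. It assumes Composition.start() accepts wait=False to return immediately and that StreamData items expose data and meta attributes, per the Streams Base page; the file path and model details are placeholders.

    # Minimal sketch: run inference on a file and consume results in the main thread.
    import degirum as dg
    import degirum_tools.streams as dgstreams

    model = dg.load_model(model_name="<detection model>", inference_host_address="@cloud")  # placeholders

    source = dgstreams.VideoSourceGizmo("input.mp4")  # placeholder path
    detector = dgstreams.AiSimpleGizmo(model)
    sink = dgstreams.SinkGizmo()

    composition = dgstreams.Composition(source >> detector >> sink)
    composition.start(wait=False)  # assumption: non-blocking start

    for item in sink():            # iterate the sink's internal queue
        print(item.meta)           # each item is a StreamData with frame data and metadata

    composition.stop()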
