Streams

This API Reference is based on DeGirum Tools version 0.16.5.

Streaming Toolkit Overview

This module provides a streaming toolkit for building multi-threaded processing pipelines, where data (images, video frames, or arbitrary objects) flows through a series of processing blocks called gizmos. The toolkit allows you to:

  • Acquire or generate data from one or more sources (e.g., camera feeds, video files).

  • Process the data in a pipeline (possibly in parallel), chaining multiple gizmos together.

  • Optionally display or save the processed data, or feed it into AI inference models.

  • Orchestrate everything in a Composition, which manages the lifecycle (threads) of all connected gizmos.

Core Concepts

  1. Stream:

    • Represents a queue of data items (StreamData objects), such as frames from a camera or images from a directory.

    • Gizmos push (put) data into Streams or read (get) data from them.

    • Streams can optionally drop data (the oldest item) if they reach a specified maximum queue size, preventing pipeline bottlenecks.

  2. Gizmo:

    • A gizmo is a discrete processing node in the pipeline.

    • Each gizmo runs in its own thread, pulling data from its input stream(s), processing it, and pushing results to its output stream(s).

    • Example gizmos include:

      • Video-sourcing gizmos that read frames from a webcam or file.

      • AI inference gizmos that run a model on incoming frames.

      • Video display or saving gizmos that show or store processed frames.

      • Gizmos that perform transformations (resizing, cropping, analyzing) on data.

    • Gizmos communicate via Streams. A gizmo's output Stream can feed multiple downstream gizmos.

    • Gizmos keep a list of input streams that they are connected to.

    • Gizmos own their input streams. (A minimal custom-gizmo sketch follows this list.)

  3. Composition:

    • A Composition is a container that holds and manages multiple gizmos (and their Streams).

    • Once gizmos are connected, you can call composition.start() to begin processing. Each gizmo's run() method executes in a dedicated thread.

    • Call composition.stop() to gracefully stop processing and wait for threads to finish.

  4. StreamData and StreamMeta:

    • Each item in the pipeline is encapsulated by a StreamData object, which holds:

      • data: The actual payload (e.g., an image array, a frame).

      • meta: A StreamMeta object that can hold extra metadata from each gizmo (e.g., a detection result, timestamps, bounding boxes, etc.).

        • Gizmos can append to StreamMeta so that metadata accumulates across the pipeline.

  5. Metadata Flow (StreamMeta):

    • How StreamMeta works:

      • StreamMeta itself is a container that can hold any number of "meta info" objects.

      • Each meta info object is "tagged" with one or more string tags, such as "dgt_video", "dgt_inference", etc.

      • You append new meta info by calling meta.append(my_info, [list_of_tags]).

      • You can retrieve meta info objects by searching with meta.find("tag") (returns all matches) or meta.find_last("tag") (returns the most recent match).

      • Important: A gizmo generally clones (.clone()) the incoming StreamMeta before appending its own metadata, to avoid upstream side effects (see the sketch after this list).

      • This design lets each gizmo add new metadata, while preserving what was provided by upstream gizmos.

    • High-Level Example:

      • A camera gizmo outputs frames with meta tagged "dgt_video" containing properties like FPS, width, height, etc.

      • An AI inference gizmo downstream takes StreamData(data=frame, meta=...), runs inference, then:

        1. Clones the metadata container.

        2. Appends its inference results under the "dgt_inference" tag.

      • If two AI gizmos run in series, both will append metadata with the same "dgt_inference" tag. A later consumer can call meta.find("dgt_inference") to get both sets of results or meta.find_last("dgt_inference") to get the most recent result.
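To make the gizmo contract concrete, here is a minimal sketch of a custom gizmo. It assumes a Gizmo base class whose constructor takes a list of (queue size, allow-drop) input-stream definitions and which provides get_input() and send_result() helpers; these signatures are assumptions based on this page's description, so check the Streams Base reference for the exact API.

import cv2
from degirum_tools.streams import Gizmo, StreamData

class GrayscaleGizmo(Gizmo):
    """Hypothetical gizmo that converts incoming frames to grayscale."""

    def __init__(self):
        # One input stream; (0, False) is assumed to mean
        # "unbounded queue, do not drop items".
        super().__init__([(0, False)])

    def run(self):
        # Pull items from input stream 0 until the pipeline stops.
        for item in self.get_input(0):
            gray = cv2.cvtColor(item.data, cv2.COLOR_BGR2GRAY)
            # Forward the new payload along with the upstream metadata.
            self.send_result(StreamData(gray, item.meta))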
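And here is a short sketch of the metadata flow just described, using the append(), clone(), find(), and find_last() calls from this section (the bare StreamMeta() constructor is an assumption):

from degirum_tools.streams import StreamMeta

meta = StreamMeta()
# A camera gizmo tags its stream properties with "dgt_video".
meta.append({"fps": 30, "width": 1280, "height": 720}, ["dgt_video"])

# A downstream gizmo clones before appending, so upstream holders
# of `meta` do not see its additions.
my_meta = meta.clone()
my_meta.append({"detections": []}, ["dgt_inference"])

video_info = my_meta.find_last("dgt_video")    # most recent "dgt_video" entry
all_results = my_meta.find("dgt_inference")    # all "dgt_inference" entries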

Basic Usage Example

A simple pipeline might look like this:

from degirum_tools.streams import Composition
from degirum_tools.streams_gizmos import VideoSourceGizmo, VideoDisplayGizmo
import cv2
import time

# Create gizmos. On a machine with a webcam attached, VideoSourceGizmo(0)
# typically uses that camera as the video source.
video_source = VideoSourceGizmo(0)
video_display = VideoDisplayGizmo("Camera Preview")

# Connect them
video_source >> video_display

# Build composition
comp = Composition(video_source, video_display)
comp.start(wait=False)  # Don't block main thread

start_time = time.time()
while time.time() - start_time < 10:  # Run for 10 seconds
    cv2.waitKey(5)  # Wait time of 5 ms. Let OpenCV handle window events

comp.stop()
cv2.destroyAllWindows()

Key Steps

  1. Create your gizmos (e.g., VideoSourceGizmo, VideoDisplayGizmo, AI inference gizmos, etc.).

  2. Connect them together using the >> operator (or the connect_to() method) to form a processing graph. E.g.:

    source >> processor >> sink
    
  3. Initialize a Composition with the top-level gizmo(s).

  4. Start the Composition to launch each gizmo in its own thread.

  5. (Optional) Wait for the pipeline to finish or perform other tasks. You can query statuses, queue sizes, or get partial results in real time.

  6. Stop the pipeline when done.

Advanced Topics

  • Non-blocking vs Blocking: Streams can drop items if configured (allow_drop=True) to handle real-time feeds.

  • Multiple Inputs or Outputs: Some gizmos can have multiple input streams and/or broadcast results to multiple outputs (see the sketch below).

  • Error Handling: If any gizmo encounters an error, the Composition can stop the whole pipeline, allowing you to handle exceptions centrally.
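As a sketch of broadcasting, the snippet below connects one source to two displays; it assumes that applying >> to the same source twice attaches both consumers to its output stream, per the note above that a gizmo's output Stream can feed multiple downstream gizmos:

from degirum_tools.streams import Composition
from degirum_tools.streams_gizmos import VideoSourceGizmo, VideoDisplayGizmo

source = VideoSourceGizmo(0)
view_a = VideoDisplayGizmo("View A")
view_b = VideoDisplayGizmo("View B")

# Both displays receive every frame produced by the source.
source >> view_a
source >> view_b

comp = Composition(source, view_a, view_b)
# Start and stop the composition as in the basic usage example above.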

For practical code examples, see the dgstreams_demo.ipynb notebook in the PySDKExamples repository.

Functions

load_composition(description, ...)

load_composition(description, global_context=None, local_context=None)

Load a Composition of gizmos and connections from a description.

The description can be provided as a JSON or YAML file path, a YAML string, or a Python dictionary conforming to the JSON schema defined in composition_definition_schema.

Parameters:

  • description (str or dict, required): Text description of the Composition in YAML format, or a path to a .json, .yaml, or .yml file containing such a description, or a Python dictionary with the same structure.

  • global_context (dict): Global context for evaluating expressions (like using globals()). Defaults to None.

  • local_context (dict): Local context for evaluating expressions (like using locals()). Defaults to None.

Returns:

  • Composition: A Composition object representing the described gizmo pipeline.
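For example, a minimal call might look like the following; "pipeline.yaml" is a hypothetical file path whose contents must conform to composition_definition_schema:

from degirum_tools.streams import load_composition

# Expressions inside the description are evaluated against the
# supplied contexts.
comp = load_composition("pipeline.yaml", global_context=globals(), local_context=locals())

comp.start(wait=False)  # as in the basic usage example above
# ... do other work while the pipeline runs ...
comp.stop()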
