
© 2025 DeGirum Corp.


Video Support

DeGirum Tools API Reference Guide. Read, stream, display and save video or RTSP sources.

This API Reference is based on DeGirum Tools version 0.18.0.

Video Support Module Overview

This module provides comprehensive video stream handling capabilities, including capturing from various sources, saving to files, and managing video clips. It supports local cameras, IP cameras, video files, and YouTube videos.

Key Features

  • Multi-Source Support: Capture from local cameras, IP cameras, video files, and YouTube

  • Video Writing: Save video streams with configurable quality and format

  • Frame Extraction: Convert video files to JPEG sequences

  • Clip Management: Save video clips triggered by events with pre/post buffers

  • FPS Control: Frame rate management for both capture and writing

  • Stream Properties: Query video stream dimensions and frame rate

Typical Usage

  1. Open video streams with open_video_stream()

  2. Process frames using video_source() generator

  3. Save videos with VideoWriter or open_video_writer()

  4. Extract frames using video2jpegs()

  5. Save event-triggered clips with ClipSaver

Integration Notes

  • Works with OpenCV's VideoCapture and VideoWriter

  • Supports YouTube videos through pafy

  • Handles both real-time and file-based video sources

  • Provides context managers for safe resource handling

  • Thread-safe for concurrent video operations

Key Classes

  • VideoWriter: Main class for saving video streams

  • ClipSaver: Manages saving video clips with pre/post buffers

Configuration Options

  • Video quality and format settings

  • Frame rate control

  • Clip duration and buffer size

  • Output file naming and paths

Classes

VideoWriter

VideoWriter

Video stream writer with configurable quality and format.

This class provides functionality to save video streams to files with configurable dimensions, frame rate, and format. It supports both OpenCV and PIL image formats as input.

Use open_video_writer() to create a video writer instance with proper cleanup.

Attributes:

  • filename (str): Output video file path.

  • width (int): Video width in pixels.

  • height (int): Video height in pixels.

  • fps (float): Target frame rate.

  • count (int): Number of frames written.

Attributes

count

count (property)

Get the number of frames written.

Returns:

  • int: Number of frames written to the video file.

Functions

__enter__

__enter__()

Enter the context manager.

Returns:

  • VideoWriter: The current instance.

__exit__(exc_type, ...)

__exit__(exc_type, exc_val, exc_tb)

Exit the context manager.

This method ensures the video writer is properly released when exiting the context.

__init__(fname, ...)

__init__(fname, w=0, h=0, fps=30.0)

Initialize the video writer.

Parameters:

  • fname (str): Output video file path. Required.

  • w (int): Video width in pixels. If 0, the input frame width is used. Defaults to 0.

  • h (int): Video height in pixels. If 0, the input frame height is used. Defaults to 0.

  • fps (float): Target frame rate. Defaults to 30.0.

Raises:

  • Exception: If the video writer cannot be created.

release

release()

Release the video writer resources.

This method should be called when finished writing to ensure all resources are properly released.

write(img)

write(img)

Write a frame to the video file.

This method writes a single frame to the video file. The frame can be in either OpenCV (BGR) or PIL format.

Parameters:

  • img (ImageType): Frame to write. Can be an OpenCV image (np.ndarray) or a PIL Image. Required.

Raises:

  • Exception: If the frame cannot be written.

ClipSaver

ClipSaver

Video clip saver with pre/post trigger buffering.

This class provides functionality to save video clips triggered by events, with configurable pre-trigger and post-trigger buffers. It maintains a circular buffer of frames and saves clips when triggers occur.

This class is primarily used by two other components in DeGirum Tools.

  1. ClipSavingAnalyzer wraps ClipSaver and triggers clips from event names found in EventNotifier or EventDetector results.

  2. EventNotifier can instantiate and use ClipSaver to record clips when a notification fires, optionally uploading those clips through NotificationServer.

Attributes:

  • clip_duration (int): Total length of output clips in frames.

  • file_prefix (str): Base path for saved clip files.

  • pre_trigger_delay (int): Frames to include before the trigger.

  • embed_ai_annotations (bool): Whether to include AI annotations in clips.

  • save_ai_result_json (bool): Whether to save AI results as JSON.

  • target_fps (float): Frame rate for saved clips.

Functions

__init__(clip_duration, ...)

__init__(clip_duration, file_prefix, *, pre_trigger_delay=0, embed_ai_annotations=True, save_ai_result_json=True, target_fps=30.0)

Initialize the clip saver.

Parameters:

  • clip_duration (int): Total length of output clips in frames (pre-buffer + post-buffer). Required.

  • file_prefix (str): Base path for saved clip files. Frame number and extension are appended automatically. Required.

  • pre_trigger_delay (int): Frames to include before the trigger. Defaults to 0.

  • embed_ai_annotations (bool): If True, use .image_overlay to include bounding boxes/labels in the clip. Defaults to True.

  • save_ai_result_json (bool): If True, save a JSON file with raw inference results alongside the video. Defaults to True.

  • target_fps (float): Frame rate for saved clips. Defaults to 30.0.

Raises:

  • ValueError: If clip_duration is not positive.

  • ValueError: If pre_trigger_delay is negative or exceeds clip_duration.

forward(result, ...)

forward(result, triggers=[])

Process a frame and save clips if triggers occur.

This method adds the current frame to the buffer and saves clips if any triggers are present. The saved clips include pre-trigger frames from the buffer.

Parameters:

  • result (InferenceResults): InferenceResults object containing the current frame and detection results. Required.

  • triggers (List[str]): List of trigger names that occurred in this frame. Defaults to [].

Returns:

  • Tuple[List[str], bool]: List of saved clip filenames and whether any clips were saved.

Raises:

  • Exception: If the frame cannot be saved.

join_all_saver_threads

join_all_saver_threads()

Wait for all clip saving threads to complete.

This method blocks until all background clip saving threads have finished. It's useful to call this before exiting to ensure all clips are properly saved.

Returns:

  • int: Number of threads that were joined.

MediaServer

MediaServer

Manages the MediaMTX media server as a subprocess.

Starts MediaMTX using a provided config file path. If no config path is given, it runs from the MediaMTX binary's directory.

The MediaMTX binary must be installed and available on the system path. Refer to https://github.com/bluenviron/mediamtx for installation instructions.

Functions

__del__

__del__()

Destructor to ensure the media server is stopped.

__enter__

__enter__()

Enables use with context manager.

__exit__(exc_type, ...)

__exit__(exc_type, exc_val, exc_tb)

Stops server when context exits.

__init__(*, ...)

__init__(*, config_path=None, verbose=False)

Initializes and starts the server.

Parameters:

  • config_path (Optional[str]): Path to an existing MediaMTX YAML config file. If not provided, runs with the config file from the binary directory. Defaults to None.

  • verbose (bool): If True, shows media server output in the console. Defaults to False.

stop

stop()

Stops the media server process.

VideoStreamer

VideoStreamer

Streams video frames to an RTSP server using FFmpeg. FFmpeg must be installed and available on the system path.

Functions

__del__

__del__()

Destructor to ensure the streamer is stopped.

__enter__

__enter__()

Enables use with context manager.

__exit__(exc_type, ...)

__exit__(exc_type, exc_value, traceback)

Stops streamer when context exits.

__init__(rtsp_url, ...)

__init__(rtsp_url, width, height, *, fps=30.0, pix_fmt='bgr24', verbose=False)

Initializes the video streamer.

Parameters:

  • rtsp_url (str): RTSP URL to stream to (e.g., 'rtsp://user:password@hostname:port/stream'). Typically, you start a media server with the MediaServer class and then use its RTSP URL, such as rtsp://localhost:8554/mystream. Required.

  • width (int): Width of the video frames in pixels. Required.

  • height (int): Height of the video frames in pixels. Required.

  • fps (float): Frames per second for the stream. Defaults to 30.0.

  • pix_fmt (str): Pixel format for the input frames. Can be 'bgr24' or 'rgb24'. Defaults to 'bgr24'.

  • verbose (bool): If True, shows FFmpeg output in the console. Defaults to False.

stop

stop()

Stops the streamer process.

write(img)

write(img)

Write a frame to the RTSP stream.

Parameters:

  • img (ImageType): Frame to write. Can be an OpenCV image (np.ndarray) or a PIL Image. The pixel format must match the one specified in the constructor (default is 'bgr24'). Required.

Functions

open_video_stream(video_source=None, ...)

open_video_stream(video_source=None, max_yt_quality=0)

Open a video stream from various sources.

This function provides a context manager for opening video streams from different sources, including local cameras, IP cameras, video files, and YouTube videos. The stream is automatically closed when the context is exited.

Parameters:

  • video_source (Union[int, str, Path, None]): Video source specification. Defaults to None. Can be:
    • int: 0-based index of a local camera
    • str: IP camera URL (rtsp://user:password@hostname)
    • str: local video file path
    • str: URL of an mp4 video file
    • str: YouTube video URL
    • None: use the environment variable or the default camera

  • max_yt_quality (int): Maximum video quality for YouTube videos in pixels (height). If 0, use the best quality. Defaults to 0.

Yields:

  • cv2.VideoCapture: OpenCV video capture object.

Raises:

  • Exception: If the video stream cannot be opened.

get_video_stream_properties(video_source)

get_video_stream_properties(video_source)

Return the dimensions and frame rate of a video source.

Parameters:

  • video_source (Union[int, str, Path, None, VideoCapture]): Video source identifier or an already opened VideoCapture object. Required.

Returns:

  • tuple: (width, height, fps) describing the video stream.

video_source(stream, ...)

video_source(stream, fps=None)

Yield frames from a video stream.

Parameters:

  • stream (VideoCapture): Open video stream. Required.

  • fps (Optional[float]): Target frame rate cap. Defaults to None.

Yields:

  • ndarray: Frames from the stream.

create_video_writer(fname, ...)

create_video_writer(fname, w=0, h=0, fps=30.0)

Create and return a video writer.

Parameters:

  • fname (str): Output filename for the video file. Required.

  • w (int): Frame width in pixels. 0 uses the width of the first frame. Defaults to 0.

  • h (int): Frame height in pixels. 0 uses the height of the first frame. Defaults to 0.

  • fps (float): Target frames per second. Defaults to 30.0.

Returns:

  • VideoWriter: Open video writer instance.

open_video_writer(fname, ...)

open_video_writer(fname, w=0, h=0, fps=30.0)

Context manager for VideoWriter.

This function creates a video writer, yields it for use inside the context, and releases it automatically on exit.

Parameters:

  • fname (str): Output filename for the video file. Required.

  • w (int): Frame width in pixels. 0 uses the width of the first frame. Defaults to 0.

  • h (int): Frame height in pixels. 0 uses the height of the first frame. Defaults to 0.

  • fps (float): Target frames per second. Defaults to 30.0.

Yields:

  • VideoWriter: Open video writer instance ready for use.

video2jpegs(video_file, ...)

video2jpegs(video_file, jpeg_path, *, jpeg_prefix='frame_', preprocessor=None)

Convert a video file into a sequence of JPEG images.

Parameters:

  • video_file (str): Path to the input video file. Required.

  • jpeg_path (str): Directory where JPEG files will be stored. Required.

  • jpeg_prefix (str): Prefix for generated image filenames. Defaults to 'frame_'.

  • preprocessor (Callable[[ndarray], ndarray]): Optional function applied to each frame before saving. Defaults to None.

Returns:

  • int: Number of frames written to jpeg_path.
