# Video Support

{% hint style="info" %}
This API Reference is based on DeGirum Tools version 1.2.0.
{% endhint %}

## Video Support Module Overview <a href="#video-support-module-overview" id="video-support-module-overview"></a>

This module provides comprehensive video stream handling capabilities, including capturing from various sources, saving to files, and managing video clips. It supports local cameras, IP cameras, video files, and YouTube videos.

**Key Features:**

* **Multi-Source Support**: Capture from local cameras, IP cameras, video files, and YouTube
* **Video Writing**: Save video streams with configurable quality and format
* **Frame Extraction**: Convert video files to JPEG sequences
* **Clip Management**: Save video clips triggered by events with pre/post buffers
* **FPS Control**: Frame rate management for both capture and writing
* **Stream Properties**: Query video stream dimensions and frame rate

**Typical Usage:**

1. Open video streams with `open_video_stream()`
2. Process frames using the `video_source()` generator
3. Save videos with `VideoWriter` or `open_video_writer()`
4. Extract frames using `video2jpegs()`
5. Save event-triggered clips with `ClipSaver`

**Integration Notes:**

* Works with OpenCV's VideoCapture and VideoWriter
* Supports YouTube videos through pafy
* Handles both real-time and file-based video sources
* Provides context managers for safe resource handling
* Thread-safe for concurrent video operations

**Key Classes:**

* `VideoWriter`: Main class for saving video streams
* `ClipSaver`: Manages saving video clips with pre/post buffers

**Configuration Options:**

* Video quality and format settings
* Frame rate control
* Clip duration and buffer size
* Output file naming and paths

## Functions <a href="#functions" id="functions"></a>

#### create\_video\_stream(video\_source=None, ...) <a href="#create_video_stream" id="create_video_stream"></a>

`create_video_stream(video_source=None, *, max_yt_quality=0, use_gstreamer=False)`

Create a video stream from various sources.

This function creates and returns a video stream object for different source types, including local cameras, IP cameras, video files, and YouTube videos.

Parameters:

| Name             | Type                                                         | Description                                                                                                                                                                                                                                                                                                                          | Default |
| ---------------- | ------------------------------------------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | ------- |
| `video_source`   | `Union[int, str, Path, None, VideoCapture, VideoCaptureGst]` | Video source specification: - int: 0-based index for local cameras - str: IP camera URL (rtsp\://user:password\@hostname) - str: Local video file path - str: URL to mp4 video file - str: YouTube video URL - None: Use environment variable or default camera - cv2.VideoCapture or VideoCaptureGst: Pass through existing capture | `None`  |
| `max_yt_quality` | `int`                                                        | Maximum video quality for YouTube videos in pixels (height). If 0, use best quality. Defaults to 0.                                                                                                                                                                                                                                  | `0`     |
| `use_gstreamer`  | `bool`                                                       | If True, use GStreamer backend for video files. Only applies to .mp4 files. Defaults to False.                                                                                                                                                                                                                                       | `False` |

Returns:

| Type                                   | Description                                                |
| -------------------------------------- | ---------------------------------------------------------- |
| `Union[VideoCapture, VideoCaptureGst]` | cv2.VideoCapture or VideoCaptureGst: Video capture object. |

Raises:

| Type        | Description                           |
| ----------- | ------------------------------------- |
| `Exception` | If the video stream cannot be opened. |

#### detect\_rtsp\_cameras(subnet\_cidr, ...) <a href="#detect_rtsp_cameras" id="detect_rtsp_cameras"></a>

`detect_rtsp_cameras(subnet_cidr, *, timeout_s=0.5, port=554, max_workers=16)`

Scan a subnet for RTSP cameras by probing the given port with an RTSP OPTIONS request.

Parameters:

| Name          | Type    | Description                                          | Default    |
| ------------- | ------- | ---------------------------------------------------- | ---------- |
| `subnet_cidr` | `str`   | Subnet in CIDR notation (e.g., '192.168.0.0/24').    | *required* |
| `timeout_s`   | `float` | Timeout for each connection attempt in seconds.      | `0.5`      |
| `port`        | `int`   | Port to probe for RTSP cameras.                      | `554`      |
| `max_workers` | `int`   | Maximum number of concurrent threads for scanning.   | `16`       |

Returns:

| Type   | Description                                                                                                                          |
| ------ | ------------------------------------------------------------------------------------------------------------------------------------ |
| `dict` | Dictionary with IP addresses as keys and properties as values. Properties include `require_auth`, indicating whether authentication is required. |

#### open\_video\_stream(video\_source=None, ...) <a href="#open_video_stream" id="open_video_stream"></a>

`open_video_stream(video_source=None, *, max_yt_quality=0, use_gstreamer=False)`

Open a video stream from various sources.

This function provides a context manager for opening video streams from different sources. The stream is automatically closed when the context is exited. Internally it calls `create_video_stream` to create the stream.

Parameters:

| Name             | Type                                                         | Description                                            | Default |
| ---------------- | ------------------------------------------------------------ | ------------------------------------------------------ | ------- |
| `video_source`   | `Union[int, str, Path, None, VideoCapture, VideoCaptureGst]` | Video source specification (see create\_video\_stream) | `None`  |
| `max_yt_quality` | `int`                                                        | Maximum video quality for YouTube videos               | `0`     |
| `use_gstreamer`  | `bool`                                                       | If True, use GStreamer backend for video files         | `False` |

Yields:

| Type                                   | Description                                                |
| -------------------------------------- | ---------------------------------------------------------- |
| `Union[VideoCapture, VideoCaptureGst]` | cv2.VideoCapture or VideoCaptureGst: Video capture object. |

Raises:

| Type        | Description                           |
| ----------- | ------------------------------------- |
| `Exception` | If the video stream cannot be opened. |

#### get\_video\_stream\_properties(video\_source) <a href="#get_video_stream_properties" id="get_video_stream_properties"></a>

`get_video_stream_properties(video_source)`

Return the dimensions and frame rate of a video source.

Parameters:

| Name           | Type                                                         | Description                                                  | Default    |
| -------------- | ------------------------------------------------------------ | ------------------------------------------------------------ | ---------- |
| `video_source` | `Union[int, str, Path, None, VideoCapture, VideoCaptureGst]` | Video source identifier or an already opened capture object. | *required* |

Returns:

| Type    | Description                                       |
| ------- | ------------------------------------------------- |
| `tuple` | (width, height, fps) describing the video stream. |

#### video\_source(stream, ...) <a href="#video_source" id="video_source"></a>

`video_source(stream, fps=None, include_metadata=False)`

Yield frames from a video stream.

Parameters:

| Name               | Type                                   | Description                                                                                                                                                    | Default    |
| ------------------ | -------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------- | ---------- |
| `stream`           | `Union[VideoCapture, VideoCaptureGst]` | Open video stream (cv2.VideoCapture or VideoCaptureGst).                                                                                                       | *required* |
| `fps`              | `Optional[float]`                      | Target frame rate cap.                                                                                                                                         | `None`     |
| `include_metadata` | `bool`                                 | If True, yields (frame, metadata) tuples where metadata contains timestamp, frame\_id, fps, frame dimensions. If False, yields only frames. Defaults to False. | `False`    |

Yields:

| Type                                   | Description                                                                                                                                                     |
| -------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `Union[ndarray, Tuple[ndarray, dict]]` | If include\_metadata is False: Frames from the stream (np.ndarray).                                                                                             |
| `Union[ndarray, Tuple[ndarray, dict]]` | If include\_metadata is True: Tuples of (frame, metadata) where metadata is a dict containing 'timestamp', 'frame\_id', 'fps', 'frame\_width', 'frame\_height'. |

#### create\_video\_writer(fname, ...) <a href="#create_video_writer" id="create_video_writer"></a>

`create_video_writer(fname, w=0, h=0, fps=30.0)`

Create a `VideoWriter` that saves video frames to the file `fname` with the given frame dimensions and frame rate.

#### open\_video\_writer(fname, ...) <a href="#open_video_writer" id="open_video_writer"></a>

`open_video_writer(fname, w=0, h=0, fps=30.0)`

Context manager counterpart of `create_video_writer`: yields the writer and releases it when the context is exited.

#### video2jpegs(video\_file, ...) <a href="#video2jpegs" id="video2jpegs"></a>

`video2jpegs(video_file, jpeg_path, *, jpeg_prefix='frame_', preprocessor=None)`

Convert a video file into a sequence of JPEG images.

Parameters:

| Name           | Type                           | Description                                                   | Default    |
| -------------- | ------------------------------ | ------------------------------------------------------------- | ---------- |
| `video_file`   | `str`                          | Path to the input video file.                                 | *required* |
| `jpeg_path`    | `str`                          | Directory where JPEG files will be stored.                    | *required* |
| `jpeg_prefix`  | `str`                          | Prefix for generated image filenames. Defaults to `"frame_"`. | `'frame_'` |
| `preprocessor` | `Callable[[ndarray], ndarray]` | Optional function applied to each frame before saving.        | `None`     |

Returns:

| Name  | Type  | Description                              |
| ----- | ----- | ---------------------------------------- |
| `int` | `int` | Number of frames written to `jpeg_path`. |

## Classes <a href="#classes" id="classes"></a>

## VideoWriter <a href="#videowriter" id="videowriter"></a>

`VideoWriter`

Video stream writer with configurable quality and format.

## ClipSaver <a href="#clipsaver" id="clipsaver"></a>

`ClipSaver`

Video clip saver with pre/post trigger buffering.

This class provides functionality to save video clips triggered by events, with configurable pre-trigger and post-trigger buffers. It maintains a circular buffer of frames and saves clips when triggers occur.

This class is primarily used by two other components in DeGirum Tools.

1. ClipSavingAnalyzer wraps ClipSaver and triggers clips from event names found in EventNotifier or EventDetector results.
2. EventNotifier can instantiate and use ClipSaver to record clips when a notification fires, optionally uploading those clips through NotificationServer.

Attributes:

| Name                   | Type    | Description                                 |
| ---------------------- | ------- | ------------------------------------------- |
| `clip_duration`        | `int`   | Total length of output clips in frames.     |
| `file_prefix`          | `str`   | Base path for saved clip files.             |
| `pre_trigger_delay`    | `int`   | Frames to include before trigger.           |
| `embed_ai_annotations` | `bool`  | Whether to include AI annotations in clips. |
| `save_ai_result_json`  | `bool`  | Whether to save AI results as JSON.         |
| `target_fps`           | `float` | Frame rate for saved clips.                 |

### ClipSaver Methods <a href="#clipsaver-methods" id="clipsaver-methods"></a>

#### \_\_init\_\_(clip\_duration, ...) <a href="#init" id="init"></a>

`__init__(clip_duration, file_prefix, *, pre_trigger_delay=0, embed_ai_annotations=True, save_ai_result_json=True, target_fps=30.0)`

Initialize the clip saver.

Parameters:

| Name                   | Type    | Description                                                                                                                                                                                                                  | Default    |
| ---------------------- | ------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ---------- |
| `clip_duration`        | `int`   | Total length of output clips in frames (pre-buffer + post-buffer).                                                                                                                                                           | *required* |
| `file_prefix`          | `str`   | Base path for saved clip files. Frame number and extension are appended automatically.                                                                                                                                       | *required* |
| `pre_trigger_delay`    | `int`   | Frames to include before trigger. Defaults to 0.                                                                                                                                                                             | `0`        |
| `embed_ai_annotations` | `bool`  | If True, use [InferenceResults](https://docs.degirum.com/pysdk/user-guide-pysdk/api-ref/postprocessor#degirum.postprocessor.inferenceresults).image\_overlay to include bounding boxes/labels in the clip. Defaults to True. | `True`     |
| `save_ai_result_json`  | `bool`  | If True, save a JSON file with raw inference results alongside the video. Defaults to True.                                                                                                                                  | `True`     |
| `target_fps`           | `float` | Frame rate for saved clips. Defaults to 30.0.                                                                                                                                                                                | `30.0`     |

Raises:

| Type         | Description                                                   |
| ------------ | ------------------------------------------------------------- |
| `ValueError` | If clip\_duration is not positive.                            |
| `ValueError` | If pre\_trigger\_delay is negative or exceeds clip\_duration. |

#### forward(result, ...) <a href="#forward" id="forward"></a>

`forward(result, triggers=[])`

Process a frame and save clips if triggers occur.

This method adds the current frame to the buffer and saves clips if any triggers are present. The saved clips include pre-trigger frames from the buffer.

Parameters:

| Name       | Type        | Description                                                                                                                                                                                 | Default    |
| ---------- | ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ---------- |
| `result`   | `Any`       | [InferenceResults](https://docs.degirum.com/pysdk/user-guide-pysdk/api-ref/postprocessor#degirum.postprocessor.inferenceresults) object containing the current frame and detection results. | *required* |
| `triggers` | `List[str]` | List of trigger names that occurred in this frame. Defaults to \[].                                                                                                                         | `[]`       |

Returns:

| Type                     | Description                                                    |
| ------------------------ | -------------------------------------------------------------- |
| `Tuple[List[str], bool]` | List of saved clip filenames and whether any clips were saved. |

Raises:

| Type        | Description                   |
| ----------- | ----------------------------- |
| `Exception` | If the frame cannot be saved. |

#### join\_all\_saver\_threads <a href="#join_all_saver_threads" id="join_all_saver_threads"></a>

`join_all_saver_threads()`

Wait for all clip saving threads to complete.

This method blocks until all background clip saving threads have finished. It's useful to call this before exiting to ensure all clips are properly saved.

Returns:

| Type  | Description                         |
| ----- | ----------------------------------- |
| `int` | Number of threads that were joined. |

## MediaServer <a href="#mediaserver" id="mediaserver"></a>

`MediaServer`

Manages the MediaMTX media server as a subprocess.

Starts MediaMTX using a provided config file path. If no config path is given, it runs from the MediaMTX binary's directory.

MediaMTX binary must be installed and available in the system path. Refer to <https://github.com/bluenviron/mediamtx> for installation instructions.

### MediaServer Methods <a href="#mediaserver-methods" id="mediaserver-methods"></a>

#### \_\_del\_\_ <a href="#del" id="del"></a>

`__del__()`

Destructor to ensure the media server is stopped.

#### \_\_enter\_\_ <a href="#enter" id="enter"></a>

`__enter__()`

Enables use with context manager.

#### \_\_exit\_\_(exc\_type, ...) <a href="#exit" id="exit"></a>

`__exit__(exc_type, exc_val, exc_tb)`

Stops server when context exits.

#### \_\_init\_\_(\*, ...) <a href="#init" id="init"></a>

`__init__(*, config_path=None, verbose=False)`

Initializes and starts the server.

Parameters:

| Name          | Type            | Description                                                                                                  | Default |
| ------------- | --------------- | ------------------------------------------------------------------------------------------------------------ | ------- |
| `config_path` | `Optional[str]` | Path to an existing MediaMTX YAML config file. If not provided, runs with config file from binary directory. | `None`  |
| `verbose`     | `bool`          | If True, shows media server output in the console.                                                           | `False` |

#### stop <a href="#stop" id="stop"></a>

`stop()`

Stops the media server process.

## VideoStreamer <a href="#videostreamer" id="videostreamer"></a>

`VideoStreamer`

Streams video frames to an RTMP or RTSP server using FFmpeg. FFmpeg must be installed and available in the system path.

### VideoStreamer Methods <a href="#videostreamer-methods" id="videostreamer-methods"></a>

#### \_\_del\_\_ <a href="#del" id="del"></a>

`__del__()`

Destructor to ensure the streamer is stopped.

#### \_\_enter\_\_ <a href="#enter" id="enter"></a>

`__enter__()`

Enables use with context manager.

#### \_\_exit\_\_(exc\_type, ...) <a href="#exit" id="exit"></a>

`__exit__(exc_type, exc_value, traceback)`

Stops streamer when context exits.

#### \_\_init\_\_(stream\_url, ...) <a href="#init" id="init"></a>

`__init__(stream_url, width, height, *, fps=30.0, pix_fmt='bgr24', gop_size=10, verbose=False)`

Initializes the video streamer.

Parameters:

| Name         | Type    | Description                                                                                                                                                                                                        | Default    |
| ------------ | ------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | ---------- |
| `stream_url` | `str`   | RTMP/RTSP URL to stream to (e.g., 'rtsp\://user:password\@hostname:port/stream'). Typically you use the `MediaServer` class to start a media server and then stream to its RTSP URL, e.g. `rtsp://localhost:8554/mystream`. | *required* |
| `width`      | `int`   | Width of the video frames in pixels.                                                                                                                                                                               | *required* |
| `height`     | `int`   | Height of the video frames in pixels.                                                                                                                                                                              | *required* |
| `fps`        | `float` | Frames per second for the stream. Defaults to 30.0.                                                                                                                                                                | `30.0`     |
| `pix_fmt`    | `str`   | Pixel format for the input frames. Can be 'rgb24'. Defaults to 'bgr24'.                                                                                                                                            | `'bgr24'`  |
| `gop_size`   | `int`   | GOP size (keyframe interval in frames) for the video stream. Defaults to 10.                                                                                                                                       | `10`       |
| `verbose`    | `bool`  | If True, shows FFmpeg output in the console. Defaults to False.                                                                                                                                                    | `False`    |

#### stop <a href="#stop" id="stop"></a>

`stop()`

Stops the streamer process.

#### write(img) <a href="#write" id="write"></a>

`write(img)`

Writes a frame to the RTSP stream.

Parameters:

| Name  | Type        | Description                                                                                                                                             | Default    |
| ----- | ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------- | ---------- |
| `img` | `ImageType` | Frame to write: an OpenCV image (`np.ndarray`) or a PIL Image. The pixel format must match the one specified in the constructor (default is `'bgr24'`). | *required* |
