Clip Saver
Clip Saving Analyzer Module Overview
This module provides an analyzer (ClipSavingAnalyzer) for recording video snippets triggered by events or notifications. It captures frames before and after trigger events, saving them as video clips with optional AI annotations and metadata.
Key Features
Pre/Post Buffering: Configurable frame count before and after trigger events
Optional Overlays: Embed AI bounding boxes and labels in the saved clips
Side-car JSON: Save raw inference results alongside video files
Thread-Safe: Each clip is written by its own worker thread
Frame Rate Control: Configurable target FPS for saved clips
Event Integration: Works with EventDetector and EventNotifier triggers
Storage Support: Optional integration with object storage for clip uploads
Typical Usage
Create a ClipSavingAnalyzer instance with desired buffer and output settings
Process inference results through the analyzer chain
When triggers occur, clips are automatically saved with pre/post frames
Access saved clips and their associated metadata files
Optionally upload clips to object storage for remote access
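The flow above can be sketched in pure Python. This is a hypothetical stand-in, not the real module API: the ClipRecorder class, the result dicts, and the in-memory "saved clips" list are invented here purely to illustrate the trigger → pre/post buffer → save sequence.

```python
from collections import deque

class ClipRecorder:
    """Hypothetical stand-in illustrating the trigger/buffer/save flow."""

    def __init__(self, clip_duration, triggers, pre_trigger_delay=0):
        self.clip_duration = clip_duration      # total frames per clip
        self.triggers = set(triggers)           # trigger names to watch for
        # circular buffer keeps only the most recent pre-trigger frames
        self.buffer = deque(maxlen=pre_trigger_delay + 1)
        self.saved_clips = []                   # finished clips
        self._pending = None                    # clip currently being filled

    def analyze(self, result):
        """result: dict with a 'frame' and a set of fired 'triggers'."""
        self.buffer.append(result["frame"])
        if self._pending is not None:
            # a clip is in progress: keep appending post-trigger frames
            self._pending.append(result["frame"])
            if len(self._pending) == self.clip_duration:
                self.saved_clips.append(self._pending)
                self._pending = None
        elif self.triggers & result["triggers"]:
            # trigger fired: seed the clip with buffered pre-trigger frames
            self._pending = list(self.buffer)
            if len(self._pending) == self.clip_duration:
                self.saved_clips.append(self._pending)
                self._pending = None

rec = ClipRecorder(clip_duration=5, triggers={"person"}, pre_trigger_delay=2)
for i in range(10):
    fired = {"person"} if i == 4 else set()
    rec.analyze({"frame": i, "triggers": fired})
print(rec.saved_clips)  # [[2, 3, 4, 5, 6]]
```

With a trigger at frame 4, the saved clip contains two pre-trigger frames (2, 3), the trigger frame, and two post-trigger frames, matching the documented pre/post buffering behavior.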
Integration Notes
Works with any analyzer that adds trigger names to results
Requires video frames to be available in the result object
Supports both local file storage and object storage uploads
Thread-safe for concurrent clip saving operations
Key Classes
ClipSavingAnalyzer: Main analyzer class for saving video clips
ClipSaver: Internal class handling clip writing and buffering
Configuration Options
clip_duration: Total number of frames in the saved clip (pre-trigger + post-trigger)
triggers: Trigger names that cause a clip to be recorded
file_prefix: Base path for saved clip files
pre_trigger_delay: Frames to include before the trigger
embed_ai_annotations: Enable/disable AI overlays in clips
save_ai_result_json: Enable/disable side-car JSON metadata saving
target_fps: Frame rate for saved video clips
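Given the definitions above (clip_duration is the total frame count, pre_trigger_delay of which precede the trigger), the frame span covered by a clip works out as follows. clip_frame_range is a hypothetical helper written for illustration, not part of the module:

```python
def clip_frame_range(trigger_frame, clip_duration, pre_trigger_delay):
    """Return (first, last) frame indices covered by a saved clip."""
    first = trigger_frame - pre_trigger_delay
    # total frames = pre-trigger frames + trigger frame + post-trigger frames
    last = first + clip_duration - 1
    return first, last

print(clip_frame_range(100, clip_duration=30, pre_trigger_delay=10))  # (90, 119)
```

So with clip_duration=30 and pre_trigger_delay=10, a trigger at frame 100 yields a clip spanning frames 90 through 119.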
Classes
ClipSavingAnalyzer
Result analyzer that records short video clips whenever one of the configured trigger names appears in an InferenceResults. It delegates internally to ClipSaver, which maintains a circular buffer, so every clip contains both pre-trigger and post-trigger context.
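The circular-buffer behavior can be modeled with Python's collections.deque and a maxlen bound, which silently discards the oldest frame once the buffer is full. This is a generic illustration of the data structure, not the module's actual internals:

```python
from collections import deque

buf = deque(maxlen=3)   # keeps only the 3 most recent frames
for frame in range(6):
    buf.append(frame)   # older frames are evicted automatically
print(list(buf))        # [3, 4, 5]
```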
Functions
__init__(clip_duration, ...)
__init__(clip_duration, triggers, file_prefix, *, pre_trigger_delay=0, embed_ai_annotations=True, save_ai_result_json=True, target_fps=30.0)
Constructor.
Parameters:
clip_duration (int, required): Total length of the output clip in frames (pre-buffer + post-buffer).
triggers (Set[str], required): Trigger names that cause a clip to be recorded.
file_prefix (str, required): Path and filename prefix for generated files (frame number & extension are appended automatically).
pre_trigger_delay (int, default 0): Frames to include before the trigger.
embed_ai_annotations (bool, default True): If True, use InferenceResults.image_overlay so bounding boxes/labels are burned into the clip.
save_ai_result_json (bool, default True): If True, dump a JSON file with raw inference results alongside the video.
target_fps (float, default 30.0): Frame rate of the output file.
analyze(result)
analyze(result)
Inspect a single InferenceResults and forward it to the internal ClipSaver if any trigger names are matched. This method is called automatically for each frame when attached via attach_analyzers.
Parameters:
result (InferenceResults, required): Current model output to scan for events/notifications.
join_all_saver_threads
join_all_saver_threads()
Block until all background clip-writer threads finish.
Returns:
int: Number of threads that were joined.
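The "block until all clip-writer threads finish, then report how many were joined" contract can be mimicked in plain Python. join_pending and write_clip below are hypothetical illustrations, not the module's implementation:

```python
import threading
import time

def write_clip(name, written):
    time.sleep(0.01)            # stand-in for encoding a video file
    written.append(name)

written = []
workers = [
    threading.Thread(target=write_clip, args=(f"clip_{i}", written))
    for i in range(3)
]
for t in workers:
    t.start()

def join_pending(threads):
    """Block until all clip-writer threads finish; return how many joined."""
    for t in threads:
        t.join()
    return len(threads)

print(join_pending(workers))    # 3
print(sorted(written))          # ['clip_0', 'clip_1', 'clip_2']
```

Calling such a join before shutting down ensures no clip file is left half-written by a background worker.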