Videos
Learn how to run real-time inference on video streams using predict_stream. This page covers video files, webcams, and RTSP sources—all with minimal setup.
The easiest way to run video inference is with degirum_tools.predict_stream.
predict_stream is a convenience wrapper around predict_batch. It opens the video source (file, webcam, or RTSP), reads frames, runs inference, and yields inference results; each result's image_overlay is the annotated frame (NumPy BGR) you can display or save.
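Under the hood it is roughly equivalent to reading frames with OpenCV and feeding them to predict_batch. The sketch below illustrates that idea; it is not the library's actual implementation. It assumes OpenCV (cv2) is installed and uses the model object loaded in the common setup below.

import cv2

def frames(source):
    # Yield frames from any OpenCV-compatible source (file path, camera index, or URL)
    cap = cv2.VideoCapture(source)
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            yield frame
    finally:
        cap.release()

# predict_batch consumes the frame iterator and yields results as they complete
for result in model.predict_batch(frames("path/to/video.mp4")):
    print(result)  # result.image_overlay holds the annotated frame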
Common setup (used in all cases)
import degirum_tools
from degirum_tools import ModelSpec, Display, remote_assets
# Configure & load once
model_spec = ModelSpec(
    model_name="yolov8n_coco--640x640_quant_hailort_multidevice_1",
    zoo_url="degirum/hailo",
    inference_host_address="@local",
    model_properties={"device_type": ["HAILORT/HAILO8", "HAILORT/HAILO8L"]},
)
model = model_spec.load_model()
Video file
video_source = "path/to/video.mp4" # or use a built-in sample:
# video_source = remote_assets.traffic
with Display("AI Camera — File") as disp:
    for result in degirum_tools.predict_stream(model, video_source):
        disp.show(result.image_overlay)

Want a quick test? Try video_source = remote_assets.traffic
To stop, press Ctrl+C. For short runs, break out of the loop after N frames.
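For example, to stop after the first 100 frames, using the same model and video_source as above:

with Display("AI Camera — File") as disp:
    for i, result in enumerate(degirum_tools.predict_stream(model, video_source)):
        disp.show(result.image_overlay)
        if i >= 99:  # stop after 100 frames
            break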
Webcam
video_source = 0 # 0=default webcam; use 1,2,... for additional cameras
with Display("AI Camera — Webcam") as disp:
    for result in degirum_tools.predict_stream(model, video_source):
        disp.show(result.image_overlay)
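To record the annotated output instead of displaying it, one option is OpenCV's cv2.VideoWriter. This is a sketch rather than a degirum_tools feature; the output filename, codec, and the 30 FPS value are assumptions you should adjust to match your source.

import cv2

writer = None
for result in degirum_tools.predict_stream(model, video_source):
    frame = result.image_overlay
    if writer is None:
        # Create the writer lazily once the frame size is known (30 FPS assumed)
        h, w = frame.shape[:2]
        writer = cv2.VideoWriter(
            "annotated.mp4", cv2.VideoWriter_fourcc(*"mp4v"), 30, (w, h)
        )
    writer.write(frame)
if writer is not None:
    writer.release()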
video_source = "rtsp://user:[email protected]:554/stream1"
with Display("AI Camera — RTSP") as disp:
    for result in degirum_tools.predict_stream(model, video_source):
        disp.show(result.image_overlay)
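Display requires a GUI session. On a headless server you can skip it and consume results directly; the minimal sketch below assumes a detection model, where each result's results attribute is the per-frame list of detections.

for result in degirum_tools.predict_stream(model, video_source):
    # Process detections without rendering anything
    print(f"{len(result.results)} objects detected")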