# Videos

*Estimated read time: 2 minutes*

The easiest way to run video inference is with `degirum_tools.predict_stream`.

`predict_stream` is a convenience wrapper around `predict_batch`. It opens the video source (file, webcam, or RTSP), reads frames, runs inference, and yields overlay frames (NumPy BGR) you can display or save.

## Common setup (used in all cases)

{% code overflow="wrap" %}

```python
import degirum_tools
from degirum_tools import ModelSpec, Display, remote_assets

# Configure & load once
model_spec = ModelSpec(
    model_name="yolov8n_coco--640x640_quant_hailort_multidevice_1",
    zoo_url="degirum/hailo",
    inference_host_address="@local",
    model_properties={"device_type": ["HAILORT/HAILO8", "HAILORT/HAILO8L"]},
)
model = model_spec.load_model()
```

{% endcode %}

## Video file

{% code overflow="wrap" %}

```python
video_source = "path/to/video.mp4"  # or use a built-in sample:
# video_source = remote_assets.traffic

with Display("AI Camera — File") as disp:
    for result in degirum_tools.predict_stream(model, video_source):
        disp.show(result.image_overlay)
```

{% endcode %}

{% hint style="success" %}
Want a quick test? Try `video_source = remote_assets.traffic`.

To stop, press Ctrl+C. For short runs, break out of the loop after N frames.
{% endhint %}
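
One clean way to break after N frames is `itertools.islice`, which caps any iterator, including the one `predict_stream` returns. A minimal sketch (the `take` helper name and the limit of 100 are illustrative):

{% code overflow="wrap" %}

```python
import itertools

def take(stream, n):
    """Yield at most n items from any iterator, then stop cleanly."""
    return itertools.islice(stream, n)

# Usage with the loop above (sketch):
# with Display("AI Camera — File") as disp:
#     for result in take(degirum_tools.predict_stream(model, video_source), 100):
#         disp.show(result.image_overlay)
```

{% endcode %}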

## Webcam

{% code overflow="wrap" %}

```python
video_source = 0  # 0=default webcam; use 1,2,... for additional cameras

with Display("AI Camera — Webcam") as disp:
    for result in degirum_tools.predict_stream(model, video_source):
        disp.show(result.image_overlay)
```

{% endcode %}

## RTSP stream

{% code overflow="wrap" %}

```python
video_source = "rtsp://user:password@192.168.1.10:554/stream1"

with Display("AI Camera — RTSP") as disp:
    for result in degirum_tools.predict_stream(model, video_source):
        disp.show(result.image_overlay)
```

{% endcode %}

{% hint style="info" %}
* **Under the hood**: `predict_stream` wraps `predict_batch`, so you don't need to manage capture loops or cleanup yourself.
* **Colorspace**: frames are returned in OpenCV BGR order; no conversion is needed before display or saving.
* **Performance**: for real-time throughput, use lighter models or smaller input sizes, and set `device_type` to match your hardware.
* **Custom pipelines**: if you need buffering, retries, or other custom logic, build your own frame iterator and pass it to `model.predict_batch(...)` instead of using `predict_stream`.
{% endhint %}
