Running inference

Learn how to run inference with your model using simple, flexible methods—single images, video streams, or entire folders.


Once a model is loaded in PySDK (e.g., using ModelSpec(...).load_model()), you can run inference using three core methods:

  • predict: run inference on a single image

  • predict_batch: run inference on a stream or iterator of images (ideal for video streams). For convenience, degirum_tools.predict_stream wraps common video sources (e.g., files, webcams, and RTSP streams); it uses predict_batch under the hood and handles capture, looping, and cleanup for you

  • predict_dir: run inference on all images in a folder
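The three methods above can be sketched as small helpers. This is a minimal illustration, assuming a `model` object already loaded elsewhere (e.g., via ModelSpec(...).load_model()); the helper names here are hypothetical, and only the three core methods from the list are relied on:

```python
def infer_single(model, image_path):
    """Run inference on one image via model.predict()."""
    return model.predict(image_path)

def infer_stream(model, frames):
    """Run inference on an iterator of frames via model.predict_batch().

    Yields one result per input frame; suited to video streams,
    where frames arrive continuously.
    """
    yield from model.predict_batch(frames)

def infer_folder(model, folder_path):
    """Run inference on every image in a folder via model.predict_dir()."""
    yield from model.predict_dir(folder_path)
```

Note that predict returns a single result, while predict_batch and predict_dir yield results one at a time, so you can process them as they arrive rather than waiting for the whole input to finish.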
