Running inference
Learn how to run inference with your model using simple, flexible methods: single images, video streams, or entire folders.
Once a model is loaded in PySDK (e.g., using ModelSpec(...).load_model()), you can run inference using three core methods (see the sketch after this list):
- predict: run inference on a single image
- predict_batch: run inference on a stream or iterator of images (ideal for video streams); for convenience, common video sources (e.g., files, webcams, and RTSP streams) are wrapped by degirum_tools.predict_stream, which uses predict_batch under the hood and handles capture, looping, and cleanup for you
- predict_dir: run inference on all images in a folder
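A minimal sketch of all three methods plus the predict_stream wrapper, assuming a model object was already obtained via ModelSpec(...).load_model(); the image, video, and folder paths here are placeholders to replace with your own sources.

```python
import degirum_tools

# Single image: predict() returns one result object
result = model.predict("cat.jpg")  # placeholder image path
print(result)

# Iterator of images: predict_batch() consumes a stream and yields
# results lazily, pipelining frames for better throughput
frames = ["frame1.jpg", "frame2.jpg", "frame3.jpg"]  # placeholder file list
for result in model.predict_batch(frames):
    print(result)

# Video source: predict_stream() wraps predict_batch and handles
# capture, looping, and cleanup; accepts a file path, webcam index,
# or RTSP URL
for result in degirum_tools.predict_stream(model, "video.mp4"):  # placeholder video
    print(result)

# Folder: predict_dir() runs inference on every image in a directory
for result in model.predict_dir("./images"):  # placeholder folder
    print(result)
```

The batch-oriented methods all return iterators rather than lists, so results can be processed frame by frame as they arrive instead of waiting for the whole source to finish.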