# Running inference

*Estimated read time: 1 minute*

Once a model is loaded in PySDK (e.g., using `ModelSpec(...).load_model()`), you can run inference using three core methods:

* `predict`: run inference on a [single image](https://docs.degirum.com/axelera/basics/running-inference/images)
* `predict_batch`: run inference on a stream or iterator of images (ideal for video). For convenience, common video sources (files, webcams, RTSP streams) are wrapped by [`degirum_tools.predict_stream`](https://docs.degirum.com/axelera/basics/running-inference/videos), which uses `predict_batch` under the hood and handles capture, looping, and cleanup for you
* `predict_dir`: run inference on [all images in a folder](https://docs.degirum.com/axelera/basics/running-inference/folders)
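To make the three call shapes concrete, here is a minimal sketch using a stand-in model object. `FakeModel` and its return values are illustrative only, not PySDK's actual result types; in real code the model would come from `ModelSpec(...).load_model()` as described above.

```python
# Hypothetical stand-in mirroring the call shapes of the three PySDK
# methods; the real model object comes from ModelSpec(...).load_model().
from pathlib import Path


class FakeModel:
    """Illustrative stub, not the actual PySDK model class."""

    def predict(self, frame):
        # A real model returns an inference result object;
        # this stub echoes a simple dict instead.
        return {"source": frame, "detections": []}

    def predict_batch(self, frames):
        # Lazily yields one result per frame from any iterator,
        # matching how predict_batch streams results.
        for frame in frames:
            yield self.predict(frame)

    def predict_dir(self, folder):
        # Runs inference on every image file found in a folder.
        for path in sorted(Path(folder).glob("*.jpg")):
            yield self.predict(str(path))


model = FakeModel()

# Single image
result = model.predict("cat.jpg")

# Stream or iterator of images (e.g., video frames)
results = list(model.predict_batch(f"frame_{i}.jpg" for i in range(3)))
```

Note that `predict_batch` consumes its input lazily, so it works with unbounded sources such as live video streams without loading everything into memory first.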
