Images
Run inference on a single image using a URL, file path, or NumPy array. This page shows how to use each input type with a loaded model.
The simplest way to run inference on an image is to call the model like a function. Model objects are callable—they implement __call__, which delegates to predict. In other words:
```python
result = model(image_source)  # exactly the same as: result = model.predict(image_source)
```

The sections below show how to run inference on three common input types: a URL, a local file path, and a NumPy array.
Common setup (used in all cases)
```python
from degirum_tools import ModelSpec, Display

# Configure & load once
model_spec = ModelSpec(
    model_name="yolov8n_coco--640x640_quant_hailort_multidevice_1",
    zoo_url="degirum/hailo",
    inference_host_address="@local",
    model_properties={"device_type": ["HAILORT/HAILO8", "HAILORT/HAILO8L"]},
)
model = model_spec.load_model()
```

Image URL
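To run inference on an image hosted at a URL, pass the URL string directly to the model. The sketch below reuses the common setup above; the URL is a placeholder, and showing the overlay with Display is optional, following the pattern from DeGirum's example notebooks.

```python
# Reuses `model` and `Display` from the common setup above.
image_url = "https://example.com/sample.jpg"  # placeholder URL; substitute any reachable image

result = model(image_url)  # the image is fetched and preprocessed automatically

print(result)  # text summary of the inference results
with Display("Image URL inference") as display:
    display.show_image(result)  # display the annotated image
```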
Local file path
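A local image path works the same way; the path below is a placeholder for illustration.

```python
# Reuses `model` from the common setup above.
image_path = "path/to/image.jpg"  # placeholder path; point this at an image file on disk

result = model(image_path)  # same as model.predict(image_path)
print(result)
```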
NumPy array
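A NumPy array can also be passed directly, for example a frame decoded with OpenCV; using cv2 here is just one way to obtain such an array, not a requirement of the API.

```python
import cv2  # OpenCV is one common way to load an image as a NumPy array

# Reuses `model` from the common setup above.
frame = cv2.imread("path/to/image.jpg")  # placeholder path; yields an HxWx3 uint8 array

result = model(frame)  # arrays are accepted directly, no file or URL needed
print(result)
```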
That’s it—choose the type that matches your input, reuse the common setup, and call model(...) or model.predict(...) to get results.