Images
Run inference on a single image using a URL, file path, or NumPy array. This page shows how to use each input type with a loaded model.
The simplest way to run inference on an image is to call the model like a function. Model objects are callable: they implement `__call__`, which delegates to `predict`. In other words:
```python
result = model(image_source)  # exactly the same as: result = model.predict(image_source)
```

This page shows how to run inference on three common input types: a URL, a local file path, and a NumPy array.
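The delegation itself can be sketched with a minimal stand-in class (hypothetical, not the actual degirum_tools API):

```python
class FakeModel:
    """Minimal stand-in showing how a callable model delegates to predict."""

    def predict(self, image_source):
        # A real model would run inference; here we just echo the input.
        return f"result for {image_source}"

    def __call__(self, image_source):
        # Calling the object is exactly the same as calling predict.
        return self.predict(image_source)

fake = FakeModel()
assert fake("cat.jpg") == fake.predict("cat.jpg")
```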
Common setup (used in all cases)
```python
from degirum_tools import ModelSpec, Display

# Configure & load once
model_spec = ModelSpec(
    model_name="yolov8n_coco--640x640_quant_axelera_metis_1",
    zoo_url="degirum/axelera",
    inference_host_address="@local",
    model_properties={"device_type": ["AXELERA/METIS"]},
)
model = model_spec.load_model()
```

Image URL
```python
from degirum_tools import remote_assets  # small gallery of public sample URLs

image_url = remote_assets.three_persons  # or any reachable image URL (string)
result = model(image_url)

with Display("AI Camera — URL") as output_display:
    output_display.show_image(result.image_overlay)
```

Local file path
```python
from pathlib import Path

image_path = Path.home() / "Pictures" / "test.jpg"  # change as needed
result = model(str(image_path))  # file paths are accepted as strings

with Display("AI Camera — File") as output_display:
    output_display.show_image(result.image_overlay)
```

NumPy array
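If you are unsure about the Path-to-string conversion, here is a quick self-contained check (no model required); the separator character may differ by platform:

```python
from pathlib import Path

image_path = Path("Pictures") / "test.jpg"
path_str = str(image_path)  # plain string, suitable for model(path_str)
assert path_str.endswith("test.jpg")
```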
```python
# pip install opencv-python
import cv2

frame_bgr = cv2.imread("test.jpg")  # returns a BGR ndarray, or None on failure
if frame_bgr is None:
    raise FileNotFoundError("Could not read test.jpg")

result = model(frame_bgr)

with Display("AI Camera — Array") as output_display:
    output_display.show_image(result.image_overlay)
```

That's it: choose the type that matches your input, reuse the common setup, and call model(...) or model.predict(...) to get results.
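To try the array input path without an image file on disk, you can build a synthetic BGR frame directly with NumPy. The shape and dtype below mirror what cv2.imread produces, a uint8 array of shape (height, width, 3); the exact input formats a given model accepts are an assumption here, not something this page specifies:

```python
import numpy as np

# Hypothetical stand-in for a camera frame: 640x640 pixels, 3 channels, uint8
frame_bgr = np.zeros((640, 640, 3), dtype=np.uint8)
frame_bgr[:, :, 2] = 255  # fill channel index 2 (red, in BGR channel order)

assert frame_bgr.shape == (640, 640, 3)
assert frame_bgr.dtype == np.uint8
# result = model(frame_bgr)  # pass to a loaded model as in the example above
```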