Running AI Model Inference
This walkthrough shows how to run predictions with PySDK. You'll learn which input data types are supported, how to interpret inference results, and how to process inputs in batches for efficiency.
Model.predict()
# Method Signature: Model.predict()
degirum.model.Model.predict(data)

The example below loads a model and runs inference on a single image:

import degirum as dg
# Declaring variables
# Set your model, inference host address, model zoo, and token in these variables.
your_model_name = "model-name"
your_host_address = "@cloud" # Can be "@cloud", host:port, or "@local"
your_model_zoo = "degirum/public"
your_token = "<token>"
# Specify the image you will run inference on
your_image = "path/image.jpg"
# Loading a model
model = dg.load_model(
    model_name=your_model_name,
    inference_host_address=your_host_address,
    zoo_url=your_model_zoo,
    token=your_token,
    # optional parameters, such as overlay_show_probabilities=True
)
# Run a prediction and assign it to result
# (calling the model object is shorthand for model.predict())
result = model(your_image)
# Print the prediction result
print(result)

Model.predict_batch()
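Model.predict_batch() runs inference on an iterable of frames and returns a generator that yields one result per frame. A sketch of the signature, mirroring the predict() signature shown above:

# Method Signature: Model.predict_batch()
degirum.model.Model.predict_batch(data)

The batch inference examples below show how to use it.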
Supported Input Data Types
Check Input Data Type of Your Model
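The expected input type is recorded in the model's parameters. A minimal sketch, assuming the loaded model exposes its parameters through a model_info attribute with an InputType field; treat these names as assumptions and check them against your PySDK version:

# Inspect the model parameters to see what input the model expects,
# e.g. image, tensor, or audio (attribute names are assumptions)
print(model.model_info.InputType)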
Images
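Image models commonly accept several input forms, including a file path, a URL, a PIL image, or a numpy array; which forms work can depend on your model and PySDK version. A sketch of each, with placeholder paths and URL:

import cv2
from PIL import Image

result = model("path/image.jpg")                 # path to an image file
result = model("https://example.com/image.jpg")  # image URL
result = model(Image.open("path/image.jpg"))     # PIL image object
result = model(cv2.imread("path/image.jpg"))     # numpy array (OpenCV loads as BGR)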
Tensors
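Models that take raw tensors accept a numpy array shaped to the model's input. A sketch with an illustrative shape and dtype; match both to your model's specification:

import numpy as np

# Illustrative 4-D tensor; the shape and dtype must match what your
# model actually expects
tensor = np.zeros((1, 224, 224, 3), dtype=np.float32)
result = model(tensor)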
Audio
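Audio models typically consume a 1-D waveform at the model's expected sampling rate and length. A hedged sketch; the 16 kHz rate, one-second length, and dtype are illustrative only:

import numpy as np

# One second of silence at an assumed 16 kHz sampling rate; replace
# with real audio matching your model's input specification
waveform = np.zeros(16000, dtype=np.int16)
result = model(waveform)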
Single Frame Inference
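To run inference on a single frame, pass it to predict() or call the model object directly, as in the Model.predict() example above; the call blocks until inference completes and returns one result object.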
Batch Inference
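For multiple frames, predict_batch() is more efficient than calling predict() in a loop: it accepts any iterable of frames and streams them through the processing pipeline, so per-frame overhead overlaps instead of accumulating. The examples below reuse the model loaded earlier.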
Example: Iterating over predict_batch results
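A minimal sketch; the image paths are placeholders:

frames = ["path/image1.jpg", "path/image2.jpg", "path/image3.jpg"]

# predict_batch() returns a generator: results arrive one by one as
# frames finish processing
for result in model.predict_batch(frames):
    print(result)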
Example: Attaching frame metadata
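A sketch assuming each frame may be passed as a (frame, frame_info) tuple, with the metadata coming back on the result's info attribute; treat the tuple form and the info attribute as assumptions to verify against your PySDK version:

# Pair each frame with arbitrary metadata; the second tuple element
# travels with the frame through the pipeline
frames = [
    ("path/image1.jpg", {"frame_id": 0}),
    ("path/image2.jpg", {"frame_id": 1}),
]
for result in model.predict_batch(frames):
    print(result.info)  # metadata attached to this frame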
Example: Using predict_batch() on a video file
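Because predict_batch() accepts any iterable, a generator that reads frames with OpenCV can stream a whole video file through the model. A sketch; path/video.mp4 is a placeholder:

import cv2

def frame_source(video_path):
    """Yield frames from a video file one at a time."""
    stream = cv2.VideoCapture(video_path)
    try:
        while True:
            ret, frame = stream.read()
            if not ret:
                break  # end of stream
            yield frame
    finally:
        stream.release()

for result in model.predict_batch(frame_source("path/video.mp4")):
    print(result)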
Inference Results
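Each prediction returns an inference results object: printing it shows the detected labels, scores, and coordinates, and its image_overlay property gives a copy of the input frame with the predictions drawn on it.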
Example: Combine predict_batch() with image_overlay to show prediction results on original video
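A sketch that combines the video generator from the previous example with image_overlay to display annotated frames with OpenCV; press q to stop:

import cv2

def frame_source(video_path):
    """Yield frames from a video file one at a time."""
    stream = cv2.VideoCapture(video_path)
    try:
        while True:
            ret, frame = stream.read()
            if not ret:
                break
            yield frame
    finally:
        stream.release()

for result in model.predict_batch(frame_source("path/video.mp4")):
    # image_overlay is the original frame with predictions drawn on it
    cv2.imshow("AI Inference", result.image_overlay)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cv2.destroyAllWindows()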