Loading an AI Model
This is an end-to-end guide for loading a model. You'll start by connecting to an inference engine and model zoo, learn how to filter model lists, and then load a model.
Connect to an Inference Engine and Model Zoo
The degirum.connect() function is the starting point for interacting with PySDK. It establishes a connection with the appropriate AI inference engine and model zoo based on the configuration you provide.
degirum.connect(
inference_host_address = "@cloud",
zoo_url = "workspace/zoo",
token = "<your_token>"
)
When you call degirum.connect(), you will:
Specify an inference host to run AI models
Specify a model zoo from which AI models can be loaded
Authenticate with the AI Hub using a token
degirum.connect() creates and returns a ZooManager object. This object enables:
Searching for models available in the connected model zoo.
Loading AI models and creating appropriate AI model handling objects for inference.
Accessing model parameters to customize inference behavior.
degirum.connect()
degirum.connect() takes three parameters: inference_host_address, zoo_url, and token. These parameters define where inference will run, which model zoo will be used, and the token used for AI Hub authentication when an AI Hub model zoo is used.
Zoo URL Parsing Behavior
The zoo_url parameter is handled in the following ways:
AI Hub Inference (inference_host_address="@cloud"): zoo_url may be given as https://hub.degirum.com/workspace/zoo or simply workspace/zoo. If omitted, the public zoo degirum/public is used.
Local Inference (inference_host_address="@local"): when zoo_url starts with http:// or https://, or contains exactly one slash (e.g. workspace/zoo), it is treated as an AI Hub zoo. Any other format is treated as a local path; prefix the path with file:// to state explicitly that it is a local path. If the path does not exist, degirum.connect() raises "incorrect local model zoo URL: path does not exist", as shown in the sketch after this list. The path can point to either a directory or a model .json file.
AI Server Inference (hostname or host:port): an empty zoo_url or a value starting with aiserver:// selects the AI server's local zoo. Any other value must be a valid AI Hub zoo URL; otherwise, the error "incorrect cloud model zoo URL" is raised.
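A hedged illustration of the local-path case described above; the zoo path here is hypothetical, and the exact exception type is not specified, so a generic handler is used:
import degirum as dg
# Hypothetical local path, marked explicitly with the file:// prefix.
# If the path does not exist, degirum.connect() raises an error whose
# message includes "incorrect local model zoo URL: path does not exist".
try:
    zoo = dg.connect(
        inference_host_address="@local",
        zoo_url="file://path/to/missing/zoo"
    )
except Exception as e:
    print(e)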
# Function Signature: degirum.connect()
degirum.connect(inference_host_address, zoo_url=None, token=None)
The following examples cover all possible combinations of inference and zoo types with degirum.connect():
AI Hub inference, default AI Hub public model zoo (degirum/public):
degirum.connect(
inference_host_address="@cloud",
token="<your_token>"
)
AI Hub inference, specified AI Hub model zoo:
degirum.connect(
inference_host_address="@cloud",
zoo_url="workspace/zoo",
token="<your_token>"
)
AI server inference, default AI server local zoo:
degirum.connect(
inference_host_address="host:port"
)
AI server inference, specified AI Hub model zoo:
degirum.connect(
inference_host_address="host:port",
zoo_url="workspace/zoo",
token="<your_token>"
)
AI server inference, AI server local zoo (explicit):
degirum.connect(
inference_host_address="host:port",
zoo_url="aiserver://"
)
Local inference, default AI Hub public model zoo (degirum/public):
degirum.connect(
inference_host_address="@local"
)
Local inference, specified AI Hub model zoo:
degirum.connect(
inference_host_address="@local",
zoo_url="workspace/zoo",
token="<your_token>"
)
Local inference, local folder:
degirum.connect(
inference_host_address="@local",
zoo_url="file://path/to/zoo"
)
Local inference, local model file:
degirum.connect(
inference_host_address="@local",
zoo_url="file://path/to/model.json"
)
Retrieve Supported Devices, Filter Models, then Load Models
ZooManager.supported_device_types()
The ZooManager.supported_device_types() method returns a list of runtime and device combinations (in "RUNTIME/DEVICE" format) that the connected inference engine supports.
Example:
import degirum as dg
# Set your inference host address, model zoo, and token in these variables.
your_host_address = "@cloud" # Can be "@cloud", host:port, or "@local"
your_model_zoo = "degirum/public"
your_token = "<your_token>"
# Connect to DeGirum Application Server and an AI Hub model zoo
inference_manager = dg.connect(
inference_host_address = your_host_address,
zoo_url = your_model_zoo,
token = your_token
)
supported_types = inference_manager.supported_device_types()
print(supported_types)
Output:
['N2X/ORCA1', 'TFLITE/EDGETPU', 'OPENVINO/CPU']
In this example, the inference engine returns the list of device types supported by the application server. In this case, that is the application server hosted on @cloud.
ZooManager.list_models()
After obtaining a ZooManager object, you can use the ZooManager.list_models() method to retrieve and filter the list of available AI models.
# Method Signature: ZooManager.list_models()
degirum.zoo_manager.ZooManager.list_models(*args, **kwargs)
This method:
Filters models based on various criteria such as model family, runtime, device type, precision, postprocessor type, and more.
Returns a list of model names that can be used later when loading models for inference.
Use Cases
Exploring available model families (e.g., mobilenet, YOLO).
Filtering models based on target hardware.
Selecting models for specific precision or density.
Example Usage
import degirum as dg
# Set your inference host address, model zoo, and token in these variables.
your_host_address = "@cloud" # Can be "@cloud", host:port, or "@local"
your_model_zoo = "degirum/public"
your_token = "<your_token>"
# Connect to DeGirum Application Server and an AI Hub model zoo
inference_manager = dg.connect(
inference_host_address = your_host_address,
zoo_url = your_model_zoo,
token = your_token
)
model_list = inference_manager.list_models(device_type=["OPENVINO/CPU"])
print(model_list)
In this example, the inference engine returns a list of models that run using the OpenVINO runtime on a CPU.
Available Filtering Parameters and Their Sources
The ZooManager.list_models() method filters models based on information retrieved from the model name string and from the model JSON fields.
For models named according to our recommended model naming conventions, the model name string encodes the model_family, precision, and pruned parameters. The model JSON file specifies the RuntimeAgent, DeviceType, and SupportedDeviceTypes fields.
model_family: any valid substring such as "yolo" or "mobilenet". Extracted from the model name.
precision: "quant" (quantized model) or "float" (floating-point model). Inferred from the presence of precision-related fields in the model name.
pruned: "dense" (dense model) or "pruned" (sparse/pruned model). Determined from suffixes indicating density in the model name (e.g., "pruned" or "dense").
runtime: see Supported Hardware for the full list. Combines information from "RuntimeAgent" and "SupportedDeviceTypes".
device: see Supported Hardware for the full list. Combines information from "DeviceType" and "SupportedDeviceTypes".
These filters can be combined in a single call, as shown in the sketch after this list.
Combine list_models() with supported_device_types() to Find Supported Models
To find only the models that are compatible with the current inference engine, you can use the result of supported_device_types() as a filter for list_models().
Example:
import degirum as dg
# Set your inference host address, model zoo, and token in these variables.
your_host_address = "@cloud" # Can be "@cloud", host:port, or "@local"
your_model_zoo = "degirum/public"
your_token = "<your_token>"
# Connect to DeGirum Application Server and an AI Hub model zoo
inference_manager = dg.connect(
inference_host_address = your_host_address,
zoo_url = your_model_zoo,
token = your_token
)
# Retrieve supported device types
supported_types = inference_manager.supported_device_types()
# List models that match any supported runtime/device combination
supported_models = inference_manager.list_models(device_type=list(supported_types))
print("Models supported by the current inference engine:")
for model in supported_models:
print(model)
ZooManager.load_model()
Once you have obtained the supported models by filtering list_models() with supported_device_types(), you can load any of the resulting models for inference using the degirum.zoo_manager.ZooManager.load_model() method.
Basic Usage
To load a model, pass the model name string as the model_name argument to load_model().
Example:
import degirum as dg
# Set your inference host address, model zoo, and token in these variables.
your_host_address = "@cloud" # Can be "@cloud", host:port, or "@local"
your_model_zoo = "degirum/public"
your_token = "<your_token>"
# Connect to DeGirum Application Server and an AI Hub model zoo
inference_manager = dg.connect(
inference_host_address = your_host_address,
zoo_url = your_model_zoo,
token = your_token
)
# Retrieve supported device types
supported_types = inference_manager.supported_device_types()
# List models that match any supported runtime/device combination
supported_models = inference_manager.list_models(device_type=list(supported_types))
# Load the first model from the list
model = inference_manager.load_model(model_name=supported_models[0])
If a model with the specified name is found, the method returns a degirum.model.Model object that you can use to run inference.
If the model is not found, an exception is raised.
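A minimal sketch of the failure path, assuming the inference_manager from the example above and a deliberately invalid model name:
# Requesting a model name that is not in the zoo raises an exception.
try:
    model = inference_manager.load_model(model_name="no-such-model")
except Exception as e:
    print(e)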
Passing Model Properties as Arguments
You can pass additional model properties as keyword arguments to customize the behavior of the loaded model. These properties are directly assigned to the model object.
Example:
import degirum as dg
# Set your inference host address, model zoo, and token in these variables.
your_host_address = "@cloud" # Can be "@cloud", host:port, or "@local"
your_model_zoo = "degirum/public"
your_token = "<your_token>"
# Connect to DeGirum Application Server and an AI Hub model zoo
inference_manager = dg.connect(
inference_host_address = your_host_address,
zoo_url = your_model_zoo,
token = your_token
)
# Retrieve supported device types
supported_types = inference_manager.supported_device_types()
# List models that match any supported runtime/device combination
supported_models = inference_manager.list_models(device_type=list(supported_types))
# Load the first model from the list
model = inference_manager.load_model(
model_name=supported_models[0],
output_confidence_threshold=0.5,
input_pad_method="letterbox"
)
print(model)
In this example:
output_confidence_threshold=0.5 sets a confidence threshold for inference results.
input_pad_method="letterbox" specifies the padding method used to maintain the input aspect ratio.
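Because these properties are assigned directly to the model object, the same configuration can also be applied after loading. This is a minimal sketch that assumes the properties shown are writable attributes of the loaded model, as the note above implies:
# Load the model first, then set the same properties by attribute assignment.
model = inference_manager.load_model(model_name=supported_models[0])
model.output_confidence_threshold = 0.5
model.input_pad_method = "letterbox"
print(model)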
Convenience Functions
degirum.get_supported_devices()
You can retrieve supported device types using the degirum.get_supported_devices() function. This function combines the arguments of degirum.connect() and ZooManager.supported_device_types(), allowing you to list supported devices with a single call.
Function Signature:
degirum.get_supported_devices(inference_host_address, zoo_url='', token='')
Example:
import degirum as dg
# Set your inference host address, model zoo, and token in these variables.
your_host_address = "@cloud" # Can be "@cloud", host:port, or "@local"
your_model_zoo = "degirum/public"
your_token = "<your_token>"
# Retrieve supported device types
supported_devices = dg.get_supported_devices(
inference_host_address = your_host_address,
zoo_url = your_model_zoo,
token = your_token
)
print(supported_devices)
degirum.list_models()
You can retrieve the list of models using the degirum.list_models() function. This function combines the arguments of degirum.connect() and degirum.zoo_manager.ZooManager.list_models(), allowing you to list models with a single call.
Function Signature:
degirum.list_models(inference_host_address, zoo_url, token=None, **kwargs)
Example:
import degirum as dg
# Set your inference host address, model zoo, and token in these variables.
your_host_address = "@cloud" # Can be "@cloud", host:port, or "@local"
your_model_zoo = "degirum/public"
your_token = "<your_token>"
# List models from the AI Hub model zoo with specific filtering criteria
model_list = dg.list_models(
inference_host_address = your_host_address,
zoo_url = your_model_zoo,
token = your_token,
device_type=["OPENVINO/CPU"]
)
print(model_list)
In this example, the function connects to the specified model zoo and returns a list of models that run with the OpenVINO runtime on a CPU.
degirum.load_model()
For convenience, you can load a model directly without explicitly obtaining a ZooManager object via degirum.connect(). The degirum.load_model() function combines the arguments of degirum.connect() and ZooManager.load_model(), allowing you to load models with a single call.
Function Signature:
degirum.load_model(model_name, inference_host_address, zoo_url=None, token=None, **kwargs)
Example:
import degirum as dg
# Set your inference host address, model zoo, and token in these variables.
your_host_address = "@cloud" # Can be "@cloud", host:port, or "@local"
your_model_zoo = "degirum/public"
your_token = "<your_token>"
# Retrieve supported device types
supported_devices = dg.get_supported_devices(
inference_host_address = your_host_address,
zoo_url = your_model_zoo,
token = your_token,
)
# List models from the AI Hub model zoo with specific filtering criteria
model_list = dg.list_models(
inference_host_address = your_host_address,
zoo_url = your_model_zoo,
token = your_token,
device_type=list(supported_devices)
)
# Load the first model from the list with some optional parameters
model = dg.load_model(
model_name = list(model_list)[0],
inference_host_address = your_host_address,
zoo_url = your_model_zoo,
token = your_token,
overlay_show_probabilities = True,
output_confidence_threshold = 0.5
)
print(model)
In this example, the code retrieves the supported devices, lists the matching models, and then loads the first model from the filtered list with the specified optional parameters.
Minimum Code Example
Once you understand the steps above, you can streamline the code into a few lines.
import degirum as dg
# Declaring variables
# Set your model name, inference host address, model zoo, and AI Hub token.
your_model_name = "model-name"
your_host_address = "@cloud" # Can be "@cloud", host:port, or "@local"
your_model_zoo = "degirum/public"
your_token = "<your_token>"
# Loading a model
model = dg.load_model(
model_name = your_model_name,
inference_host_address = your_host_address,
zoo_url = your_model_zoo,
token = your_token
# optional parameters, such as overlay_show_probabilities = True
)
print(model)