Loading an AI Model

This is an end-to-end guide to loading a model. You'll start by connecting to an inference engine and model zoo, learn how to filter model lists, and then load a model.

Connect to an Inference Engine and Model Zoo

The degirum.connect() function is the starting point for interacting with PySDK. It establishes a connection with the appropriate AI inference engine and model zoo based on the configuration you provide.

import degirum as dg

zoo = dg.connect(
    inference_host_address = dg.CLOUD,
    zoo_url = "org_name/zoo_name",
    token = "<your token>"
)

When you call degirum.connect(), you will:

  • Specify an inference host to run AI models

  • Specify a model zoo from which AI models can be loaded

  • Authenticate with the AI Hub using a token

degirum.connect() creates and returns a ZooManager object. This object enables:

  • Searching for models available in the connected model zoo.

  • Loading AI models and creating appropriate AI model handling objects for inference.

  • Accessing model parameters to customize inference behavior.
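
For orientation, here is a minimal sketch of those three capabilities on the returned object; each one is covered in detail below. The "yolo" family filter, the "model-name" placeholder, and the confidence threshold are illustrative values, not required settings.

import degirum as dg

zoo = dg.connect(
    inference_host_address = dg.CLOUD,
    zoo_url = "degirum/public",
    token = "<your token>"
)

# 1. Search for models available in the connected model zoo
print(zoo.list_models(model_family="yolo"))

# 2. Load an AI model and create a model handling object for inference
model = zoo.load_model(model_name="model-name")  # placeholder model name

# 3. Access model parameters to customize inference behavior
model.output_confidence_threshold = 0.5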

degirum.connect()

degirum.connect() takes three parameters: inference_host_address, zoo_url, and token. These parameters define where inference will run, which model zoo will be used, and the token used for AI Hub authentication when an AI Hub model zoo is used.

# Function Signature: degirum.connect() 
degirum.connect(inference_host_address, zoo_url=None, token=None)

degirum.connect() supports the following combinations of inference type and model zoo type:

AI Hub inference with the default AI Hub Public Model Zoo (degirum/public):

degirum.connect(
    inference_host_address = dg.CLOUD,
    token = "<your token>"
)

AI Hub inference with a specified AI Hub model zoo:

degirum.connect(
    inference_host_address = dg.CLOUD,
    zoo_url = "org_name/zoo_name",
    token = "<your token>"
)

AI Server inference with the default model zoo (a local folder on the server):

degirum.connect(
    inference_host_address = "host:port"
)

AI Server inference with a specified AI Hub model zoo:

degirum.connect(
    inference_host_address = "host:port",
    zoo_url = "org_name/zoo_name",
    token = "<your token>"
)

Local inference with the default AI Hub Public Model Zoo (degirum/public):

degirum.connect(
    inference_host_address = dg.LOCAL
)

Local inference with a specified AI Hub model zoo:

degirum.connect(
    inference_host_address = dg.LOCAL,
    zoo_url = "org_name/zoo_name",
    token = "<your token>"
)

Local inference with a local folder model zoo:

degirum.connect(
    inference_host_address = dg.LOCAL,
    zoo_url = "/path/to/zoo/dir"
)

Local inference with a single local model file:

degirum.connect(
    inference_host_address = dg.LOCAL,
    zoo_url = "/path/to/model.json"
)

Your AI Hub token is needed only for AI Hub inference or for private model zoos.
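
To avoid hard-coding the token, a common approach is to read it from an environment variable. A minimal sketch, assuming you have stored your token in a variable named DEGIRUM_CLOUD_TOKEN (the name is an arbitrary choice for this example, not one PySDK requires):

import os
import degirum as dg

# Read the AI Hub token from an environment variable (name is illustrative)
your_token = os.environ.get("DEGIRUM_CLOUD_TOKEN", "")

zoo = dg.connect(
    inference_host_address = dg.CLOUD,
    zoo_url = "degirum/public",
    token = your_token
)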

Retrieve Supported Devices, Filter Models, then Load Models

ZooManager.supported_device_types()

The ZooManager.supported_device_types() method returns a list of runtime and device combinations (in "RUNTIME/DEVICE" format) that the connected inference engine supports.

Example:

import degirum as dg

# Set your inference host address, model zoo, and token in these variables.
your_host_address = dg.CLOUD # Can be dg.CLOUD, host:port, or dg.LOCAL
your_model_zoo = "degirum/public"
your_token = "<token>"

# Connect to DeGirum Application Server and an AI Hub model zoo
inference_manager = dg.connect(
    inference_host_address = your_host_address, 
    zoo_url = your_model_zoo, 
    token = your_token
)
supported_types = inference_manager.supported_device_types()
print(supported_types)

Output:

['N2X/ORCA1', 'TFLITE/EDGETPU', 'OPENVINO/CPU']

In this example, the inference engine returns the list of device types supported by the application server; in this case, the application server hosted on dg.CLOUD.

ZooManager.list_models()

After obtaining a ZooManager object, you can use the ZooManager.list_models() method to retrieve and filter the list of available AI models.

# Method Signature: ZooManager.list_models()
degirum.zoo_manager.ZooManager.list_models(*args, **kwargs)

This method:

  • Filters models based on various criteria such as model family, runtime, device type, precision, postprocessor type, and more.

  • Returns a list of model names that can be used later when loading models for inference.

Use Cases

  • Exploring available model families (e.g., mobilenet, YOLO).

  • Filtering models based on target hardware.

  • Selecting models with a specific precision or density.

Example Usage

import degirum as dg

# Set your inference host address, model zoo, and token in these variables.
your_host_address = dg.CLOUD # Can be dg.CLOUD, host:port, or dg.LOCAL
your_model_zoo = "degirum/public"
your_token = "<token>"

# Connect to DeGirum Application Server and an AI Hub model zoo
inference_manager = dg.connect(
    inference_host_address = your_host_address, 
    zoo_url = your_model_zoo, 
    token = your_token
)
model_list = inference_manager.list_models(device_type=["OPENVINO/CPU"])
print(model_list)

In this example, the inference engine returns a list of models that run using the OpenVINO runtime on a CPU.

Available Filtering Parameters and Their Sources

The ZooManager.list_models() method filters models based on information extracted from the model name string and from fields in the model JSON file.

The model JSON file specifies the RuntimeAgent, DeviceType, and SupportedDeviceTypes fields.

  • model_family: any valid substring of the model name, such as "yolo" or "mobilenet". Extracted from the model name.

  • precision: "quant" (quantized model) or "float" (floating-point model). Inferred from the presence of precision-related fields in the model name.

  • pruned: "dense" (dense model) or "pruned" (sparse/pruned model). Determined from suffixes indicating density in the model name (e.g., "pruned" or "dense").

  • runtime: combines information from the "RuntimeAgent" and "SupportedDeviceTypes" fields.

  • device: combines information from the "DeviceType" and "SupportedDeviceTypes" fields.

  • device_type: extracted from the "SupportedDeviceTypes" field; values use the "RUNTIME/DEVICE" format (e.g., "OPENVINO/CPU").
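
These filtering parameters can be combined in a single call. As a sketch, the following narrows the list to quantized YOLO-family models that run with the OpenVINO runtime on a CPU (assuming the connected zoo contains such models):

import degirum as dg

# Set your inference host address, model zoo, and token in these variables.
your_host_address = dg.CLOUD # Can be dg.CLOUD, host:port, or dg.LOCAL
your_model_zoo = "degirum/public"
your_token = "<token>"

# Connect to DeGirum Application Server and an AI Hub model zoo
inference_manager = dg.connect(
    inference_host_address = your_host_address,
    zoo_url = your_model_zoo,
    token = your_token
)

# Combine several filters in one call
model_list = inference_manager.list_models(
    model_family = "yolo",            # model name contains "yolo"
    precision = "quant",              # quantized models only
    device_type = ["OPENVINO/CPU"]    # OpenVINO runtime on CPU
)
print(model_list)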

Combine list_models() with supported_device_types() to Find Supported Models

To find only the models that are compatible with the current inference engine, you can use the supported_device_types() method as a filter for list_models().

Example:

import degirum as dg

# Set your inference host address, model zoo, and token in these variables.
your_host_address = dg.CLOUD # Can be dg.CLOUD, host:port, or dg.LOCAL
your_model_zoo = "degirum/public"
your_token = "<token>"

# Connect to DeGirum Application Server and an AI Hub model zoo
inference_manager = dg.connect(
    inference_host_address = your_host_address, 
    zoo_url = your_model_zoo, 
    token = your_token
)

# Retrieve supported device types
supported_types = inference_manager.supported_device_types()

# List models that match any supported runtime/device combination
supported_models = inference_manager.list_models(device_type=list(supported_types))

print("Models supported by the current inference engine:")
for model in supported_models:
    print(model)

ZooManager.load_model()

Once you have filtered list_models() with supported_device_types() to obtain the supported models, you can load any of the resulting models for inference using the degirum.zoo_manager.ZooManager.load_model() method.

Basic Usage

To load a model, pass the model name string as the model_name argument to load_model().

Example:

import degirum as dg

# Set your inference host address, model zoo, and token in these variables.
your_host_address = dg.CLOUD # Can be dg.CLOUD, host:port, or dg.LOCAL
your_model_zoo = "degirum/public"
your_token = "<token>"

# Connect to DeGirum Application Server and an AI Hub model zoo
inference_manager = dg.connect(
    inference_host_address = your_host_address, 
    zoo_url = your_model_zoo, 
    token = your_token
)

# Retrieve supported device types
supported_types = inference_manager.supported_device_types()

# List models that match any supported runtime/device combination
supported_models = inference_manager.list_models(device_type=list(supported_types))

# Load the first model from the list
model = inference_manager.load_model(model_name=supported_models[0])

  • If a model with the specified name is found, the method returns a degirum.model.Model object that you can use to run inference.

  • If the model is not found, an exception is raised, so you may want to guard the call as sketched below.
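
A minimal sketch of guarding against a missing model. The specific exception type is not documented here, so this catches the base Exception purely for illustration; "model-name" is a placeholder:

import degirum as dg

your_host_address = dg.CLOUD # Can be dg.CLOUD, host:port, or dg.LOCAL
your_model_zoo = "degirum/public"
your_token = "<token>"

inference_manager = dg.connect(
    inference_host_address = your_host_address,
    zoo_url = your_model_zoo,
    token = your_token
)

try:
    # Use a model name returned by list_models()
    model = inference_manager.load_model(model_name="model-name")
except Exception as e:
    print(f"Model could not be loaded: {e}")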

Passing Model Properties as Arguments

You can pass additional model properties as keyword arguments to customize the behavior of the loaded model. These properties are directly assigned to the model object.

Example:

import degirum as dg

# Set your inference host address, model zoo, and token in these variables.
your_host_address = dg.CLOUD # Can be dg.CLOUD, host:port, or dg.LOCAL
your_model_zoo = "degirum/public"
your_token = "<token>"

# Connect to DeGirum Application Server and an AI Hub model zoo
inference_manager = dg.connect(
    inference_host_address = your_host_address, 
    zoo_url = your_model_zoo, 
    token = your_token
)

# Retrieve supported device types
supported_types = inference_manager.supported_device_types()

# List models that match any supported runtime/device combination
supported_models = inference_manager.list_models(device_type=list(supported_types))

# Load the first model from the list
model = inference_manager.load_model(
    model_name=supported_models[0],
    output_confidence_threshold=0.5, 
    input_pad_method="letterbox"
)
print(model)

In this example:

  • output_confidence_threshold=0.5 sets a confidence threshold for inference results.

  • input_pad_method="letterbox" specifies the padding method to maintain the input aspect ratio.
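
Because these keyword arguments are assigned directly to the model object, you can equivalently set the same properties after loading. Continuing the example above, a short sketch under that assumption:

# Equivalent to passing the properties as keyword arguments to load_model()
model = inference_manager.load_model(model_name=supported_models[0])
model.output_confidence_threshold = 0.5   # confidence threshold for results
model.input_pad_method = "letterbox"      # keep input aspect ratio via padding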

Convenience Functions

degirum.get_supported_devices()

You can retrieve supported device types using the degirum.get_supported_devices() function. This function combines the arguments of degirum.connect() and degirum.zoo_manager.ZooManager.supported_device_types(), allowing you to list supported devices with a single call.

Function Signature:

degirum.get_supported_devices(inference_host_address, zoo_url='', token='')

Example:

import degirum as dg

# Set your inference host address, model zoo, and token in these variables.
your_host_address = dg.CLOUD # Can be dg.CLOUD, host:port, or dg.LOCAL
your_model_zoo = "degirum/public"
your_token = "<token>"

# Retrieve supported device types
supported_devices = dg.get_supported_devices(
    inference_host_address = your_host_address, 
    zoo_url = your_model_zoo, 
    token = your_token
)
print(supported_devices)

degirum.list_models()

You can retrieve the list of models using the degirum.list_models() function. This function combines the arguments of both degirum.connect() and degirum.zoo_manager.ZooManager.list_models(), allowing you to list models with a single call.

Function Signature:

degirum.list_models(inference_host_address, zoo_url, token=None, **kwargs)

Example:

import degirum as dg

# Set your inference host address, model zoo, and token in these variables.
your_host_address = dg.CLOUD # Can be dg.CLOUD, host:port, or dg.LOCAL
your_model_zoo = "degirum/public"
your_token = "<token>"

# List models from the AI Hub model zoo with specific filtering criteria
model_list = dg.list_models(
    inference_host_address = your_host_address, 
    zoo_url = your_model_zoo, 
    token = your_token, 
    device_type=["OPENVINO/CPU"]
)
print(model_list)

In this example, the function connects to the specified model zoo and returns a list of models that run with the OpenVINO runtime on a CPU.

degirum.load_model()

For convenience, you can directly load a model without explicitly obtaining a ZooManager object with degirum.connect(). The degirum.load_model() function combines the arguments of degirum.connect() and ZooManager.load_model(), allowing you to load models with a single call.

Function Signature:

degirum.load_model(model_name, inference_host_address, zoo_url=None, token=None, **kwargs)

Example:

import degirum as dg

# Set your inference host address, model zoo, and token in these variables.
your_host_address = dg.CLOUD # Can be dg.CLOUD, host:port, or dg.LOCAL
your_model_zoo = "degirum/public"
your_token = "<token>"

# Retrieve supported device types
supported_devices = dg.get_supported_devices(
    inference_host_address = your_host_address, 
    zoo_url = your_model_zoo, 
    token = your_token, 
)

# List models from the AI Hub model zoo with specific filtering criteria
model_list = dg.list_models(
    inference_host_address = your_host_address, 
    zoo_url = your_model_zoo, 
    token = your_token,
    device_type=list(supported_devices)
)

# Load the first model from the list with some optional parameters
model = dg.load_model(
    model_name = model_list[0],
    inference_host_address = your_host_address,
    zoo_url = your_model_zoo,
    token = your_token,
    overlay_show_probabilities = True,
    output_confidence_threshold = 0.5
)
print(model)

In this example, the code gets the supported devices, lists matching models, and then loads a model from the filtered list with optional parameters.

Minimum Code Example

After you have mastered the steps above, you can distill your code into just a few lines.

import degirum as dg

# Declaring variables
# Set your model name, inference host address, model zoo, and AI Hub token.
your_model_name = "model-name"
your_host_address = dg.CLOUD # Can be dg.CLOUD, host:port, or dg.LOCAL
your_model_zoo = "degirum/public"
your_token = "<token>"

# Loading a model
model = dg.load_model(
    model_name = your_model_name, 
    inference_host_address = your_host_address, 
    zoo_url = your_model_zoo, 
    token = your_token 
    # optional parameters, such as overlay_show_probabilities = True
)
print(model)
