© 2025 DeGirum Corp.

Model Module


degirum.model.Model

Bases: ABC

Model class. Handles the whole inference lifecycle for a single model: input data preprocessing, inference, and postprocessing.

Note

You never construct model objects yourself – instead you call the zoo manager's load_model method (degirum.zoo_manager.ZooManager.load_model) to create instances for you.

custom_postprocessor

degirum.model.Model.custom_postprocessor

property writable

Custom postprocessor class. When not None, an object of this class is returned as the inference result. Such custom postprocessor classes must be inherited from the degirum.postprocessor.InferenceResults class.

device_type

degirum.model.Model.device_type

property writable

The type of the device to be used for model inference, in the format <runtime>/<device>.

The setter accepts either a string specifying a single device in the format <runtime>/<device>, or a list of such strings, in which case the first supported device type from the list is selected. The list of supported device types can be obtained from the supported_device_types property.

The getter returns the currently selected device type.
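
The setter's fallback behavior can be sketched in plain Python (an illustration of the documented rule, not PySDK code; the device-type strings below are just examples):

```python
def select_device_type(requested, supported):
    """Return the first requested '<runtime>/<device>' string the model supports,
    mimicking the documented fallback behavior of the device_type setter."""
    candidates = [requested] if isinstance(requested, str) else requested
    for dev in candidates:
        if dev in supported:
            return dev
    raise ValueError(f"none of {candidates} is supported; supported: {supported}")

# Prefer a GPU runtime, fall back to CPU when the GPU is not supported:
supported = ["HAILORT/HAILO8", "OPENVINO/CPU"]
print(select_device_type(["TENSORRT/GPU", "OPENVINO/CPU"], supported))  # OPENVINO/CPU
```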

devices_available

degirum.model.Model.devices_available

abstractmethod property

The list of inference device indices which can be used for model inference.

devices_selected

degirum.model.Model.devices_selected

property writable

The list of inference device indices selected for model inference.

eager_batch_size

degirum.model.Model.eager_batch_size

property writable

The size of the batch (the number of consecutive frames processed before this model is switched to another model during batch prediction) to be used by the device scheduler when inferencing this model.

frame_queue_depth

degirum.model.Model.frame_queue_depth

property writable

The depth of the model prediction queue. When the queue size reaches this value, the next prediction call will block until space becomes available in the queue.

image_backend

degirum.model.Model.image_backend

property writable

Graphical library (backend) to use for graphical tasks – one of 'pil', 'opencv', or 'auto'.

'auto' means OpenCV is tried first; if it is not installed, PIL is used.

inference_timeout_s

degirum.model.Model.inference_timeout_s

property writable

The maximum time in seconds to wait for inference result from the model.

input_crop_percentage

degirum.model.Model.input_crop_percentage

property writable

Percentage of image to crop around. Valid range: [0..1].

input_image_format

degirum.model.Model.input_image_format

property writable

Defines the image format for model inputs of image type – one of 'JPEG' or 'RAW'.

input_letterbox_fill_color

degirum.model.Model.input_letterbox_fill_color

property writable

3-element RGB tuple: the image fill color used in case of 'letterbox' padding (see the input_pad_method property for details).

input_numpy_colorspace

degirum.model.Model.input_numpy_colorspace

property writable

Input image colorspace – one of 'RGB', 'BGR', or 'auto'.

This parameter is used only to identify colorspace for NumPy arrays.

'auto' translates to 'BGR' for opencv backend, and to 'RGB' for pil backend.

input_pad_method

degirum.model.Model.input_pad_method

property writable

Input image pad method – one of 'stretch', 'letterbox', 'crop-first', or 'crop-last'.

  • In case of 'stretch', the input image is resized to the model input size without preserving the aspect ratio.

  • In case of 'letterbox', the input image is resized to the model input size preserving the aspect ratio; the resulting voids are filled with the solid color specified by the input_letterbox_fill_color property.

  • In case of 'crop-first', the input image is cropped to input_crop_percentage around the center and then resized.

  • In case of 'crop-last', if the model's input dimensions are square, the image is resized so that its smaller side matches the model dimension, preserving the aspect ratio; if the dimensions are rectangular, the image is resized and stretched to fit the model's input dimensions. After resizing, the image is cropped to the model's input dimensions and aspect ratio based on the input_crop_percentage property.

In all cases, the input_resize_method property specifies the resizing algorithm.
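
The arithmetic behind 'letterbox' padding can be sketched as follows (plain Python, not PySDK code; centered padding is an assumption made for illustration):

```python
def letterbox_geometry(model_wh, image_wh):
    """Compute the scaled size and the padding produced by 'letterbox'
    resizing: the image is scaled preserving aspect ratio and the remaining
    voids are padded (PySDK fills them with input_letterbox_fill_color)."""
    mw, mh = model_wh
    iw, ih = image_wh
    scale = min(mw / iw, mh / ih)               # fit without exceeding either side
    new_w, new_h = round(iw * scale), round(ih * scale)
    return (new_w, new_h), ((mw - new_w) // 2, (mh - new_h) // 2)

# A 1280x720 image letterboxed into a 640x640 model input:
print(letterbox_geometry((640, 640), (1280, 720)))  # ((640, 360), (0, 140))
```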

input_resize_method

degirum.model.Model.input_resize_method

property writable

Input image resize method – one of 'nearest', 'bilinear', 'area', 'bicubic', or 'lanczos'.

input_shape

degirum.model.Model.input_shape

property writable

Input tensor shapes. List of tensor shapes per input.

Each element of that list is another list containing tensor dimensions, slowest dimension first:

  • if InputShape model parameter is specified, its value is used.

  • otherwise, all defined InputN/H/W/C model parameters are used as [InputN, InputH, InputW, InputC] list.
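
The resolution order above can be sketched as follows (a plain-Python illustration; the dictionary-style parameter access is hypothetical, not the PySDK API):

```python
def resolve_input_shape(params):
    """Resolve a single input's tensor shape as described above:
    an explicit InputShape wins; otherwise [InputN, InputH, InputW, InputC]
    is assembled from the individual model parameters."""
    if params.get("InputShape"):
        return params["InputShape"]
    return [params[k] for k in ("InputN", "InputH", "InputW", "InputC")]

print(resolve_input_shape({"InputN": 1, "InputH": 224, "InputW": 224, "InputC": 3}))
# [1, 224, 224, 3]
print(resolve_input_shape({"InputShape": [1, 3, 224, 224]}))
# [1, 3, 224, 224]
```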

label_dictionary

degirum.model.Model.label_dictionary

abstractmethod property

Get model class label dictionary.

Each dictionary element is a key-value pair, where the key is the class ID and the value is the class label string.

measure_time

degirum.model.Model.measure_time

property writable

Flag to enable measuring and collecting inference time statistics. Call time_stats() to query the accumulated statistics.

model_info

degirum.model.Model.model_info

property

Return a model information object providing read-only access to model parameters.

A new deep copy is created each time.

non_blocking_batch_predict

degirum.model.Model.non_blocking_batch_predict

property writable

Flag to control the behavior of the generator object returned by the predict_batch() method.

  • When the flag is set to True, the generator accepts None from the inference input data iterator object (passed as data parameter): If None is returned, the model predict step is skipped for this iteration. Also, when no inference results are available in the result queue at this iteration, the generator yields None result.

  • When the flag is set to False (default value), the generator does not allow None to be returned from the inference input data iterator object: If None is returned, an exception is raised. Also, when no inference results are available in the result queue at this iteration, the generator continues to the next iteration of the input data iterator.

  • Setting this flag to True allows using predict_batch() generator in a non-blocking manner, assuming the design of input data iterator object is also non-blocking, i.e., returning None when no data is available instead of waiting for the data. Every next element request from the generator will not block the execution waiting for either input data or inference results, returning None when no results are available.
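
The skip-on-None consumption pattern this mode enables can be sketched without PySDK (the generator below merely simulates an input source that is sometimes idle):

```python
def non_blocking_source(frames):
    """Simulated non-blocking input iterator: yields None when no frame is
    ready yet, which is what predict_batch() expects from the data iterator
    when non_blocking_batch_predict is True."""
    for frame in frames:
        yield None      # pretend no data was available on this poll
        yield frame

# Consumer pattern: treat None as "nothing yet" instead of blocking.
results = []
for item in non_blocking_source(["frame1", "frame2"]):
    if item is None:
        continue        # no input or result available; do other work here
    results.append(item.upper())
print(results)  # ['FRAME1', 'FRAME2']
```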

output_class_set

degirum.model.Model.output_class_set

property writable

Labels filter: list of class labels/category IDs to be included in inference results.

Note

You can use the label_dictionary property to obtain the list of model classes.

output_confidence_threshold

degirum.model.Model.output_confidence_threshold

property writable

Confidence threshold used in inference result post-processing.

Valid range: [0..1].

Only objects with scores higher than this threshold are reported.

Note

For classification models, if the output_top_k parameter is set to a non-zero value, it supersedes this threshold: the output_top_k highest-scoring classes are always reported.

output_max_classes_per_detection

degirum.model.Model.output_max_classes_per_detection

property writable

Max Classes Per Detection number used in inference result post-processing; specifies the maximum number of highest-probability classes per anchor to be processed during the non-max suppression process for the fast algorithm.

Applicable only for detection models.

output_max_detections

degirum.model.Model.output_max_detections

property writable

Max Detections number used in inference result post-processing; specifies the maximum total number of objects to be detected.

Applicable only for detection models.

output_max_detections_per_class

degirum.model.Model.output_max_detections_per_class

property writable

Max Detections Per Class number used in inference result post-processing; specifies the maximum number of objects to keep during the per-class non-max suppression process for the regular algorithm.

Applicable only for detection models.

output_nms_threshold

degirum.model.Model.output_nms_threshold

property writable

Non-Max Suppression (NMS) threshold used in inference result post-processing.

Valid range: [0..1].

Applicable only for models which utilize NMS algorithm.

output_pose_threshold

degirum.model.Model.output_pose_threshold

property writable

Pose detection threshold used in inference result post-processing.

Valid range: [0..1].

Applicable only for pose detection models.

output_postprocess_type

degirum.model.Model.output_postprocess_type

property writable

Inference result post-processing type.

You may set it to 'None' to bypass post-processing.

output_top_k

degirum.model.Model.output_top_k

property writable

The number of highest-scoring classes to report for classification models. When set to 0, all classes with scores greater than output_confidence_threshold are reported.

output_use_regular_nms

degirum.model.Model.output_use_regular_nms

property writable

Use Regular NMS flag used in inference result post-processing; specifies the algorithm to use for detection post-processing.

If the value is True, the regular Non-Max Suppression algorithm is used – NMS is calculated for each class separately, and the results are then merged.

If the value is False, the fast Non-Max Suppression algorithm is used – NMS is calculated for all classes simultaneously.
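
The difference between the two algorithms can be sketched in plain Python (an illustration, not the PySDK implementation):

```python
def iou(a, b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def fast_nms(dets, thr):
    """Greedy NMS over (box, score, label) detections, all classes at once."""
    keep = []
    for d in sorted(dets, key=lambda d: -d[1]):
        if all(iou(d[0], k[0]) <= thr for k in keep):
            keep.append(d)
    return keep

def regular_nms(dets, thr):
    """Per-class NMS: suppress only within each class, then merge results."""
    labels = {d[2] for d in dets}
    return [k for c in labels for k in fast_nms([d for d in dets if d[2] == c], thr)]

# Two heavily overlapping boxes of different classes (IoU ~= 0.68):
dets = [([0, 0, 10, 10], 0.9, "cat"), ([1, 1, 11, 11], 0.8, "dog")]
print(len(fast_nms(dets, 0.5)))     # 1 -- fast NMS suppresses across classes
print(len(regular_nms(dets, 0.5)))  # 2 -- regular NMS keeps both classes
```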

overlay_alpha

degirum.model.Model.overlay_alpha

property writable

Alpha-blend weight for inference results drawing on overlay image.

A float in the range [0..1].

overlay_blur

degirum.model.Model.overlay_blur

property writable

Overlay blur option.

None for no blur, "all" to blur all objects, or a class label or list of class labels to blur specific objects.

overlay_color

degirum.model.Model.overlay_color

property writable

Color for inference results drawing on overlay image.

3-element RGB tuple or list of 3-element RGB tuples.

The overlay_color property is used to define the color to draw overlay details. In the case of a single RGB tuple, the corresponding color is used to draw all the overlay data: points, boxes, labels, segments, etc. In the case of a list of RGB tuples the behavior depends on the model type:

  • For classification models different colors from the list are used to draw labels of different classes.

  • For detection models different colors are used to draw labels and boxes of different classes.

  • For pose detection models different colors are used to draw keypoints of different persons.

  • For segmentation models different colors are used to highlight segments of different classes.

If the list size is less than the number of model classes, the overlay_color values are used cyclically: for example, for a three-element list the colors will be overlay_color[0], overlay_color[1], overlay_color[2], and then overlay_color[0] again.

The default value of overlay_color is a single RGB tuple of yellow color for all model types except segmentation models. For segmentation models it is a list of RGB tuples with the list size equal to the number of model classes; you can use the label_dictionary property to obtain the list of model classes. Each default color is automatically assigned to look distinct from the other colors in the list.
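
The cyclic color selection can be sketched as follows (plain Python, illustrating the rule above; not the PySDK drawing code):

```python
def class_color(overlay_color, class_id):
    """Pick the drawing color for a class ID: a single RGB tuple colors
    everything; a list of tuples is indexed cyclically by class ID."""
    if isinstance(overlay_color, tuple):
        return overlay_color
    return overlay_color[class_id % len(overlay_color)]

palette = [(255, 0, 0), (0, 255, 0), (0, 0, 255)]
print(class_color(palette, 4))        # (0, 255, 0) -- index 4 % 3 == 1
print(class_color((255, 255, 0), 7))  # (255, 255, 0) -- one color for all
```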

overlay_font_scale

degirum.model.Model.overlay_font_scale

property writable

Font scale for inference results drawing on overlay image.

A positive float.

overlay_line_width

degirum.model.Model.overlay_line_width

property writable

Line width for inference results drawing on overlay image.

overlay_show_labels

degirum.model.Model.overlay_show_labels

property writable

Flag to enable/disable drawing class labels on overlay image.

overlay_show_probabilities

degirum.model.Model.overlay_show_probabilities

property writable

Flag to enable/disable drawing class probabilities on overlay image.

save_model_image

degirum.model.Model.save_model_image

property writable

Flag to enable/disable saving of model input image in inference results.

The model input image is the image converted to the AI model input specifications, stored as a raw binary array.

supported_device_types

degirum.model.Model.supported_device_types

property

The list of device types supported by this model, in the format <runtime>/<device>.

__call__(data)

degirum.model.Model.__call__(data)

Perform the whole inference lifecycle: input data preprocessing, inference, and postprocessing. Same as predict().

__enter__

degirum.model.Model.__enter__()

Context manager enter handler.

__exit__(exc_type, ...)

degirum.model.Model.__exit__(exc_type, exc_val, exc_tb)

Context manager exit handler.

__init__(model_name, ...)

degirum.model.Model.__init__(model_name, model_params, supported_device_types)

Constructor.

Note

You never construct model objects yourself – instead you call the zoo manager's load_model method to create instances for you.

predict(data)

degirum.model.Model.predict(data)

Perform the whole inference lifecycle: input data preprocessing, inference, and postprocessing.

Parameters:

data (any, required) – Inference input data. The input data type depends on the model.

  • If the model expects image data, the input data is one of:

    • An input image path string.

    • A NumPy 3D array of pixels in HWC form, where the color dimension is native to the selected graphical backend (RGB for the 'pil' backend and BGR for the 'opencv' backend).

    • A PIL.Image object (only for the 'pil' backend).

  • If the model expects audio data, the input data is a NumPy 1D array of audio data samples.

  • If the model expects raw tensor data, the input data is a NumPy multidimensional array with shape matching the model input.

  • In case of a multi-input model, a list of elements of the supported data types is expected.

Returns:

InferenceResults – Inference result object, which allows you to access inference results as a dictionary or as an overlay image, if supported by the model. For your convenience, in case of detection models all image coordinates are converted from model coordinates to original image coordinates.
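
The coordinate conversion mentioned above can be illustrated for the 'letterbox' case: a box predicted in model-input coordinates is mapped back to the original image by undoing the scale and the padding (a sketch of the math, not the PySDK implementation; centered padding is assumed):

```python
def to_original_coords(box, model_wh, image_wh):
    """Map [x1, y1, x2, y2] from letterboxed model-input coordinates back to
    original-image coordinates (assumes centered letterbox padding)."""
    mw, mh = model_wh
    iw, ih = image_wh
    scale = min(mw / iw, mh / ih)
    pad_x = (mw - iw * scale) / 2
    pad_y = (mh - ih * scale) / 2
    x1, y1, x2, y2 = box
    return [(x1 - pad_x) / scale, (y1 - pad_y) / scale,
            (x2 - pad_x) / scale, (y2 - pad_y) / scale]

# A box detected in a 640x640 model input; the original image is 1280x720:
print(to_original_coords([0, 140, 640, 500], (640, 640), (1280, 720)))
# [0.0, 0.0, 1280.0, 720.0] -- the letterboxed area maps to the full image
```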

predict_batch(data)

degirum.model.Model.predict_batch(data)

Perform the whole inference lifecycle for all objects in the given iterator object (for example, a list).

Parameters:

data (iterator, required) – Inference input data iterator object, such as a list or a generator function.

Each element returned by this iterator can be one of the following:

  • A single input data object, in case of a single-input model.

  • A list of input data objects, in case of a multi-input model.

  • A tuple containing an input data object (or a list of input data objects) as the first element and a frame info object as the second element.

The input data object type depends on the model:

  • If the model expects image data, the input data object is one of:

    • An input image path string.

    • A NumPy 3D array of pixels in HWC form, where the color dimension is native to the selected graphical backend (RGB for the 'pil' backend and BGR for the 'opencv' backend).

    • A PIL.Image object (only for the 'pil' backend).

  • If the model expects audio data, the input data object is a NumPy 1D array of audio data samples.

  • If the model expects raw tensor data, the input data object is a NumPy multidimensional array with shape matching the model input.

The frame info object is passed to the inference result object unchanged and can be accessed via the info property of the inference result object.

Returns:

Iterator[InferenceResults] – Generator object which iterates over inference result objects. This allows you to use the result of predict_batch() directly in for loops.

Example

    for result in model.predict_batch(['image1.jpg','image2.jpg']):
        print(result)

predict_dir(path, ...)

degirum.model.Model.predict_dir(path, *, recursive=False, extensions=['.jpg', '.jpeg', '.png', '.bmp'])

Perform the whole inference lifecycle for all files from the specified directory matching the given file extensions.

Supports only single-input models.

Parameters:

path (str, required) – Directory name containing the files to be processed.

recursive (bool) – True to recursively walk through all subdirectories. Default: False.

extensions (str or list[str]) – Single string or list of strings containing the file extension(s) to process. Default: ['.jpg', '.jpeg', '.png', '.bmp'].

Returns:

Iterator[InferenceResults] – Generator object to iterate over inference result objects. This allows you to use the result of predict_dir() directly in for loops.

Example

    for result in model.predict_dir('some_path'):
        print(result)

reset_time_stats

degirum.model.Model.reset_time_stats()

Reset inference time statistics. The time_stats() method will return an empty dictionary after this call.

time_stats

degirum.model.Model.time_stats()

Query inference time statistics.

Returns:

dict – Dictionary containing time statistic objects.

  • A key in that dictionary is a string description of a particular inference step.

  • Each statistic object keeps the minimum, maximum, and average values in milliseconds, accumulated over all inferences performed on this model since model creation or the last call of reset_time_stats().

Time statistics are accumulated only when the measure_time property is set to True.
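
A sketch of how such a dictionary might be consumed (the step names and the min/max/avg attributes below are illustrative assumptions, not the exact PySDK object layout):

```python
class Stat:
    """Stand-in for a time statistic object with min/max/avg in milliseconds."""
    def __init__(self, mn, mx, avg):
        self.min, self.max, self.avg = mn, mx, avg

# Hypothetical shape of the dictionary returned by time_stats():
stats = {
    "FrameTotalDuration_ms": Stat(10.2, 25.7, 14.1),
    "CoreInferenceDuration_ms": Stat(6.0, 12.3, 7.8),
}

for step, s in stats.items():
    print(f"{step}: min={s.min} ms, max={s.max} ms, avg={s.avg} ms")
```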


This API Reference is based on PySDK 0.16.1.
