PySDK Release Notes

Version 0.15.2 (02/25/2025)

New Features and Modifications

  1. The post-processor for YOLOv8 Oriented Bounding Box (OBB) detection models is implemented. The post-processor tag is "DetectionYoloV8OBB". This complements the OBB result drawing support added in version 0.15.1.

  2. The DLA_FALLBACK device type of the TENSORRT runtime is renamed to DLA; the former DLA device type is deprecated and removed. The DLA device type now refers to an inference mode in which inference runs mainly on Deep Learning Accelerator (DLA) hardware with GPU fallback enabled.

Bug Fixes

  1. The following error message appears when running simultaneous inference of two or more models using the Hailo runtime: "HailoRT Runtime Agent: Failed to configure infer model, status = HAILO_INVALID_OPERATION".

Version 0.15.1 (02/21/2025)

New Features and Modifications

  1. PySDK cloud inference now performs automatic reconnection to the cloud inference server in case of critical errors. This greatly improves the robustness of cloud inference in case of inference node failures: after reconnection, the cloud inference server assigns another healthy node for the inference, and PySDK retries inference of the not-yet-processed frames on that new node.

  2. YOLO Detection postprocessor now supports models with no grid tensors and separate detection heads.

  3. The image_overlay method of the PySDK detection post-processor now supports drawing of oriented bounding box (OBB) model results. OBB detection results carry an additional "angle" key in the result dictionary of each detected object, which specifies the bounding box rotation angle in radians, clockwise.
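
    For illustration, a minimal sketch of reading the rotation angle from OBB results; the model name, zoo URL, token, and image path are hypothetical placeholders:

    ```python
    import degirum as dg

    # Hypothetical OBB model; replace the name, zoo URL, and token with real values
    model = dg.load_model(
        "yolov8n_obb--1024x1024",
        dg.CLOUD,
        "https://hub.degirum.com",
        "<token>",
    )
    result = model("image.jpg")

    for obj in result.results:
        # "angle" is the bounding box rotation angle in radians, clockwise
        print(obj["label"], obj["bbox"], obj["angle"])

    image_with_obb = result.image_overlay  # renders the rotated boxes
    ```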

Bug Fixes

  1. The error message "The input shape parameter InputShape for input #0 must have 4 elements, while it has <N>" is produced for models with Tensor input type.

  2. degirum.list_models and degirum.zoo_manager.ZooManager.list_models methods did not filter out models not compatible with the connected inference engine for cloud model zoos when invoked with empty device, runtime, or device_type arguments.

  3. Multiple fixes for Hailo runtime agent:

    • added support for additional tensor format orders introduced in HailoRT 4.20: models with NMS layers caused crashes in the previous release;
    • the Hailo multi-process service is now used for inferences by default if it is running; otherwise, local inference mode is used;
    • default page alignment for data buffers is changed from 4KB to 16KB to avoid errors on systems with 4KB PCI page settings.
  4. Multiple fixes for Brainchip Akida runtime agent:

    • updated device names to uppercase (e.g., NSoC_v1 → NSOC_V1);
    • implemented validation of input/output dimensions during model loading to provide clear error reporting.

Version 0.15.0 (01/30/2025)

New Features and Modifications

  1. Starting from this version, PySDK no longer supports Python 3.8.

  2. MemryX AI accelerators are initially supported for Linux OS. The runtime/device designator for these devices is "MEMRYX/MX3". Please refer to installation instructions for installation details.

  3. BrainChip AI accelerators are initially supported for Linux OS for Akida Runtime version 2.11.0. The runtime/device designators for these devices are "AKIDA/NSoC_v1", "AKIDA/NSoC_v2", "AKIDA/AKD1500_v1". Please refer to installation instructions for installation details.

  4. Device selection by device index is implemented for the Hailo runtime agent. Now you can select the Hailo device(s) to be used for inference in the regular way by assigning the degirum.model.Model.devices_selected property of your model object.
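
    A minimal sketch, assuming a Hailo model object is already loaded (the device indices are illustrative):

    ```python
    print(model.devices_available)  # e.g., [0, 1]
    model.devices_selected = [0]    # run inference on Hailo device #0 only
    ```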

  5. ONNX runtime agent now supports ONNX runtime version 1.19.0 and prints the version in degirum sys-info command output.

  6. RKNN runtime agent now supports RKNN runtime version 2.3.0.

  7. OPENVINO runtime agent now supports OPENVINO runtime version 2024.6.0.

  8. HAILO runtime agent now supports HAILORT runtime version 4.20.

  9. The post-processor for YOLOv8 license plate detection models is implemented. The post-processor tag is "DetectionYoloV8Plates".

  10. Error messages generated by Python post-processors now include the filename and line number where the error occurs. This should simplify debugging of Python post-processor code.

Bug Fixes

  1. degirum.connect performance is greatly improved for large cloud zoos. In previous versions, the whole content of the cloud zoo, including all model parameters, was downloaded from the cloud server; for a public model zoo with 1000+ models this could take a few seconds over a slow Internet connection. Now the model parameters are downloaded on demand, and degirum.connect downloads only the model names.

Version 0.14.3 (12/26/2024)

New Features and Modifications

  1. Added support of OpenVINO version 2024.6.0.

  2. Added support of Intel Arc GPUs on Windows for OpenVINO runtime.

  3. Dropped support for OpenVINO versions 2022.1.1, 2023.2.0.

  4. The InputShape model configuration parameter is supported for models with "Image" input type. Now you may specify the input tensor shape in one line, "InputShape": [<N>, <H>, <W>, <C>], instead of providing four individual InputN/H/W/C parameters.
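
    A hedged sketch of the two equivalent forms, with illustrative values, shown as Python dicts:

    ```python
    # Old style: four individual parameters
    params_old = {"InputN": 1, "InputH": 224, "InputW": 224, "InputC": 3}

    # New style: a single InputShape parameter with the same meaning
    params_new = {"InputShape": [1, 224, 224, 3]}  # [N, H, W, C]
    ```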

  5. Default values for the following model configuration parameters are changed:

    | Model Parameter        | Old Default | New Default |
    | ---------------------- | ----------- | ----------- |
    | OutputConfThreshold    | 0.1         | 0.3         |
    | MaxDetections          | 20          | 100         |
    | MaxClassesPerDetection | 30          | 1           |
  6. Maximum supported model parameters configuration version is increased from 9 to 10.

  7. Separate output for pose keypoint heads is supported in pose detection post-processor.

  8. YOLOv8 instance segmentation post-processor is integrated.

Bug Fixes

  1. Numerous critical bugs have been fixed in Hailo runtime plugin.

    Note: Hailo plugin users are strongly recommended to upgrade to PySDK version 0.14.3.

  2. The following error may appear intermittently on systems NOT equipped with ORCA USB: "libusb: error [cache_config_descriptors] could not access configuration descriptor 0 (actual) for 'USB\VID_0627&PID_0001*': [995] The I/O operation has been aborted because of either a thread exit or an application request."

  3. AI Server configured with HTTP protocol may crash intermittently with Access-Violation/SEGFAULT error when model inference with erroneous configuration is requested.


Version 0.14.2 (12/16/2024)

New Features and Modifications

  1. Hailo AI accelerators are initially supported for Linux OS. The runtime/device designators for these devices are "HAILORT/HAILO8" and "HAILORT/HAILO8L". Please refer to installation instructions for installation details.

  2. TensorRT runtime agent changes:

    • supported TensorRT version is updated from 8.5 to 10.6;
    • added support for x86-64 architecture.
  3. ORCA1 firmware version 1.1.22 is included in this release. It contains numerous bug fixes targeted to improve reliability of ORCA1 USB operation.

  4. Pose detection postprocessor now supports models with multiple classes.

Bug Fixes

  1. Various bug fixes related to ORCA1 USB device operation.

  2. AI server memory monitoring of Python postprocessor execution sometimes gives false positives and prevents normal execution of Python postprocessors.


Version 0.14.1 (11/04/2024)

New Features and Modifications

  1. ORCA1 firmware version 1.1.21 is included in this release. It contains numerous bug fixes targeted to improve reliability of ORCA1 USB operations.

  2. An object blurring option is implemented in the PySDK object detection results renderer. To control blurring, use the degirum.model.Model.overlay_blur property of the Model class.

    • Assign None to disable blurring (this is the default value): model.overlay_blur = None.
    • To enable blurring of all detected bounding boxes, assign the string "all": model.overlay_blur = "all".
    • To enable blurring of bounding boxes belonging to a particular class, assign that class label string: model.overlay_blur = "car".
    • To enable blurring of bounding boxes belonging to a particular class list, assign a list of class label strings: model.overlay_blur = ["car", "person"].
  3. The YOLOv8 postprocessor now supports both normalized and regular bounding boxes. It automatically detects whether the boxes are normalized, and if they are normalized to unity, scales them to the image size. Note that box outputs are typically normalized for TFLite models, while ONNX models usually do not normalize them.

Bug Fixes

  1. An error message similar to "Shape of tensor passed as the input #0 does not match to model parameters. Expected tensor shape is (1, 0, 0, 77)" appears when performing AI server inference of a model with Tensor input type of fewer than 4 dimensions, when those dimensions are specified using the InputN/InputW/InputH/InputC model parameters (for example, InputN: 1, InputC: 77). The error does not appear when the dimensions are specified using the InputShape model parameter.

Version 0.14.0 (10/13/2024)

New Features and Modifications

  1. ORCA1 firmware version 1.1.19 is included in this release. It contains numerous bug fixes targeted to improve reliability of ORCA1 operation.

  2. A robust and secure Python postprocessor execution framework is implemented for the AI server. All Python postprocessor code is now executed in a separate process pool in sandboxed environments, as opposed to the in-process execution of previous PySDK versions.

  3. Device validation is implemented for the case when you try to load a model from a cloud model zoo and the inference device requested by that model is not available. In this case, the following exception is raised: "Model '{model}' does not have any supported runtime/device combinations that will work on this system."

  4. A timing attribute is added to the inference result base class degirum.postprocessor.InferenceResults. This attribute is populated with inference timing information when the degirum.model.Model.measure_time property is set to True. The timing information is represented as a dictionary with the same keys as returned by the degirum.model.Model.time_stats() method.
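
    A minimal sketch, assuming the model object and input image are already set up:

    ```python
    model.measure_time = True   # enable collection of timing information
    result = model("image.jpg")
    print(result.timing)        # same keys as returned by model.time_stats()
    ```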

Bug Fixes

  1. degirum.model.Model.output_class_set class label filtering is not applied when any degirum_tools analyzers are attached to the model object by degirum_tools.attach_analyzers().

  2. Significant (100x) performance drop of TFLITE/CPU model inference when more than one virtual CPU device is selected for the inference (which is the default condition).


Version 0.13.4 (9/21/2024)

New Features and Modifications

  1. AMD Vitis NPU is initially supported for Windows OS. The runtime/device designator for this device is "ONNX/VITIS_NPU".

  2. Variable number of landmarks is supported in pose detection postprocessor. This is needed to support new face keypoints recognition models.

  3. AI server ASIO protocol is improved to disconnect client in case of aborted inference without waiting for inference timeout.


Version 0.13.3 (9/12/2024)

New Features and Modifications

  1. ORCA1 firmware version 1.1.18 is included in this release. This firmware improves the mechanism of detection of DDR4 external memory link failures.

  2. The error handling of critical ORCA hardware errors is improved: when such an error is diagnosed during inference, the ORCA firmware is reloaded, ORCA is reinitialized, and the inference is retried once. If the retry succeeds, the error is not reported.

  3. The performance of HWC -> CHW conversion in AI server pre-processor is improved. This affects inference speed of ONNX models with NCHW input tensor layouts.

  4. The post-processor for YOLOv10 object detection models is implemented. The post-processor tag is "DetectionYoloV10".

  5. The cache-dump subcommand is added to the server command of the PySDK CLI. This subcommand queries the current state of the AI server runtime agent cache. Usage example: degirum server cache-dump --host <hostname>

  6. AI server tracing to stdout is implemented. To enable tracing, put __TraceToStdout=yes trace configuration option into dg_trace.ini trace configuration file. Traces will be printed to stdout in JSON format, compatible with log collection services such as DataDog, Loki/Grafana, and Elastic/Kibana. To enable tracing for all AI server events, additionally put AIServer=Detailed trace configuration option into dg_trace.ini trace configuration file.

    > *Note*: The `dg_trace.ini` trace configuration file is located in the `~/.local/share/DeGirum/trace` directory
    on Linux systems, and in the `%APPDATA%\DeGirum\traces` folder on Windows systems. If it is not there, just
    create it.
    

Bug Fixes

  1. When the cloud server responds with cloud inference error details, the detailed message is not included in the text of the raised exception.

Version 0.13.2 (7/26/2024)

Bug Fixes

  1. The N2X runtime agent fails to load on Linux systems when the /dev/bus/usb device is not available, making it impossible to use the N2X/ORCA1 and N2X/CPU inference devices on such systems. This problem affects PySDK installations running on virtual machines and inside Docker images started in non-privileged mode.

Version 0.13.1 (7/17/2024)

New Features and Modifications

  1. Added support of OpenVINO version 2024.2.0.

  2. YOLO segmentation model postprocessing support is implemented in degirum.postprocessor.DetectionResults class.

  3. The degirum version command is added to the PySDK CLI. Use this command to obtain the PySDK version.

  4. The degirum.zoo_manager.ZooManager.system_info() method is added. This method queries the system info dictionary of the attached inference engine. The format of this dictionary is the same as the output of the degirum sys-info command.
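
    For illustration, a minimal sketch, assuming an AI server running on localhost:

    ```python
    import degirum as dg

    zoo = dg.connect("localhost")  # attach to a local AI server
    info = zoo.system_info()       # same format as `degirum sys-info` output
    print(info)
    ```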

  5. Now there is no need to use a cloud API token to access the DeGirum public cloud model zoo, so the following code just works:

    ```python
    import degirum as dg
    zoo = dg.connect(dg.CLOUD)
    zoo.list_models()
    ```
    
  6. ORCA1 firmware version 1.1.15 is included in this release. This firmware implements measures to reinitialize DDR4 external memory link in case of failures. This reduces the probability of runtime errors such as "Timeout waiting for RPC EXEC completion".

  7. ORCA1 firmware is now loaded on AI server startup only in case of version mismatch or previously detected critical hardware error. In previous AI server versions it was reloaded unconditionally on every start.

  8. The degirum.model.Model.device_type property can now be assigned for single-device models (models for which the SupportedDeviceTypes model parameter is not defined). In previous PySDK versions such an assignment always generated the error "Model does not support dynamic device type selection: model property SupportedDeviceTypes is not defined".
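
    A minimal sketch (the device type string is illustrative):

    ```python
    # Retarget the model to a specific runtime/device pair
    model.device_type = "OPENVINO/CPU"
    ```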

Bug Fixes

  1. Google EdgeTPU AI accelerator support was broken in PySDK ver. 0.13.0. Now it is restored.

Version 0.13.0 (6/21/2024)

New Features and Modifications

  1. Plugin for RKNN runtime is initially supported. This plugin allows performing inferences of .rknn AI models on RockChip AI accelerators, including:

    - RK3588
    - RK3568
    
  2. TFLite plugin now supports the following inference delegates:

    • NXP VX
    • NXP Ethos-U
    • ArmNN
  3. The device_type keyword argument is added to degirum.zoo_manager.ZooManager.list_models method. It specifies the filter for target runtime/device combinations: the string or list of strings of full device type names in "RUNTIME/DEVICE" format. For example, the following code will return the list of models for N2X/ORCA1 runtime/device pair:

    ```python
    model_list = zoo.list_models(device_type="N2X/ORCA1")
    ```
    
  4. New functions have been added to PySDK top-level API:

    • degirum.list_models()
    • degirum.load_model()
    • degirum.get_supported_devices()

    These functions are intended to further simplify PySDK API.

    The function degirum.list_models() allows you to request the list of models without explicitly obtaining a ZooManager object via a degirum.connect() call. It combines the arguments of degirum.connect() and degirum.zoo_manager.ZooManager.list_models(), which appear one after another, for example:

    ```python
    model_list = degirum.list_models(
        degirum.CLOUD,
        "https://hub.degirum.com",
        "<token>",
        device_type="N2X/ORCA1",
    )
    ```

    The function degirum.load_model() allows you to load a model without explicitly obtaining a ZooManager object via a degirum.connect() call. It combines the arguments of degirum.connect() and degirum.zoo_manager.ZooManager.load_model(), with the model name going first. For example:

    ```python
    model = degirum.load_model(
        "mobilenet_v2_ssd_coco--300x300_quant_n2x_orca_1",
        degirum.CLOUD,
        "https://hub.degirum.com",
        "<token>",
        output_confidence_threshold=0.5,
    )
    ```

    The function degirum.get_supported_devices() allows you to obtain the list of runtime/device combinations supported by the inference engine of your choice. It accepts the inference engine designator as its first argument and returns the list of supported device type strings in the "RUNTIME/DEVICE" format. For example, the following call requests the list of runtime/device combinations supported by the AI server on localhost:

    ```python
    supported_device_types = degirum.get_supported_devices("localhost")
    ```
    
  5. The post-processor for YOLOv8 pose detection models is implemented. The post-processor tag is "PoseDetectionYoloV8".

  6. The pre-processor letter-boxing implementation is changed to match the Ultralytics implementation for a better mAP match.
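
    For reference, a minimal sketch of Ultralytics-style letter-boxing; this illustrates the technique only and is not the actual PySDK pre-processor code. The padding value of 114 follows the Ultralytics reference:

    ```python
    import cv2
    import numpy as np

    def letterbox(image: np.ndarray, new_shape=(640, 640), pad_value=114):
        """Resize keeping aspect ratio, then pad symmetrically to new_shape (h, w)."""
        h, w = image.shape[:2]
        scale = min(new_shape[0] / h, new_shape[1] / w)
        resized = cv2.resize(image, (round(w * scale), round(h * scale)))
        dh = new_shape[0] - resized.shape[0]   # total vertical padding
        dw = new_shape[1] - resized.shape[1]   # total horizontal padding
        top, bottom = dh // 2, dh - dh // 2
        left, right = dw // 2, dw - dw // 2
        return cv2.copyMakeBorder(
            resized, top, bottom, left, right,
            cv2.BORDER_CONSTANT, value=(pad_value,) * 3
        )
    ```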

  7. ORCA firmware loading time is reduced by 3 seconds.

Bug Fixes

  1. "Timeout 10000 ms waiting for response from AI server" error may happen intermittently at the inference start of a cloud model on AI server, when AI server has unreliable connection to the Internet due to incorrect timeouts on the client side.

  2. The model filtering functionality of the degirum.zoo_manager.ZooManager.list_models method works incorrectly with multi-device models having device wildcards in SupportedDeviceTypes. For example, if a model has SupportedDeviceTypes: "OPENVINO/*", the call zoo.list_models(device="ORCA1") returns that model even though the "ORCA1" device is not supported by the "OPENVINO" runtime.