PySDK 0.9.2

Release Date: 09/06/2023

New Features and Modifications

  1. A plugin for the ONNX runtime is initially supported. This plugin allows performing inferences of ONNX AI models directly on the AI server host CPU without any AI accelerator.

    The ONNX runtime delivers better performance than the OpenVINO™ runtime on ARM64 platforms, while the OpenVINO™ runtime delivers better performance than the ONNX runtime on x86-64 platforms.
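
    A minimal usage sketch, assuming the degirum.connect entry point, an AI server host with the ONNX runtime plugin installed, and a placeholder hostname, token, and model name:

      import degirum as dg

      # Connect to an AI server host and use the DeGirum cloud model zoo;
      # the hostname, zoo URL path, and token below are placeholders
      zoo = dg.connect("my-aiserver-host", "https://cs.degirum.com/degirum/public", token="<cloud API token>")

      # Load a model compiled for the ONNX runtime: inference runs on the server host CPU
      model = zoo.load_model("mobilenet_v2--224x224_float_onnx_cpu_1")

      result = model("path/to/image.jpg")
      print(result)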

  2. Default values for some model properties have changed. The following is the list of changes (a sketch showing how to restore the previous defaults follows the list):

    • degirum.model.Model.input_image_format:

      Was: "JPEG" for cloud inference, "RAW" for all other inference types

      Now: "RAW" for local inference, "JPEG" for all other inference types

    • degirum.model.Model.input_numpy_colorspace:

      Was: "RGB"

      Now: "auto", meaning it will be "BGR" for OpenCV backend and "RGB" for PIL backend

  3. The meaning of the "auto" selection for the degirum.model.Model.image_backend property has changed:

    Was: try to use PIL first, and if not installed, use OpenCV

    Now: try to use OpenCV first, and if not installed, use PIL
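
    Applications that depend on a particular backend can pin it explicitly instead of relying on "auto". A short sketch:

      # Pin the image backend so the "auto" resolution order does not matter
      model.image_backend = "opencv"  # always use OpenCV
      # ...or:
      model.image_backend = "pil"     # always use PIL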

  4. The AI server protocol is improved for robustness. To deal with unreliable network connections, the following retries have been implemented (the sketch after this list illustrates the general pattern):

    • Retry on a client side when connecting to a server

    • Retry on a server side when connecting to a cloud zoo
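
    These retries are internal to PySDK and the AI server and require no application changes. For illustration only, the sketch below shows the general retry pattern with a fixed delay; it is not the actual implementation:

      import time

      def with_retries(connect, attempts=3, delay_s=1.0):
          """Call connect(), retrying on connection failures with a fixed delay."""
          for attempt in range(attempts):
              try:
                  return connect()
              except ConnectionError:
                  if attempt == attempts - 1:
                      raise  # out of retries: propagate the failure
                  time.sleep(delay_s)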

  5. An in-memory cache size limit and cache eviction mechanism is implemented for the AI server model cache. This greatly improves AI server robustness in cases when many different models are requested for inference during the AI server lifetime: previously, loading too many different models caused host memory exhaustion and an AI server crash.
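
    For illustration only, the sketch below shows the kind of least-recently-used eviction scheme such a size-limited cache typically employs; the actual AI server implementation may differ:

      from collections import OrderedDict

      class SizeLimitedCache:
          """Cache that evicts least-recently-used entries beyond a size limit."""

          def __init__(self, max_entries):
              self._max_entries = max_entries
              self._entries = OrderedDict()

          def get(self, key):
              self._entries.move_to_end(key)  # mark as most recently used
              return self._entries[key]

          def put(self, key, value):
              self._entries[key] = value
              self._entries.move_to_end(key)
              while len(self._entries) > self._max_entries:
                  self._entries.popitem(last=False)  # evict least recently used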

Bug Fixes

  1. If a cloud model zoo had capital letters in its name, models could not be loaded from that zoo. The following error message appeared in such cases:

    DegirumException: Model zoo 'myorg/ZooWithCaps' is not found. (cloud server response: 400 Client Error: Bad Request for url: https://cs.degirum.com/zoo/v1/public/models/myorg/ZooWithCaps)
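
    Loading from such a zoo now works as expected. A minimal sketch, assuming the degirum.connect entry point and a placeholder token:

      import degirum as dg

      # Zoo names containing capital letters are now resolved correctly
      zoo = dg.connect(dg.CLOUD, "https://cs.degirum.com/myorg/ZooWithCaps", token="<cloud API token>")
      print(zoo.list_models())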

  2. The degirum trace list command entrypoint did not list all available traces. In particular, traces defined in runtime plugins were not included in the list.

  3. If the AI server host computer has an integrated GPU, then this GPU, as well as the discrete GPU(s), was used by the OpenVINO runtime plugin for GPU-based inferences. Since an integrated GPU typically has much lower performance than discrete GPUs, this led to significant performance degradation when an inference happened to be scheduled on the integrated GPU. Now the integrated GPU is ignored if a discrete GPU is present in the system.

  4. Loading too many different TFLite models caused AI server host memory exhaustion and an AI server crash. To mitigate this bug, an in-memory cache size limit and cache eviction mechanism is implemented (see item 5 above).