PySDK Release Notes

Version 0.11.0 (2/10/2024)

New Features and Modifications

  1. Support for multiple OpenVINO versions is implemented. PySDK can now work with the following OpenVINO versions:

    • 2022.1.1
    • 2023.2.0
    • 2023.3.0

    When two or more OpenVINO installations are present on a system, the newest version will be used.

  2. Results filtering by class labels and category IDs is implemented: a new output_class_set property is added to the degirum.model.Model class for this purpose.

    By default, all results are reported by the model predict methods. However, you may want to include only results that belong to certain categories, identified either by class labels or by category IDs. To achieve that, assign a set of class labels (or, alternatively, category IDs) to the degirum.model.Model.output_class_set property: only inference results whose class labels (or category IDs) are found in that set are reported, and all other results are discarded.

    For example, you may want to include only results with class labels "car" and "truck":

    # allow only results with "car" and "truck" class labels
    model.output_class_set = {"car", "truck"}
    

    Or you may want to include only results with category IDs 1 and 3:

    # allow only results with 1 and 3 category IDs
    model.output_class_set = {1, 3}
    

    This category filtering applies only to models that have "label" (or "category_id") keys in their result dictionaries. For all other models this filter is ignored.

Bug Fixes

  1. When two different models have two different Python postprocessor implementations saved in files with the same name, only the first Python postprocessor module gets loaded on the AI server. This happens because the module is registered in the global sys.modules collection under a name derived from the file name, so two files with the same name collide.

  2. When the implementation of a Python postprocessor in a model gets changed and that model was already loaded on the AI server, the Python postprocessor module is not reloaded on the next model load. This is because once a Python module is loaded into the interpreter, it is saved in the sys.modules collection, and any attempt to load it again simply takes it from there.
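
    For reference, both fixes stem from standard Python module caching semantics, illustrated by this minimal sketch:

    import sys
    import importlib

    import json                            # first import: module object is created and cached
    assert "json" in sys.modules           # subsequent imports return the cached object
    importlib.reload(sys.modules["json"])  # an explicit reload is needed to pick up changes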

  3. Performing inferences with the ONNX runtime agent (degirum.model.Model.model_info.RuntimeAgent equal to "ONNX") may cause the AI server to crash.


Version 0.10.4 (1/24/2024)

New Features and Modifications

The dependency on the CoreClient PySDK module is now resolved on demand: the CoreClient module is loaded only when local inference is invoked or when a local AI server is started from PySDK. This allows using the cloud and AI server client functionality of PySDK on systems where the CoreClient module's dependencies are missing.

Bug Fixes

Fixed a bug in the YOLOv8 post-processor affecting models with non-square input tensors. Previously, the Y-coordinate (height) of all detections coming from YOLOv8 models whose input image width is not equal to its height was misinterpreted; now the behavior is correct.


Version 0.10.3 (1/17/2024)

New Features and Modifications

  1. ORCA1 firmware version 1.1.9 is included in this release. This firmware implements measures to improve data integrity of DDR4 external memory when entering/leaving low-power mode.

  2. To avoid possible future incompatibilities, the PySDK package requirements now explicitly cap each dependency's upper version at one major revision above the corresponding lower version. For example, requests >= 2.30.0 becomes requests >= 2.30.0, < 3.0.

  3. AI annotation drawing performance is greatly improved for object detection annotations.

  4. The default value of the alpha blending coefficient is set to 1.0, which disables blending. This is a performance-improvement measure.
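
    If you prefer translucent overlays, blending can be re-enabled explicitly. A minimal sketch, assuming the coefficient is exposed via the overlay_alpha model property:

    # re-enable alpha blending of annotation overlays (1.0 = opaque, no blending)
    model.overlay_alpha = 0.5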

  5. Color selection for different classes is improved for the case when a list of colors is assigned to the degirum.model.Model.overlay_color property. The color is now selected based on the class ID when the object class ID is present in the model dictionary; otherwise a new unique color is assigned to the class and associated with its class label. This mechanism produces a stable color-to-class assignment from frame to frame and also allows combining results of multiple models in a single annotation, assigning different colors to classes that may have the same class IDs but different class labels.
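
    For example, a minimal sketch of assigning such a color list (assuming colors are specified as RGB tuples):

    # one color per class; classes beyond the list get new unique colors
    model.overlay_color = [(255, 0, 0), (0, 255, 0), (0, 0, 255)]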

  6. Printing scores on AI annotations is now performed with a type-dependent format: if the score is of integer type, no fractional part is printed. This improves readability for regression models that produce integer results.

  7. Quality of OpenCV font used for AI annotations is improved.

  8. Model statistics formatting now uses wider columns to accommodate long statistics.


Version 0.10.2 (12/1/2023)

Discontinued Functionality

The N2X compiler support for DeGirum Orca 1.0 devices is discontinued. Starting from this version, the N2X compiler cannot compile models for Orca 1.0 devices; only Orca 1.1 devices are supported.

However, runtime operations for Orca 1.0 devices are still fully supported: you can continue to use Orca 1.0 devices with already compiled models.

Bug Fixes

The degirum server rescan-zoo and degirum server shutdown CLI commands do not work with the new HTTP AI server protocol. An attempt to execute these commands for AI servers launched with the HTTP protocol option causes error messages.


Version 0.10.1 (11/2/2023)

New Features and Modifications

  1. The HTTP+WebSocket AI server protocol is initially supported for DeGirum AI Server.

    Starting from PySDK version 0.10.0, the AI server supports two protocols: asio and http. The asio protocol is DeGirum's custom socket-based AI server protocol, supported by all previous PySDK versions. The http protocol is new; it is based on REST HTTP requests and WebSocket streaming. The http protocol makes it possible to use the AI server from any programming language that supports HTTP requests and WebSockets, such as browser-based JavaScript, which does not support native sockets and thus cannot use the asio protocol.

    When you start the AI server by executing the degirum server start command, you specify the protocol using the --protocol parameter, which can be asio, http, or both.

    If you omit this parameter, the asio protocol is used by default to remain compatible with previous PySDK versions.

    You select the http protocol by specifying --protocol http.

    You may select both protocols by specifying --protocol both. In this case, the AI server listens on two consecutive TCP ports: the first port is used for the asio protocol, the second for the http protocol.

    For example, to start the AI server serving models from the ./my-zoo directory, with the asio protocol on port 12345 and the http protocol on port 12346:

    degirum server start --zoo ./my-zoo --port 12345 --protocol both
    

    On the client side, when you connect to the AI server with the http protocol, you must prefix the AI server hostname with http://, for example:

    zoo = dg.connect("http://localhost")
    

    To connect to the AI server with the asio protocol, simply omit the protocol prefix, for example:
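
    zoo = dg.connect("localhost")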

  2. Now you may pass arbitrary model properties (properties of the degirum.model.Model class) as keyword arguments to the degirum.zoo_manager.ZooManager.load_model method. These properties will be assigned to the model object.

    For example:

    model = zoo.load_model(model_name, output_confidence_threshold=0.5, input_pad_method="letterbox")
    
  3. Multi-classifier (or multi-label) classification models are initially supported. The post-processor type string assigned to the OutputPostprocessType model parameter is "MultiLabelClassification". Each inference result dictionary contains the following keys:

    • classifier: object class string.
    • results: list of class labels and their scores. Scores are optional.

    The results list element is a dictionary with the following keys:

    • label: class label string.
    • score: optional class label probability.

    Example:

    [
        {
            'classifier': 'vehicle color',
            'results': [
                {'label': 'red', 'score': 0.99},
                {'label': 'blue', 'score': 0.01}
            ]
        },
        {
            'classifier': 'vehicle type',
            'results': [
                {'label': 'car', 'score': 0.99},
                {'label': 'truck', 'score': 0.01}   
            ]
        }   
    ]
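
    A minimal sketch of consuming such results, assuming result is the degirum.postprocessor.InferenceResults object returned by the model's predict method:

    # each element of result.results describes one classifier head
    for classifier_result in result.results:
        print(classifier_result["classifier"])
        for entry in classifier_result["results"]:
            score = entry.get("score")  # score is optional
            label = entry["label"]
            print(f"  {label}: {score}" if score is not None else f"  {label}")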
    

Bug Fixes

  1. An unclear error message, 'NoneType' object has no attribute 'shape', appears when a non-existent file is supplied for model inference.

  2. Local AI inference of a model with a Python post-processor hangs on model destruction due to a Python GIL deadlock.

  3. The degirum sys-info command re-initializes DeGirum Orca AI accelerator hardware in a way that is not interprocess-safe, disrupting the operation of other processes using the same Orca accelerator hardware. The first attempt to fix this bug was made in PySDK version 0.9.6; this release fixes it completely.


Version 0.9.6 (10/17/2023)

New Features and Modifications

  1. A new CoreInputFrameSize_bytes statistic is added to the inference statistics collection reported by the degirum.model.Model.time_stats() method. This statistic represents the size (in bytes) of the input frames received by the AI model for inference.
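
    For example, a minimal sketch of reading this statistic, assuming time_stats() returns a mapping keyed by statistic name:

    stats = model.time_stats()
    # CoreInputFrameSize_bytes is the statistic added in this release
    print(stats["CoreInputFrameSize_bytes"])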

Bug Fixes

  1. The degirum sys-info command re-initializes DeGirum Orca AI accelerator hardware in a way that is not interprocess-safe, disrupting the operation of other processes using the same Orca accelerator hardware. For example, when an AI server is running on a system equipped with Orca accelerator hardware and the degirum sys-info command is executed on the same system, the following error appears: "RPC control is in inconsistent state. Software recovery is not possible: please restart device".

  2. When performing AI inference of CPU-based quantized ONNX models using the OpenVINO runtime on host computers equipped with modern AMD CPUs such as AMD Ryzen, the following error message appears: "Check 'false' failed at src/inference/src/core.cpp:131: could not create a primitive ... [ERROR]<continued> Failed to compile".

  3. When performing AI inference of models with audio input frame type, the following error message appears: "There is no Image inputs in model".


Version 0.9.3 (10/02/2023)

New Features and Modifications

  1. Inference on Intel® Arc™ family of GPUs with OpenVINO™ runtime is initially supported.

  2. Pretty-printing is implemented for model inference statistics. If model is an instance of the degirum.model.Model class, then print(model.time_stats()) will print inference statistics in tabulated form, similar to the text below:

    Statistic                     ,    Min,    Avg,    Max,    Cnt
    PythonPreprocessDuration_ms   ,   5.00,   5.00,   5.00,      1
    CoreInferenceDuration_ms      , 349.94, 349.94, 349.94,      1
    CoreLoadResultDuration_ms     ,   0.02,   0.02,   0.02,      1
    CorePostprocessDuration_ms    ,   0.09,   0.09,   0.09,      1
    CorePreprocessDuration_ms     ,   2.78,   2.78,   2.78,      1
    DeviceInferenceDuration_ms    ,   0.00,   0.00,   0.00,      1
    FrameTotalDuration_ms         , 610.34, 610.34, 610.34,      1
    

    The str(model.time_stats()) expression returns the same text.

Bug Fixes

  1. Python post-processor support was broken: an attempt to specify a Python post-processor for a model led to the following error: Model postprocessor type is not known: Python. Starting from this version, Python post-processor support is restored as follows. If you want to use a Python post-processor, then:

    • you need to specify the name of the Python file with your Python post-processor implementation in the PythonFile parameter of the POST_PROCESS section;
    • you need to specify one of supported PySDK post-processor types in the OutputPostprocessType parameter of the POST_PROCESS section;
    • the result format generated by your Python post-processor must be compatible with the PySDK post-processor type specified in the OutputPostprocessType parameter.

    Currently supported post-processor types are:

    • "None"
    • "Classification"
    • "Detection"
    • "FaceDetection"
    • "PoseDetection"
    • "HandDetection"
    • "Segmentation"

    The corresponding result formats are described in the PySDK User's Guide, in the degirum.postprocessor.InferenceResults.results attribute description.
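
    For illustration, a hedged sketch of the corresponding fragment of a model configuration file (the exact JSON layout may differ between PySDK versions; my_postprocessor.py is a hypothetical file name):

    "POST_PROCESS": [
        {
            "OutputPostprocessType": "Detection",
            "PythonFile": "my_postprocessor.py"
        }
    ]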

    For security reasons, at the time of this release the DeGirum cloud platform does not allow regular accounts to upload models with Python post-processors to cloud model zoos: only cloud platform administrators can do it.

  2. The TFLite runtime plugin was missing from the PySDK package for Windows.


Version 0.9.2 (09/06/2023)

New Features and Modifications

  1. A plugin for the ONNX runtime is initially supported. This plugin allows performing inference of ONNX AI models directly on the AI server host CPU without any AI accelerator.

    The ONNX runtime delivers better performance than the OpenVINO™ runtime on ARM64 platforms, while the OpenVINO™ runtime delivers better performance than the ONNX runtime on x86-64 platforms.

  2. Default values for some model properties are changed. The following is the list of changes:

    • degirum.model.Model.input_image_format:

      Was: "JPEG" for cloud inference, "RAW" for all other inference types

      Now: "RAW" for local inference, "JPEG" for all other inference types

    • degirum.model.Model.input_numpy_colorspace:

      Was: "RGB"

      Now: "auto", meaning it will be "BGR" for OpenCV backend and "RGB" for PIL backend

  3. The meaning of the "auto" selection for the degirum.model.Model.image_backend property has changed:

    Was: try to use PIL first, and if not installed, use OpenCV

    Now: try to use OpenCV first, and if not installed, use PIL
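
    If your code relied on the previous defaults, you can pin them explicitly. A minimal sketch using the properties named above (assuming the backend selector accepts "pil"):

    model.input_image_format = "RAW"      # old default for non-cloud inference types
    model.input_numpy_colorspace = "RGB"  # old fixed default
    model.image_backend = "pil"           # prefer PIL, as "auto" did before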

  4. The AI server protocol is improved for robustness. To deal with unreliable network connections, the following retries have been implemented:

    • Retry on a client side when connecting to a server

    • Retry on a server side when connecting to a cloud zoo

  5. An in-memory cache size limit and cache eviction mechanism are implemented for the AI server model cache. This greatly improves AI server robustness when multiple models are requested for inference during the AI server lifetime: previously, loading too many different models caused host memory exhaustion and an AI server crash.

Bug Fixes

  1. If a cloud model zoo had capital letters in its name, it was impossible to load models from it. The following error message appeared in this case:

    DegirumException: Model zoo 'myorg/ZooWithCaps' is not found. (cloud server response: 400 Client Error: Bad Request for url: https://cs.degirum.com/zoo/v1/public/models/myorg/ZooWithCaps)

  2. The degirum trace list CLI command did not list all available traces. In particular, traces defined in runtime plugins were not included in the list.

  3. If the AI server host computer has an integrated GPU, this GPU, as well as the discrete GPU(s), is used by the OpenVINO runtime plugin for GPU-based inference. Since an integrated GPU typically has much lower performance than discrete GPUs, this led to significant performance degradation when an inference happened to be scheduled on the integrated GPU. Now the integrated GPU is ignored if a discrete GPU is present in the system.

  4. Loading too many different TFLite models caused AI server host memory exhaustion and an AI server crash. To mitigate this bug, an in-memory cache size limit and cache eviction mechanism are implemented.


Version 0.9.1 (08/04/2023)

IMPORTANT: This release contains critical bug fixes and changes incompatible with version 0.9.0. It is strongly recommended to upgrade from version 0.9.0 to version 0.9.1.

New Features and Modifications

  1. AI models of YOLOv8 family are redesigned to improve performance.

    New YOLOv8 models in the DeGirum public cloud zoo are incompatible with PySDK version 0.9.0, so in order to use YOLOv8 models you need to upgrade to version 0.9.1.

  2. Inference timeout handling is improved.

    • HTTP transactions of the AI server with the cloud zoo had infinite timeouts in previous PySDK versions. This could lead to arbitrarily long delays in case of a poor connection between the AI server and the cloud zoo. In the new version, all such HTTP transactions have finite timeouts.

    • The model-specific inference timeout is now passed to the cloud server, so the cloud server uses it instead of the generic 180-second timeout. This greatly improves cloud inference responsiveness.

Bug Fixes

  1. The following error message appears when you use PySDK version 0.9.0 and start the AI server on systems equipped with the ORCA1 AI hardware accelerator: Firmware image file 'orca-1.1.fi' is invalid or corrupt.

    This effectively prevents any usage of the ORCA1 AI hardware accelerator with PySDK version 0.9.0, so you need to upgrade to version 0.9.1.


Version 0.9.0 (07/25/2023)

IMPORTANT: This release has changes in PySDK API

New Features and Modifications

  1. AI models of YOLOv8 family are initially supported.

  2. The possibility to define and install a custom post-processor is implemented: the degirum.model.Model.custom_postprocessor property is added for this purpose.

    When you want to work with some new AI model and PySDK does not yet provide a post-processor class to interpret the model results, you may want to implement that post-processing code yourself.

    Such code typically takes the AI model output tensor data and interprets that raw tensor data to produce meaningful results such as bounding boxes, probabilities, etc. It then renders these results on top of the original image to produce the so-called image overlay.

    Starting from version 0.9.0, PySDK provides a way to seamlessly integrate such custom post-processing code so it will behave exactly like built-in post-processors. To do so, you need to complete the following two steps:

    1. Implement your own custom post-processor class.
    2. Instruct the AI model object to use your custom post-processing class instead of the built-in post-processor by assigning your new custom post-processor class to the degirum.model.Model.custom_postprocessor property.

    Please refer to PySDK User Guide 0.9.0 for more details.
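
    For illustration, a minimal sketch of these two steps, assuming the custom class derives from degirum.postprocessor.InferenceResults and that the reported result list is held in the _inference_results attribute (both assumptions; the tensor-parsing logic is hypothetical):

    import degirum as dg

    class MyResults(dg.postprocessor.InferenceResults):
        """Custom post-processor converting raw tensor data to result dicts."""

        def __init__(self, *args, **kwargs):
            super().__init__(*args, **kwargs)
            # (assumption) replace the reported results with your own
            # interpretation of the raw output tensor data
            self._inference_results = []

    # step 2: install the class so the model uses it instead of a built-in one
    model.custom_postprocessor = MyResults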

  3. A maximum size limit for the AI model runtime in-memory cache is implemented. When the total size of all loaded AI models exceeds this limit, the least recently used models are evicted from the cache.

  4. The PySDK model filtering functionality (used in the degirum.zoo_manager.ZooManager.list_models method and the download-zoo CLI command) is modified to deal with quantized models. Previously it analyzed the ModelQuantEn model parameter; now it looks for quant or float suffixes in the model name. This addresses the problem that models which are internally quantized but have floating-point input/output tensors have the ModelQuantEn parameter set to false.
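
    For example, a minimal client-side sketch of the same suffix convention, assuming zoo is a connected zoo manager object:

    # model names follow the quant/float suffix convention described above
    quantized = [name for name in zoo.list_models() if "quant" in name]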

Bug Fixes

  1. Concurrent access to Orca accelerator devices from multiple processes results in a segmentation fault.

    Steps to reproduce:

    1. Start the AI server on a system equipped with an Orca accelerator device.
    2. Run a Python script that performs AI inference of any Orca AI model on the AI server at localhost: zoo = dg.connect('localhost', ...)
    3. Run a Python script that performs AI inference of any Orca AI model directly on hardware: zoo = dg.connect(dg.LOCAL, ...)
    4. Repeat step 2.