Release Date: 07/25/2023
IMPORTANT: This release contains changes to the PySDK API
New Features and Modifications
Initial support for AI models of the YOLOv8 family is added.
The ability to define and install a custom post-processor is implemented: the degirum.model.Model.custom_postprocessor property is added for that purpose.
When you want to work with a new AI model for which PySDK does not yet provide a post-processor class to interpret model results, you may want to implement that post-processing code yourself.
Such code typically takes the AI model output tensor data and interprets that raw tensor data to produce meaningful results such as bounding boxes, probabilities, etc. It then renders these results on top of the original image to produce a so-called image overlay.
Starting from version 0.9.0, PySDK provides a way to seamlessly integrate such custom post-processing code so it will behave exactly like built-in post-processors. To do so, you need to complete the following two steps:
- Implement your own custom post-processor class.
- Instruct the AI model object to use your custom post-processing class instead of the built-in post-processor by assigning your new custom post-processor class to the degirum.model.Model.custom_postprocessor property.
Please refer to PySDK User Guide 0.9.0 for more details.
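As an illustration, the interpretation step such a custom post-processor performs might look like the sketch below. The class name, tensor layout, and result field names here are hypothetical, chosen for this example only; the actual base class and interface required for custom post-processors are described in the PySDK User Guide.

```python
# Hypothetical sketch of the parsing logic inside a custom post-processor.
# It interprets a flat output tensor as a list of detections, as described
# above. None of the names below are PySDK API.

class CustomDetectionResults:
    """Interprets a flat output tensor as a list of detection results."""

    # assumed layout: each detection occupies 6 consecutive values:
    # x1, y1, x2, y2, score, class_id
    VALUES_PER_DETECTION = 6

    def __init__(self, raw_tensor, score_threshold=0.5):
        self.results = []
        n = self.VALUES_PER_DETECTION
        for i in range(0, len(raw_tensor) - n + 1, n):
            x1, y1, x2, y2, score, class_id = raw_tensor[i : i + n]
            if score >= score_threshold:  # keep confident detections only
                self.results.append(
                    {
                        "bbox": [x1, y1, x2, y2],
                        "score": score,
                        "category_id": int(class_id),
                    }
                )

raw = [
    10, 20, 110, 220, 0.9, 1,  # confident detection, kept
    0, 0, 5, 5, 0.1, 2,        # low score, filtered out
]
res = CustomDetectionResults(raw)
print(res.results)
```

With PySDK, you would then assign the class itself (not an instance) to the model's custom_postprocessor property, as described above, so that inference results are produced by your class.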
A maximum size limit for the AI model runtime in-memory cache is implemented. When the total size of all loaded AI models exceeds this limit, the least recently used models are evicted from the cache.
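The eviction policy can be pictured with a small size-bounded LRU cache. The class below is a generic illustration of the technique, not PySDK internals:

```python
from collections import OrderedDict


class SizeLimitedLRUCache:
    """Keeps the total size of cached models under a limit by evicting
    the least recently used entries first."""

    def __init__(self, max_total_size):
        self._max = max_total_size
        self._items = OrderedDict()  # name -> size; dict order = LRU order
        self._total = 0

    def use(self, name):
        # using a model marks it as most recently used
        if name in self._items:
            self._items.move_to_end(name)

    def put(self, name, size):
        if name in self._items:
            self._total -= self._items.pop(name)
        self._items[name] = size
        self._total += size
        # evict least recently used models until back under the limit
        while self._total > self._max and len(self._items) > 1:
            _, evicted_size = self._items.popitem(last=False)
            self._total -= evicted_size


cache = SizeLimitedLRUCache(max_total_size=100)
cache.put("model_a", 60)
cache.put("model_b", 30)
cache.use("model_a")      # model_a becomes most recently used
cache.put("model_c", 30)  # total would be 120: model_b is evicted
print(list(cache._items))
```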
The PySDK model filtering functionality (used in the download-zoo CLI command) is modified to deal with quantized models. Before, it analyzed the ModelQuantEn model parameter. Now it looks for float suffixes in the model name. This is done to address the problem where models which are internally quantized have floating-point input/output tensors, and the ModelQuantEn model parameter for such models is set to false.
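The name-based check can be sketched as follows. The function name and the exact naming convention (quantization type encoded as an underscore-separated token such as "float") are assumptions for illustration; the model names shown are made up:

```python
def is_float_model(model_name: str) -> bool:
    """Classify a model as floating-point by its name suffix rather than
    by the ModelQuantEn parameter, which can be misleading for internally
    quantized models with floating-point input/output tensors."""
    # assumed convention: one underscore-separated token starts with "float"
    return any(tok.startswith("float") for tok in model_name.split("_"))


print(is_float_model("mobilenet_v2_imagenet--224x224_float_n2x_orca1_1"))
print(is_float_model("mobilenet_v2_imagenet--224x224_quant_n2x_orca1_1"))
```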
Bug Fixes
Concurrent access to Orca accelerator devices from multiple processes results in a segmentation fault.
Steps to reproduce:
1. Start the AI server on a system equipped with an Orca accelerator device.
2. Run a Python script which performs AI inference of any Orca AI model on the AI server at localhost:
zoo = dg.connect('localhost', ...)
3. Run a Python script which performs AI inference of any Orca AI model directly on hardware:
zoo = dg.connect(dg.LOCAL, ...)
4. Repeat step 2.