Release Date: 11/2/2023
New Features and Modifications
Initial support for the HTTP+WebSocket AI server protocol is added to the DeGirum AI Server.
Starting from PySDK version 0.10.0, the AI server supports two protocols:

- The `asio` protocol is the DeGirum custom socket-based AI server protocol, supported by all previous PySDK versions.
- The `http` protocol is a new protocol based on REST HTTP requests and WebSocket streaming.
When you start the AI server by executing the `degirum server start` command, you specify the protocol using the `--protocol` parameter, which can be `asio`, `http`, or `both`.

If you omit this parameter, the `asio` protocol is used by default to provide behavior compatible with previous PySDK versions.
You select the `http` protocol by specifying `--protocol http`.
You may select both protocols by specifying `--protocol both`. In this case, the AI server listens on two consecutive TCP ports: the first port is used for the `asio` protocol, and the second port is used for the `http` protocol.
For example, the following command starts the AI server to serve models from a local model zoo directory, using the `asio` protocol on port 12345 and the `http` protocol on port 12346:
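A sketch of such an invocation (only `degirum server start` and `--protocol` are described above; the `--zoo` and `--port` option names and the zoo directory path are assumptions for illustration):

```sh
# Serve models from a local zoo directory on two consecutive ports:
# port 12345 handles asio, port 12346 (12345 + 1) handles http.
degirum server start --zoo ./models --port 12345 --protocol both
```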
On the client side, when you connect to the AI server with the `http` protocol, you have to prefix the AI server hostname with `http://`. To connect to the AI server with the `asio` protocol, you simply omit the protocol prefix.
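For example, a minimal sketch assuming `degirum.connect` as the client entry point and the ports from the server example above (hostname, ports, and any zoo/token arguments are placeholders):

```python
import degirum as dg

# Connect over the new http protocol: prefix the hostname with http://
http_zoo = dg.connect("http://localhost:12346")

# Connect over the legacy asio protocol: no protocol prefix
asio_zoo = dg.connect("localhost:12345")
```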
Now you may pass arbitrary model properties (properties of the `degirum.model.Model` class) as keyword arguments to the `degirum.zoo_manager.ZooManager.load_model` method. In this case, these properties are assigned to the returned model object.
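A short sketch of this usage (the server address and model name are placeholders; `output_confidence_threshold` is one example of a `degirum.model.Model` property):

```python
import degirum as dg

# Connect to an AI server (address is a placeholder)
zoo = dg.connect("localhost")

# Model properties passed as keyword arguments to load_model()
# are assigned to the returned model object.
model = zoo.load_model(
    "some_model_name",                # placeholder model name
    output_confidence_threshold=0.5,  # a degirum.model.Model property
)
```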
Initial support for multi-classifier (or multi-label) classification models is added. The post-processor type string, which is assigned to the `OutputPostprocessType` model parameter, is `"MultiLabelClassification"`. Each inference result dictionary contains the following keys:

- `classifier`: object class string.
- `results`: list of class labels and their scores. Scores are optional.

Each `results` list element is a dictionary with the following keys:

- `label`: class label string.
- `score`: optional class label probability.
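For illustration, one such result dictionary might look like the following (hypothetical labels and values):

```python
# One inference result dictionary for a multi-label classification
# model, using the keys described above:
result = {
    "classifier": "vehicle color",    # object class string
    "results": [
        {"label": "red", "score": 0.93},
        {"label": "maroon"},          # "score" is optional
    ],
}
```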
Bug Fixes

- Unclear error message `'NoneType' object has no attribute 'shape'` appears when supplying a non-existing file for model inference.
- Local AI inference of a model with a Python post-processor hangs on model destruction due to a Python GIL deadlock.
- The `degirum sys-info` command re-initializes DeGirum Orca AI accelerator hardware in a way that is not interprocess-safe, disrupting the operation of other processes using the same Orca accelerator hardware. The first attempt to fix this bug was made in PySDK version 0.9.6; this release finally fixes it.