Quick Start
Note: This quick start guide covers the local inference use case, in which you run AI inferences on the local host, optionally with DeGirum AI accelerator hardware installed on that host. See the System Configuration for Specific Use Cases section for more use cases.
Install the PySDK package as described in the Basic Installation of PySDK Python Package guide.
If your system is equipped with DeGirum AI accelerator hardware, install the kernel driver as described in the ORCA Driver Installation guide.
Note: If your system is not equipped with any AI accelerator hardware, the set of AI models available for local inference will be limited to CPU models.
To start working with PySDK, import the degirum package:
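```python
import degirum as dg  # "dg" is the alias used throughout this guide
```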
The main PySDK entry point is the degirum.connect function, which creates and returns a degirum.zoo_manager.ZooManager zoo manager object (for a detailed explanation of PySDK concepts, refer to the Model Zoo Manager section):
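A minimal sketch, assuming a recent PySDK version; the exact arguments may differ between versions, and the token string is a placeholder for the cloud API access token described below:

```python
import degirum as dg

# run inferences on the local host; with no zoo URL given, the zoo manager
# connects to the DeGirum public cloud model zoo using your access token
# (the token string below is a placeholder, not a real token)
zoo = dg.connect(dg.LOCAL, token="<your cloud API access token>")
```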
When instantiated this way, the zoo manager automatically connects to the DeGirum public cloud model zoo, and you have free access to all AI models in this public model zoo. However, to access the public cloud zoo you need a cloud API access token, which you can generate on the DeGirum Cloud Portal site https://cs.degirum.com under the Management | My Tokens main menu item (see more on that in the Configuration for Cloud Inference section).
To see the list of all AI models available for inference in the public model zoo, use the degirum.zoo_manager.ZooManager.list_models method. It returns a list of strings, where each string is a model name:
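```python
model_list = zoo.list_models()  # list of model name strings
print(model_list)
```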
If you want to perform AI inference using some model, you need to load it using the degirum.zoo_manager.ZooManager.load_model method, passing the model name as the argument. The model name should be one of the names returned by the degirum.zoo_manager.ZooManager.list_models method, for example:
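(The model name below is a hypothetical placeholder; use any name returned by list_models.)

```python
model = zoo.load_model("mobilenet_v2_ssd_coco--300x300_quant_n2x_orca1_1")  # hypothetical model name
```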
The degirum.zoo_manager.ZooManager.load_model method returns a degirum.model.Model object, which can be used to perform AI inferences.
Before performing AI inferences, you may want to adjust some model parameters. The model class has a list of parameters that can be modified at runtime. These parameters affect how the input data is pre-processed and how the inference results are post-processed. All model parameters have reasonable default values, so you may skip this step at first.
Some useful model parameters are:
Property Name | Description | Possible Values |
---|---|---|
degirum.model.Model.image_backend | image processing package to be used | "auto", "pil", or "opencv"; "auto" tries OpenCV first |
degirum.model.Model.input_pad_method | how the input image will be padded or cropped when resized | "stretch", "letterbox", "crop-first", or "crop-last" |
degirum.model.Model.input_crop_percentage | percentage of image dimensions to crop around if input_pad_method is set to "crop-first" or "crop-last" | Float value in [0..1] range |
degirum.model.Model.output_confidence_threshold | confidence threshold to reject results with low scores | Float value in [0..1] range |
degirum.model.Model.output_nms_threshold | rejection threshold for non-max suppression | Float value in [0..1] range |
degirum.model.Model.overlay_color | color to draw AI results | Tuple in (R,G,B) format or list of tuples in (R,G,B) format |
degirum.model.Model.overlay_font_scale | font scale to print AI results | Float value |
degirum.model.Model.overlay_show_labels | True to show class labels when drawing AI results | True/False |
degirum.model.Model.overlay_show_probabilities | True to show class probabilities when drawing AI results | True/False |
For the complete list of model parameters, see the Model Parameters section.
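For example, to adjust a few of the parameters from the table above (a sketch; the values are illustrative only):

```python
model.image_backend = "pil"               # use PIL for image handling
model.output_confidence_threshold = 0.5   # reject results scoring below 0.5
model.overlay_show_probabilities = True   # print class probabilities on the overlay
```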
Now you are ready to perform AI inference. To run an inference, either invoke the degirum.model.Model.predict method or simply call the model object, supplying the input image as an argument. The inference result is returned.
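Both forms are equivalent (the image path reuses the example file from this guide):

```python
result = model.predict("./images/TwoCats.jpg")
result = model("./images/TwoCats.jpg")  # shorthand for model.predict(...)
```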
A model may accept input images in various formats:
- as a string containing the file name of the image file on the local file system:
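```python
result = model("./images/TwoCats.jpg")  # local file path from this guide's examples
```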
- as a string containing URL of the image file:
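```python
result = model("https://example.com/images/TwoCats.jpg")  # hypothetical URL
```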
- as a PIL image object:
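```python
from PIL import Image  # assumes the Pillow package is installed

image = Image.open("./images/TwoCats.jpg")
result = model(image)
```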
- as a numpy array (for example, returned by OpenCV):
```python
import cv2

image = cv2.imread("./images/TwoCats.jpg")
model.input_numpy_colorspace = "BGR"  # set colorspace to match OpenCV-produced numpy array
result = model(image)
```
The result object returned by the model (an object derived from the degirum.postprocessor.InferenceResults class) contains the following information:
- numeric inference results
- graphical inference results
- original image
Numeric results can be accessed via the degirum.postprocessor.InferenceResults.results property. This property returns a list of result dictionaries, one dictionary per detected object or class. The format of these dictionaries is model-dependent. For example, to iterate over all classification model inference results you may do this:
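(A sketch; the "label" and "score" keys are assumptions about a typical classification result dictionary, and the actual keys depend on your model.)

```python
for r in result.results:
    print(f"{r['label']}: {r['score']}")  # assumed keys; check your model's result format
```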
Tip: if you just print your inference result object, all the numeric results will be pretty-printed in YAML format:
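```python
print(result)  # numeric results are pretty-printed in YAML format
```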
Graphical results can be accessed via the degirum.postprocessor.InferenceResults.image_overlay property. This property returns a graphical object containing the original image with all inference results drawn over it. The type of the graphical object depends on the graphical package specified by the model image_backend property (if you leave it at the default, OpenCV is used when installed, otherwise PIL). Once you get this object, you may display it, print it, or save it to a file using the graphical package of your choice. For example, for PIL:
```python
# assumes model.image_backend is "pil", so image_overlay returns a PIL.Image object
result_image = result.image_overlay
result_image.save("./images/TwoCatsResults.jpg")  # save the annotated image to a file
result_image.show()  # open the annotated image in the default viewer
```
The original image can be accessed via the degirum.postprocessor.InferenceResults.image property, which returns a graphical object whose type again depends on the graphical package specified by the model image_backend property.
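For example (a sketch; the returned type follows the image_backend setting as described above):

```python
original = result.image  # numpy array for the OpenCV backend, PIL.Image for the PIL backend
```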