Quickstart
Note
This quick start guide covers the cloud inference use case, in which you run AI inference on the DeGirum Cloud Platform using DeGirum AI accelerator hardware installed in the device farm hosted by DeGirum.
Basic Inference Example
To start working with PySDK, you first import the `degirum` package.
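For example (the `dg` alias used throughout the snippets below is just a convention, not a requirement):

```python
import degirum as dg
```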
The main PySDK entry point is the `degirum.connect` function, which creates and returns a `degirum.zoo_manager.ZooManager` object. When instantiated as in the sketch below, the zoo manager automatically connects to the DeGirum public cloud model zoo, and you have free access to all AI models in that zoo. However, to access the public cloud zoo you need a cloud API access token, which you can generate on the DeGirum Cloud Portal under the Management | My Tokens main menu item (see Generating Access Token for instructions).
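A minimal sketch, assuming a PySDK version where `degirum.CLOUD` selects cloud inference; the token string is a placeholder for your own cloud API access token, and the exact `connect()` arguments may differ between PySDK versions:

```python
import degirum as dg

# Connect to the DeGirum Cloud Platform and its public model zoo.
# Replace the placeholder with the token generated on the DeGirum Cloud Portal
# (Management | My Tokens).
zoo = dg.connect(dg.CLOUD, token="<your cloud API access token>")
```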
To perform AI inference with a model, you first load it using the `degirum.zoo_manager.ZooManager.load_model` method. You pass the model name as the method argument; the name must be one of the model names returned by the `list_models()` method. The `load_model()` method returns a `degirum.model.Model` object, which is then used to perform AI inferences.
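For example (the model name below is only an illustration; substitute any name returned by `list_models()`):

```python
# List the names of all models available in the connected model zoo
print(zoo.list_models())

# Load one of them by name (example name; pick one from the list printed above)
model = zoo.load_model("mobilenet_v2_ssd_coco--300x300_quant_n2x_orca1_1")
```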
To run inference, you either invoke the `degirum.model.Model.predict` method or simply call the model object, passing the input image as the argument; the inference result is returned. The input image can be specified in various formats (all of which are shown in the sketch after this list):
- as a string containing the file name of an image file on the local file system
- as a string containing the URL of an image file
- as a PIL image object
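A sketch showing the three input variants; the file path and URL are placeholders, not files shipped with PySDK:

```python
from PIL import Image

# 1. Local file path (placeholder path)
result = model("./images/example.jpg")

# 2. URL of an image file (placeholder URL)
result = model("https://example.com/images/example.jpg")

# 3. PIL image object
pil_image = Image.open("./images/example.jpg")
result = model(pil_image)

# Calling the model directly is equivalent to invoking model.predict()
result = model.predict(pil_image)
```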
The result object returned by the model (an object derived from the `degirum.postprocessor.InferenceResults` class) contains the following information:
- numeric inference results
- graphical inference results
- original image
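Numeric results can be printed directly or accessed as plain Python data, for example (the exact dictionary keys depend on the model type):

```python
# Human-readable summary of the inference results
print(result)

# Numeric results as a list of dictionaries, one entry per detected object
print(result.results)
```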
Graphical results can be accessed via the `degirum.postprocessor.InferenceResults.image_overlay` property. This property returns a graphical object containing the original image with all inference results drawn over it. Once you have this object, you may display it, print it, or save it to a file using the graphical package of your choice. For example, for PIL:
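(The sketch below continues the example above and assumes the model's `image_backend` is set to `"pil"`, so `image_overlay` returns a `PIL.Image` object; the file paths are placeholders.)

```python
# Ask the model to return images as PIL objects
model.image_backend = "pil"

result = model("./images/example.jpg")   # placeholder path

overlay = result.image_overlay           # original image with results drawn on top
overlay.save("./images/annotated.jpg")   # save it with PIL...
overlay.show()                           # ...or display it
```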
The original image can be accessed via the `degirum.postprocessor.InferenceResults.image` property, which returns a graphical object whose type again depends on the graphical package selected by the model's `image_backend` attribute.
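Continuing the example above (with the `"pil"` backend, `image` is a `PIL.Image` object):

```python
original = result.image   # the original input image, without overlays
print(original.size)      # e.g. (width, height) in pixels
```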
Running PySDK Examples
DeGirum maintains the PySDKExamples repo, which contains several Jupyter notebooks illustrating how edge AI applications can be built with PySDK. The example notebooks can run ML inference using the following hosting options:
- On the DeGirum Cloud Platform,
- On a DeGirum AI Server deployed on localhost or on another computer in your LAN or VPN,
- On a DeGirum ORCA accelerator installed directly in your local computer.
To try a different option, you only need to uncomment one of the lines in the code cell just below the "Specify where do you want to run your inferences" header, as illustrated below.
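For illustration only; the actual code cell in the notebooks may look different, and the AI Server host name below is a placeholder:

```python
import degirum as dg

# Uncomment exactly one of the following lines to choose where inference runs:

# 1. On the DeGirum Cloud Platform
# zoo = dg.connect(dg.CLOUD, token="<your cloud API access token>")

# 2. On a DeGirum AI Server in your LAN or VPN ("aiserver.local" is a placeholder)
# zoo = dg.connect("aiserver.local")

# 3. On a DeGirum ORCA accelerator installed in this computer
# zoo = dg.connect(dg.LOCAL)
```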