Quickstart

Note

This quickstart guide covers the Cloud Inference use case, in which you run AI inferences on the DeGirum Cloud Platform using DeGirum AI accelerator hardware installed in the device farm hosted by DeGirum.

Basic Inference Example

import degirum as dg
zoo = dg.connect(dg.CLOUD, "https://cs.degirum.com", "<cloud token>")
model = zoo.load_model("mobilenet_v2_ssd_coco--300x300_quant_n2x_orca_1")
result = model("https://docs.degirum.com/images/samples/TwoCats.jpg")
display(result.image_overlay)  # display() is available in Jupyter/IPython notebooks
  1. To start working with PySDK, you import the degirum package.

  2. The main PySDK entry point is the degirum.connect function, which creates and returns a degirum.zoo_manager.ZooManager object:

    • When instantiated this way, the zoo manager automatically connects to the DeGirum public cloud model zoo, and you have free access to all AI models in that zoo. However, to access the public cloud zoo you need a cloud API access token, which you can generate on the DeGirum Cloud Portal under the Management | My Tokens main menu item (see Generating Access Token for instructions).
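
    If you prefer not to hard-code the token in your script, you can read it from the environment instead. A minimal sketch, assuming you stored your token in a DEGIRUM_CLOUD_TOKEN environment variable (the variable name is illustrative, not a PySDK convention):

    Connecting with a token from the environment

    import os
    import degirum as dg

    # read the cloud API access token from an environment variable
    token = os.environ["DEGIRUM_CLOUD_TOKEN"]
    zoo = dg.connect(dg.CLOUD, "https://cs.degirum.com", token)
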
  3. To perform AI inference using some model, you need to load it using the degirum.zoo_manager.ZooManager.load_model method. This method returns a degirum.model.Model object, which can be used to perform AI inferences. You provide the model name as the method argument; it should be one of the model names returned by the list_models() method.

    Listing all models in a zoo

    To see the list of all AI models available for inference in the public model zoo, use degirum.zoo_manager.ZooManager.list_models method. It returns a list of strings, where each string is a model name:

    model_list = zoo.list_models()
    print(model_list)
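
    Model names encode attributes such as input resolution and target hardware (as in mobilenet_v2_ssd_coco--300x300_quant_n2x_orca_1 above), so plain string filtering is often enough to narrow the list. A small sketch using only standard Python on the list returned above:

    Filtering the model list by name

    # keep only the models whose name mentions "mobilenet"
    mobilenet_models = [name for name in model_list if "mobilenet" in name]
    print(mobilenet_models)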
    

  4. To perform inference on a model, you either invoke the degirum.model.Model.predict method or simply call the model, passing the input image as an argument; the inference result is returned. The input image can be specified in various formats:

    • as a string containing the file name of the image file on the local file system:

      Inference on a local file

      result = model("./images/TwoCats.jpg")
      
    • as a string containing URL of the image file:

      Inference on a URL

      result = model("https://docs.degirum.com/images/samples/TwoCats.jpg")
      
    • as a PIL image object:

      Inference on a PIL image object

      from PIL import Image
      image = Image.open("./images/TwoCats.jpg")
      result = model(image)
      
  5. The result object returned by the model (an object derived from the degirum.postprocessor.InferenceResults class) contains the following information:

    • numeric inference results
    • graphical inference results
    • original image
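
    Numeric results can be inspected directly. A minimal sketch, assuming the degirum.postprocessor.InferenceResults.results property, which holds a list of dictionaries, one per detected object:

    Inspecting numeric inference results

    # each element describes one detection (label, score, bounding box, ...)
    for detection in result.results:
        print(detection)

    # the result object also has a readable string representation
    print(result)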

    Graphical results can be accessed via the degirum.postprocessor.InferenceResults.image_overlay property. This property returns a graphical object containing the original image with all inference results drawn over it. Once you have this object, you may display it, print it, or save it to a file using the graphical package of your choice. For example, with PIL:

    Visualizing inference results

    result_image = result.image_overlay
    result_image.save("./images/TwoCatsResults.jpg")
    result_image.show()
    

    The original image can be accessed via the degirum.postprocessor.InferenceResults.image property, which returns a graphical object whose type again depends on the graphical package specified by the model's image_backend property.
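
    For example, the graphical backend can be selected before running the inference. A short sketch, assuming image_backend accepts the value "pil", so that both image and image_overlay return PIL image objects:

    Selecting the graphical backend and saving the original image

    model.image_backend = "pil"  # assumed value; makes results PIL objects
    result = model("https://docs.degirum.com/images/samples/TwoCats.jpg")

    # with this backend, result.image is the original image as a PIL object
    result.image.save("./images/TwoCatsOriginal.jpg")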

Running PySDK Examples

DeGirum maintains the PySDKExamples repo, which contains several Jupyter notebooks illustrating how edge AI applications can be built using PySDK. The example notebooks can perform ML inferences using the following hosting options:

  1. Using DeGirum Cloud Platform,
  2. On a DeGirum AI Server deployed on your localhost or on some computer in your LAN or VPN,
  3. On DeGirum ORCA accelerator directly installed on your local computer.

To try a different hosting option, just uncomment the corresponding line in the code cell below the "Specify where do you want to run your inferences" header.

Go to PySDKExamples Repo