# Quickstart

!!! note

    This quick start guide covers the Hosted Inference use case, when you run AI inferences on the DeGirum AI Hub with DeGirum AI accelerator hardware installed in the device farm hosted by DeGirum.

## Basic Inference Example

```py linenums="1"
import degirum as dg
zoo = dg.connect(dg.CLOUD, "https://hub.degirum.com", "<cloud token>")
model = zoo.load_model("mobilenet_v2_ssd_coco--300x300_quant_n2x_orca_1")
result = model("https://docs.degirum.com/images/samples/TwoCats.jpg")
display(result.image_overlay)
```

  1. To start working with PySDK, you import the `degirum` package.

  2. The main PySDK entry point is the `degirum.connect` function, which creates and returns a `degirum.zoo_manager.ZooManager` object:

    - When instantiated this way, the zoo manager automatically connects to the DeGirum Model Zoo, and you have free access to all AI models from this public model zoo. However, to access the public model zoo, you need an **API access token**, which you can generate on the [DeGirum AI Hub Portal](https://hub.degirum.com) under the *Management | My Tokens* main menu (see [Generating Access Token](../hub/token.md) for instructions). A sketch of one way to handle the token follows below.
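
    Since hard-coding a token in scripts makes it easy to leak, here is a minimal sketch that reads it from an environment variable instead; the `DEGIRUM_CLOUD_TOKEN` variable name is just an illustration for this sketch, not a name PySDK itself recognizes:

    !!! example "Connecting with a token from the environment"

        ```python
        import os

        import degirum as dg

        # Read the cloud API access token from an environment variable
        # (DEGIRUM_CLOUD_TOKEN is a hypothetical name chosen for this sketch).
        token = os.getenv("DEGIRUM_CLOUD_TOKEN")
        zoo = dg.connect(dg.CLOUD, "https://hub.degirum.com", token)
        ```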
    
  3. To perform AI inference using some model, you need to load it using the `degirum.zoo_manager.ZooManager.load_model` method. The `load_model()` method returns a `degirum.model.Model` object, which can be used to perform AI inferences. You provide the model name as a method argument; it should be one of the model names returned by the `list_models()` method.

    !!! tip "Listing all models in a zoo"
        To see the list of all AI models available for inference in the public model zoo, use the `degirum.zoo_manager.ZooManager.list_models` method. It returns a list of strings, where each string is a model name:
        ```python
        model_list = zoo.list_models()
        print(model_list)
        ```
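
        If the list is long, you can narrow it down with ordinary string matching; here is a minimal sketch (the `"mobilenet"` substring below is purely illustrative):

        ```python
        model_list = zoo.list_models()

        # Keep only models whose names contain a given substring
        # (the "mobilenet" filter is just an example).
        matching = [name for name in model_list if "mobilenet" in name]
        print(matching)
        ```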
    
  4. To perform inference on a model, you either invoke the `degirum.model.Model.predict` method or simply call the model object, passing the input image as an argument; the inference result is returned (an explicit `predict()` call is sketched after the list below). The input image can be specified in various formats:

    - as a string containing the file name of the image file on the local file system:

      !!! example "Inference on a local file"

            ```python
            result = model("./images/TwoCats.jpg")
            ```

    - as a string containing URL of the image file:
    
      !!! example "Inference on a URL"
    
            ```python
            result = model("https://docs.degirum.com/images/samples/TwoCats.jpg")
            ```
    
    - as a PIL image object:

      !!! example "Inference on a PIL image object"

            ```python
            from PIL import Image

            image = Image.open("./images/TwoCats.jpg")
            result = model(image)
            ```
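
    As noted above, calling the model object is shorthand for the `predict` method; here is a minimal sketch of the equivalent explicit call:

    !!! example "Inference using the predict() method"

        ```python
        # Equivalent to calling the model object directly.
        result = model.predict("./images/TwoCats.jpg")
        ```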

  5. The result object returned by the model (an object derived from the `degirum.postprocessor.InferenceResults` class) contains the following information:

    - numeric inference results
    - graphical inference results
    - original image
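
    Numeric results can be inspected simply by printing the result object; here is a minimal sketch (the exact fields shown depend on the model type):

    !!! example "Printing numeric inference results"

        ```python
        # For a detection model such as the one above, this prints
        # labels, confidence scores, and bounding boxes as text.
        print(result)
        ```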
    
    Graphical results can be accessed via the `degirum.postprocessor.InferenceResults.image_overlay` property. This property returns a graphical object containing the original image with all inference results drawn over it. Once you get this object, you may display it, print it, or save it to a file using the graphical package of your choice. For example, with PIL:
    
    !!! example "Visualizing inference results"
    `python
    

    result_image = result.image_overlay result_image.save("./images/TwoCatsResults.jpg") result_image.show() `

    The original image can be accessed via the `degirum.postprocessor.InferenceResults.image` property, which returns a graphical object whose type depends on the graphical package specified for the model's `image_backend` property.
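
    For instance, here is a minimal sketch, assuming the PIL image backend, that saves the original (unannotated) image; the output file name is just an illustration:

    !!! example "Saving the original image"

        ```python
        # With the PIL image backend, result.image is a PIL.Image object.
        original_image = result.image
        original_image.save("./images/TwoCatsOriginal.jpg")
        ```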
    

## Running PySDK Examples

DeGirum maintains the PySDKExamples repo, which contains several Jupyter notebooks illustrating how edge AI applications can be built using PySDK. The example notebooks can perform ML inferences using the following hosting options:

  1. Using DeGirum AI Hub,
  2. On DeGirum AI Server deployed on a localhost or on some computer in your LAN or VPN,
  3. On DeGirum ORCA accelerator directly installed on your local computer.

To try a different option, you just need to uncomment the corresponding line in the code cell just below the "Specify where do you want to run your inferences" header; a sketch of such a cell follows.
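
Here is a minimal sketch of what such a cell might look like; the `dg.CLOUD`/`dg.LOCAL` constants and the placeholder hostname are illustrative assumptions about the cell's layout, not its exact contents:

```python
import degirum as dg

# Uncomment exactly one of the lines below to choose a hosting option.

# 1. DeGirum AI Hub:
inference_host = dg.CLOUD

# 2. AI Server on localhost or on your LAN/VPN (placeholder hostname):
# inference_host = "192.168.0.100"

# 3. DeGirum ORCA accelerator installed on the local computer:
# inference_host = dg.LOCAL

zoo = dg.connect(inference_host, "https://hub.degirum.com", "<cloud token>")
```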

Go to PySDKExamples Repo