System Configuration for Specific Use Cases

The PySDK package can be used in the following use cases:

  • Local inference: running AI inference on the local host, optionally with DeGirum AI accelerator hardware installed on that host
  • AI Server inference: running AI inference on a remote AI server host with DeGirum AI accelerator hardware installed on that remote host
  • Cloud inference: running AI inference on the DeGirum Cloud Platform, which automatically manages a set of AI computing nodes equipped with different types of AI accelerator hardware, including DeGirum Orca, Google EdgeTPU, and Intel Myriad
  • AI Server hosting: configuring a host with DeGirum AI accelerator hardware as an AI server for remote AI inference

The following sections provide step-by-step instructions on how to set up the system for each use case. For a detailed explanation of PySDK concepts, refer to the Model Zoo Manager section.

Configuration for Local Inference

  1. Install the PySDK package as described in the Basic Installation of PySDK Python Package guide.

  2. If your system is equipped with DeGirum AI accelerator hardware, install the kernel driver as described in the ORCA Driver Installation guide.

Note: If your system is not equipped with any AI accelerator hardware, the set of AI models available for local inference will be limited to CPU-only models.

  3. For local inference, call degirum.connect, passing the degirum.LOCAL constant or the "@local" string as the first argument.

  4. If you plan to use a DeGirum-hosted cloud model zoo, call degirum.connect, providing the cloud model zoo URL in the "https://cs.degirum.com[/<zoo URL>]" format and the cloud API access token as the second and third arguments:

import degirum as dg

zoo = dg.connect(dg.LOCAL, "https://cs.degirum.com/my_organization/my_zoo",
   token="<your cloud API access token>")

You can obtain the model zoo URL on the DeGirum Cloud Portal site https://cs.degirum.com under the Management | Models main menu item. If the <zoo URL> suffix is not specified, the DeGirum Public cloud model zoo will be used.

You can generate your token on the DeGirum Cloud Portal site https://cs.degirum.com under the Management | My Tokens main menu item.

  5. If you plan to use a locally deployed model, call degirum.connect, providing the full path to the model JSON file as the second argument and omitting the third argument:

zoo = dg.connect(dg.LOCAL, "full/path/to/model.json")
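The local-inference variants differ only in the second argument to degirum.connect: a cloud model zoo URL (with a token) or a path to a local model JSON file (without one). A minimal sketch of that dispatch; the helper names are illustrative, not part of PySDK:

```python
def is_cloud_zoo(model_source):
    # A cloud zoo source starts with the Cloud Platform URL; anything else
    # is treated as a full path to a local model JSON file.
    return model_source.startswith("https://cs.degirum.com")

def connect_local(model_source, token=None):
    # Connect for local inference against either source type.
    import degirum as dg  # imported lazily; requires the degirum package
    if is_cloud_zoo(model_source):
        return dg.connect(dg.LOCAL, model_source, token=token)
    return dg.connect(dg.LOCAL, model_source)  # local model JSON: no token

# Usage:
# zoo = connect_local("https://cs.degirum.com/my_organization/my_zoo",
#                     token="<your cloud API access token>")
# zoo = connect_local("full/path/to/model.json")
```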

Configuration for AI Server Hosting

  1. Install the kernel driver as described in ORCA Driver Installation guide.
  2. Follow instructions provided in Configuring and Launching AI Server section.

Configuration for AI Server Inference

Make sure your AI server host is already configured as described in the Configuration for AI Server Hosting section and is up and running.

  1. Install the PySDK package as described in the Basic Installation of PySDK Python Package guide.

  2. If you plan to use a model zoo locally deployed on the AI server host, call degirum.connect, providing just the hostname or IP address of the AI server host you want to use for inference:

zoo = dg.connect("192.168.0.118")

  3. If you plan to have the AI server take models from a cloud model zoo of your choice, call degirum.connect, providing the hostname or IP address of the AI server host as the first argument, the cloud zoo URL as the second argument, and the cloud API access token as the third argument. Specify the cloud model zoo URL in the "https://cs.degirum.com[/<zoo URL>]" format:

zoo = dg.connect("192.168.0.118", "https://cs.degirum.com/my_organization/my_zoo",
   token="<your cloud API access token>")

You can obtain the model zoo URL on the DeGirum Cloud Portal site under the Management | Models main menu item. If the <zoo URL> suffix is not specified, the DeGirum Public cloud model zoo will be used.

You can generate your token on the DeGirum Cloud Portal site under the Management | My Tokens main menu item.
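The two AI-server variants above pick different degirum.connect arguments depending on where the models come from. A sketch of that argument selection; the helper name and host address are illustrative, not part of PySDK:

```python
def server_connect_args(host, zoo_url=None, token=None):
    # Build (positional args, keyword args) for degirum.connect:
    # host only -> models from the zoo deployed locally on the AI server;
    # host + cloud zoo URL + token -> models from that cloud zoo.
    if zoo_url is None:
        return (host,), {}
    return (host, zoo_url), {"token": token}

# Usage (requires the degirum package and a running AI server):
# import degirum as dg
# args, kwargs = server_connect_args("192.168.0.118",
#     "https://cs.degirum.com/my_organization/my_zoo",
#     "<your cloud API access token>")
# zoo = dg.connect(*args, **kwargs)
```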

Configuration for Cloud Inference

Starting with ver. 0.3.0, PySDK supports inference on the DeGirum Cloud Platform, and starting with ver. 0.5.0, it supports cloud model zoo access.

The DeGirum Cloud Platform solves the Edge AI development problem by providing a toolchain to design, run, and evaluate ML models across different hardware platforms in the cloud.

DeGirum Cloud Platform includes the following components:

  1. The DeGirum Cloud Device Farm, accessible through the Cloud Application Server and PySDK
  2. The DeGirum Cloud Portal management site, cs.degirum.com
  3. Cloud model zoos hosted on the DeGirum Cloud Platform

The DeGirum Cloud Device Farm is a set of computing nodes with various AI hardware accelerators installed, including DeGirum Orca, Google Edge TPU, and Intel® Movidius™ Myriad™ VPU. The farm nodes are hosted by DeGirum.

The Cloud Application Server provides a web API to perform AI inference on Cloud Farm devices. Starting with ver. 0.3.0, PySDK supports the Application Server web API and provides the same level of functionality transparently to the end user: you may run exactly the same code on the Cloud Platform as was designed for traditional use cases such as local inference or AI server inference.

A cloud model zoo is a collection of AI models stored on the DeGirum Cloud Platform. A registered DeGirum Cloud Platform user can create and maintain multiple cloud zoos with either private or public access. A model zoo with private access is visible only to users belonging to the same organization. A model zoo with public access is visible to all registered users of the DeGirum Cloud Platform. DeGirum maintains the Public Model Zoo with an extensive set of AI models available free of charge to all registered users.

The DeGirum Cloud Portal management site provides a GUI to access various Cloud Platform assets, such as:

  • cloud API access tokens, which are required to access the Cloud Application Server through PySDK;
  • cloud model zoos belonging to the user's organization;
  • AI model compilers;
  • AI model evaluation tools;
  • PySDK documentation;
  • PySDK examples.

To get started with the DeGirum Cloud Platform, perform the following steps:

  1. Register on DeGirum Cloud platform using this link.

  2. Generate API access token as described in Generating Access Token guide.

  3. Once the token is generated, copy it to the clipboard by clicking the Copy button. Save the token string in a secure place: you will need it in later steps to access the Cloud Application Server via PySDK.

  4. If you are developing a new Python script with DeGirum PySDK, or you already have a Python script that uses DeGirum PySDK and you want to run it on the DeGirum Cloud Platform, find the line of code that invokes the degirum.connect method and change it the following way:

    import degirum as dg

    zoo = dg.connect(dg.CLOUD, "https://cs.degirum.com", token="<your cloud API access token>")
    

Here "https://cs.degirum.com" is the URL of the DeGirum Cloud Application Server, and "<your cloud API access token>" is the token string you generated in the previous step.

The rest of your code does not need any modification compared to the traditional use cases: PySDK support of the Cloud Platform is completely transparent to the end user.

If you specify just the "https://cs.degirum.com" URL, the DeGirum Public cloud model zoo will be used.

If you want to work with a cloud model zoo of your choice, specify the URL in the "https://cs.degirum.com/<zoo URL>" format. Here the <zoo URL> suffix is the name of the cloud model zoo in the form <organization>/<zoo>. You can obtain the model zoo URL suffix on the DeGirum Cloud Portal site under the Management | Models main menu item: select the model zoo you want to access to open its page, then click the copy button near the model zoo name to copy the URL suffix to the clipboard.
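The URL rules above (the bare server URL selects the Public zoo, an <organization>/<zoo> suffix selects a specific zoo) can be sketched as a small helper; the function name is illustrative, not part of PySDK:

```python
def cloud_zoo_url(zoo_suffix=None):
    # zoo_suffix is "<organization>/<zoo>"; None selects the DeGirum
    # Public cloud model zoo.
    base = "https://cs.degirum.com"
    return base if zoo_suffix is None else f"{base}/{zoo_suffix}"

# Usage (requires the degirum package and a valid token):
# import degirum as dg
# zoo = dg.connect(dg.CLOUD, cloud_zoo_url("my_organization/my_zoo"),
#                  token="<your cloud API access token>")
```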