Getting Started with DeGirumJS

Welcome to the DeGirum JavaScript AI Inference SDK! This guide will help you get started with integrating AI inference capabilities into your web application. Follow the steps below to set up your environment, connect to the AI server, and run inference on an image.

Table of Contents

Introduction
Setup
Basic Usage
Displaying Results
Simple Example HTML Page
Model Options
Model Cleanup
Device Management for Inference
Measure Time Usage
API Reference

Introduction

The JavaScript AI Inference SDK allows you to connect to AI Server or Cloud Zoo instances, load AI models, and perform inference on various data types. This guide provides a step-by-step tutorial on how to use the SDK effectively.

Setup

Import the SDK

To start using the SDK, include the following script tag in your HTML file:

<script src="https://docs.degirum.com/degirumjs/0.0.9/degirum-js.min.obf.js"></script>

Basic Usage

Connect to an AI Server

Instantiate the dg_sdk class and connect to the AI server using the connect method:

let dg = new dg_sdk();
const AISERVER_IP = 'ws://localhost:8779';

let zoo = dg.connect(AISERVER_IP);

For running AI Server inference on cloud models, include the URL of the cloud zoo and your token:

let dg = new dg_sdk();
const AISERVER_IP = 'ws://localhost:8779';
const ZOO_URL = 'https://cs.degirum.com/degirum/public';
const secretToken = prompt('Enter secret token:');

let zoo = dg.connect(AISERVER_IP, ZOO_URL, secretToken);

Connect to the Cloud

For running Cloud inference, specify 'cloud' as the first argument, and include the URL of the cloud zoo and your token:

let dg = new dg_sdk();
const ZOO_URL = 'https://cs.degirum.com/degirum/public';
const secretToken = prompt('Enter secret token:');

let zoo = dg.connect('cloud', ZOO_URL, secretToken);

Load a Model

Now you can load a model using the loadModel method of the zoo instance:

const MODEL_NAME = 'yolo_v5s_face_det--512x512_quant_n2x_cpu_1';
const modelOptions = {
    inputPadMethod: 'stretch'
};

let model = await zoo.loadModel(MODEL_NAME, modelOptions);

Perform Inference

Use the predict method to perform inference on an input image:

const image = 'path/to/your/image.jpg'; // replace with your image URL, path, or File object
const result = await model.predict(image);
console.log('Result:', result);

Displaying Results

You can display prediction results on an HTMLCanvasElement:

// Assuming your Canvas Element has the id 'outputCanvas'
let canvas = document.getElementById('outputCanvas');
model.displayResultToCanvas(result, canvas);

Understanding the Result Object Structure

The result object contains the predictions made by the model, such as detected objects, classes, probabilities, bounding boxes, and more. For a detailed breakdown of the structure and properties of the result object, please refer to the Result Object Structure documentation.
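
As an illustration, the sketch below logs a result to the browser console; the field names in the final comment are assumptions for illustration only, so refer to the Result Object Structure documentation for the authoritative layout.

// Inspect a prediction result in the browser console
const result = await model.predict(image);
console.log(JSON.stringify(result, null, 2));
// A detection entry typically includes a class label, a confidence score, and a
// bounding box, e.g. { label: 'face', score: 0.97, bbox: [x1, y1, x2, y2] } (illustrative)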

Simple Example HTML Page

To get started with a simple example page, we need the following HTML elements on the page:

The script tag to import DeGirumJS.
A canvas element to display inference results.
An input element to browse and upload images.

Here is an HTML page that performs inference on uploaded images and displays the results:

<script src="https://docs.degirum.com/degirumjs/0.0.9/degirum-js.min.obf.js"></script>
<canvas id="outputCanvas" width="400" height="400"></canvas>
<input type="file" id="imageInput" accept="image/*">
<script type="module">
    // Grab the outputCanvas and imageInput elements by ID:
    const canvas = document.getElementById('outputCanvas');
    const input = document.getElementById('imageInput');
    
    // Initialize the SDK
    let dg = new dg_sdk();
    // Query the user for the cloud token:
    const secretToken = prompt('Enter secret token:');
    // Inference settings
    const MODEL_NAME = 'yolo_v5s_face_det--512x512_quant_n2x_cpu_1';
    const ZOO_URL = 'https://cs.degirum.com/degirum/public';
    const AISERVER_IP = 'ws://localhost:8779';
    
    // Connect to the AI server, using the cloud zoo for models
    let zoo = dg.connect(AISERVER_IP, ZOO_URL, secretToken);
    
    // Model options
    const modelOptions = {
        overlayShowProbabilities: true
    };
    // Load the model with the options
    let model = await zoo.loadModel(MODEL_NAME, modelOptions);
    
    // Function to run inference on uploaded files
    input.onchange = async function () {
        let file = input.files[0];
        // Predict
        let result = await model.predict(file);
        console.log('Result from file:', result);
        // Display result to canvas
        model.displayResultToCanvas(result, canvas);
    }
</script>

Model Options

When loading a model, you can specify various options to customize its behavior. Options used in this guide include inputPadMethod, overlayShowProbabilities, deviceType, and measureTime; see the API Reference for the complete list. A combined example is shown below.
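
For example, the sketch below combines options that appear elsewhere in this guide into a single loadModel call; the model name is a placeholder:

const modelOptions = {
    inputPadMethod: 'stretch',        // how the input image is resized/padded
    overlayShowProbabilities: true,   // draw confidence scores on the result overlay
    deviceType: 'RUNTIME2/CPU',       // requested runtime/device combination
    measureTime: true                 // collect timing statistics (see Measure Time Usage)
};
let model = await zoo.loadModel('your_model_name', modelOptions);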

Model Cleanup

To destroy or clean up a model instance, use the cleanup method:

await model.cleanup();

This will stop all running inferences and clean up resources used by the model instance.
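
For example, in a long-running page you might release the current model before loading a different one (a minimal sketch; the model names are placeholders):

let model = await zoo.loadModel('your_model_name');
// ... run some inferences ...
await model.cleanup();  // stop pending work and free resources
model = await zoo.loadModel('another_model_name');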

Device Management for Inference

Both AIServerModel and CloudServerModel classes offer flexible ways to manage device types, allowing you to configure and switch between devices dynamically.

Supported Device Types

Each model has a set of SupportedDeviceTypes, which indicates the runtime/device combinations that are compatible for inference. The format for device types is "RUNTIME/DEVICE", where RUNTIME identifies the runtime agent and DEVICE identifies the hardware it runs on (for example, 'RUNTIME2/CPU').

AIServerModel / CloudServerModel Device Management

In the AIServerModel and CloudServerModel classes, device management is integrated into both the initialization and runtime phases of the model lifecycle. Below are key scenarios and examples:

  1. Default Device Type Selection: When you load a model without specifying a device type, the default device type specified in the model parameters is selected.

    let model = await zoo.loadModel('your_model_name');
    console.log(model.deviceType); // Outputs: "DefaultRuntime/DefaultAgent"
    
  2. Switching Device Types After Initialization: You can change the device type even after the model has been initialized. The model will validate the requested device type against the system’s supported device types.

    model.deviceType = 'RUNTIME2/CPU';
    console.log(model.deviceType); // Outputs: "RUNTIME2/CPU"
    

    If the requested device type is not valid, an error will be thrown.

  3. Specifying a Device Type During Initialization: You can specify a device type when loading the model. The model will start with the specified device type if it’s available.

    let model = await zoo.loadModel('your_model_name', { deviceType: 'RUNTIME2/CPU' });
    console.log(model.deviceType); // Outputs: "RUNTIME2/CPU"
    
  4. Handling Multiple Device Types: The SDK allows you to provide a list of device types. The first available option in the list will be selected.

    model.deviceType = ['RUNTIME3/CPU', 'RUNTIME1/CPU'];
    console.log(model.deviceType); // Outputs: "RUNTIME3/CPU" if available, otherwise "RUNTIME1/CPU"
    
  5. Fallback and Error Handling: If none of the specified device types are supported, the model will throw an error, ensuring that only valid configurations are used.

    try {
        model.deviceType = ['INVALID/DEVICE', 'ANOTHER_INVALID/DEVICE'];
    } catch (e) {
        console.error('Error: Invalid device type selection');
    }
    
  6. Supported Device Types: You can check the supported device types for a model using the supportedDeviceTypes property.

    console.log(model.supportedDeviceTypes); // Outputs: ["RUNTIME1/CPU", "RUNTIME2/CPU"]
    
  7. System Supported Device Types: You can check the system’s list of supported devices for inference using the getSupportedDevices() method of the dg_sdk class.

    let dg = new dg_sdk();
    let aiserverDevices = dg.getSupportedDevices('targetAIServerIp');
    console.log(aiserverDevices); // Outputs: ["RUNTIME1/CPU", "RUNTIME2/CPU", "RUNTIME3/CPU"]
    let cloudDevices = dg.getSupportedDevices('cloud');
    console.log(cloudDevices); // Outputs: ["RUNTIME1/CPU", "RUNTIME2/CPU", "RUNTIME3/CPU"]
    

Device management in both AIServerModel and CloudServerModel is designed to be flexible, allowing you to fine-tune the inference environment. You can easily switch between device types, handle fallbacks, and ensure that your models are always running on supported configurations.
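
Putting these pieces together, you might query the system's supported devices and then load a model with a preference list. The sketch below uses placeholder runtime names, a placeholder model name, and the AI server address from earlier in this guide:

let dg = new dg_sdk();
// Ask the AI server which runtime/device combinations it supports
let devices = dg.getSupportedDevices('ws://localhost:8779');
console.log('System supports:', devices);

// Load with a preference list; the first supported entry is selected
let model = await zoo.loadModel('your_model_name', {
    deviceType: ['RUNTIME3/CPU', 'RUNTIME1/CPU']
});
console.log('Running on:', model.deviceType);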

Measure Time Usage

Enabling the measureTime flag will create a timeStats object within the model which holds various statistics (max, min, count, average) to track how long certain operations took.

Operations Tracked

  1. ImagePreprocessDuration_ms: Time taken for preprocessing the input image.
  2. CorePreprocessDuration_ms: Duration of server-side pre-processing step.
  3. CoreInferenceDuration_ms: Time taken for the actual inference operation on the server (between sending frame and receiving results).
  4. CoreLoadResultDuration_ms: Duration of server-side data movement step.
  5. CorePostprocessDuration_ms: Duration of server-side post-processing step.
  6. PythonPreprocessDuration_ms: Duration of client-side pre-processing step including data loading and data conversion time.
  7. FrameTotalDuration_ms: Total duration from calling the predict() or predict_batch() method to receiving the results.
  8. DeviceInferenceDuration_ms: (Orca models only) Duration of AI inference computations on AI accelerator hardware (DeGirum Orca).
  9. DeviceTemperature_C: (Orca models only) Internal temperature of AI accelerator hardware in Celsius (DeGirum Orca).
  10. DeviceFrequency_MHz: (Orca models only) Working frequency of AI accelerator hardware in MHz (DeGirum Orca).

Available methods

  1. getTimeStats(): Returns a formatted string of all the statistics collected so far.
  2. resetTimeStats(): Discards all collected statistics and creates a fresh timeStats object.
  3. To access the timeStats object directly, use model.timeStats.stats["statName"], where statName is one of the operations tracked above.

Example usage

let model = await zoo.loadModel('your_model_name', { measureTime: true });
let result = await model.predict(image);
console.log(model.getTimeStats()); // Pretty print time stats

// Access client-side and server-side timing stats
let preprocessDuration = model.timeStats.stats["ImagePreprocessDuration_ms"]; // Get image preprocess duration (min, avg, max, count)
let preprocessMin = model.timeStats.stats["ImagePreprocessDuration_ms"].min; // Get min image preprocess duration

let inferenceDuration = model.timeStats.stats["CoreInferenceDuration_ms"]; // Get core inference duration (min, avg, max, count)
let inferenceMax = model.timeStats.stats["CoreInferenceDuration_ms"].max; // Get max core inference duration

let frameTotalDuration = model.timeStats.stats["FrameTotalDuration_ms"]; // Get total time taken for the entire frame processing

let deviceTemp = model.timeStats.stats["DeviceTemperature_C"]; // Get device temperature if available

model.resetTimeStats(); // Reset time stats

API Reference

For detailed information on the SDK's classes, methods, and properties, refer to the API Reference.