Welcome to the DeGirum JavaScript AI Inference SDK! This guide will help you get started with integrating AI inference capabilities into your web application. Follow the steps below to set up your environment, connect to the AI server, and run inference on an image.
The JavaScript AI Inference SDK allows you to connect to AI Server or Cloud Zoo instances, load AI models, and perform inference on various data types. This guide provides a step-by-step tutorial on how to use the SDK effectively.
To start using the SDK, include the following script tag in your HTML file:
<script src="https://docs.degirum.com/degirumjs/degirum-js.min.obf.js"></script>
Instantiate the `dg_sdk` class and connect to the AI server using the `connect` method:
let dg = new dg_sdk();
const AISERVER_IP = 'ws://localhost:8779';
let zoo = dg.connect(AISERVER_IP);
For running AI Server inference on cloud models, include the URL of the cloud zoo and your token:
let dg = new dg_sdk();
const AISERVER_IP = 'ws://localhost:8779';
const ZOO_URL = 'https://cs.degirum.com/degirum/public';
const secretToken = prompt('Enter secret token:');
let zoo = dg.connect(AISERVER_IP, ZOO_URL, secretToken);
For running Cloud inference, specify 'cloud' as the first argument, and include the URL of the cloud zoo and your token:
let dg = new dg_sdk();
const ZOO_URL = 'https://cs.degirum.com/degirum/public';
const secretToken = prompt('Enter secret token:');
let zoo = dg.connect('cloud', ZOO_URL, secretToken);
Now, you can load a model using the zoo class instance's `loadModel` method:
const MODEL_NAME = 'yolo_v5s_face_det--512x512_quant_n2x_cpu_1';
const modelOptions = {
  inputPadMethod: 'stretch'
};
let model = await zoo.loadModel(MODEL_NAME, modelOptions);
Use the `predict` method to perform inference on an input image:
const image = ''; // Supply your input image here (e.g., a File object, as in the full example below)
const result = await model.predict(image);
console.log('Result:', result);
You can display prediction results on an `HTMLCanvasElement` using the `displayResultToCanvas` method:
// Assuming your Canvas Element has the id 'outputCanvas'
let canvas = document.getElementById('outputCanvas');
model.displayResultToCanvas(result, canvas);
To get started with a simple example page, we need the following HTML elements on the page:
- The script tag to import DeGirumJS.
- A canvas element to display inference results.
- An input element to browse and upload images.
Here is an HTML page that will perform inference on uploaded images and display the results:
<script src="https://docs.degirum.com/degirumjs/degirum-js.min.obf.js"></script>
<canvas id="outputCanvas" width="400" height="400"></canvas>
<input type="file" id="imageInput" accept="image/*">
<script type="module">
// Grab the outputCanvas and imageInput elements by ID:
const canvas = document.getElementById('outputCanvas');
const input = document.getElementById('imageInput');
// Initialize the SDK
let dg = new dg_sdk();
// Query the user for the cloud token:
const secretToken = prompt('Enter secret token:');
// Inference settings
const MODEL_NAME = 'yolo_v5s_face_det--512x512_quant_n2x_cpu_1';
const ZOO_URL = 'https://cs.degirum.com/degirum/public';
const AISERVER_IP = 'ws://localhost:8779';
// Connect to the AI server, using the cloud zoo for the model
let zoo = dg.connect(AISERVER_IP, ZOO_URL, secretToken);
// Model options
const modelOptions = {
  overlayShowProbabilities: true
};
// Load the model with the options
let model = await zoo.loadModel(MODEL_NAME, modelOptions);
// Function to run inference on uploaded files
input.onchange = async function () {
  let file = input.files[0];
  // Predict
  let result = await model.predict(file);
  console.log('Result from file:', result);
  // Display result to canvas
  model.displayResultToCanvas(result, canvas);
}
</script>
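If you do not have a local AI server running, the same page can use cloud inference instead. Based on the 'cloud' connection form shown earlier, only the connect call changes; this is a sketch assuming everything else on the page stays as above:

// Cloud inference variant: no local AI server needed, so AISERVER_IP is not used
let zoo = dg.connect('cloud', ZOO_URL, secretToken);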
When loading a model, you can specify various options to customize its behavior; a combined example follows the list:
- `inputCropPercentage`: Set the percentage to crop the input image.
- `inputLetterboxFillColor`: Set the color for letterboxing.
- `inputPadMethod`: Set the padding method for input.
- `labelBlacklist`: Specify labels to exclude.
- `labelWhitelist`: Specify labels to include.
- `outputConfidenceThreshold`: Set the confidence threshold for outputs.
- `outputMaxClassesPerDetection`: Set the maximum number of classes per detection.
- `outputMaxDetections`: Set the maximum number of detections.
- `outputMaxDetectionsPerClass`: Set the maximum number of detections per class.
- `outputNmsThreshold`: Set the non-maximum suppression threshold.
- `outputPoseThreshold`: Set the pose threshold.
- `outputPostprocessType`: Set the post-process type.
- `outputTopK`: Set the top K results to output.
- `outputUseRegularNms`: Use regular non-maximum suppression.
- `overlayAlpha`: Set the transparency of the overlay.
- `overlayColor`: Set the color for the overlay.
- `overlayFontScale`: Set the font scale for overlay text.
- `overlayLineWidth`: Set the width of the overlay lines.
- `overlayShowLabels`: Show or hide labels in the overlay.
- `overlayShowProbabilities`: Show or hide probabilities in the overlay.
- `saveModelImage`: Flag to enable attaching the model image to the results.
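The sketch below combines several of these options in a single `loadModel` call. The option names come from the list above, but the specific values are illustrative assumptions rather than tuned recommendations, and the `detectionOptions` / `detectionModel` names are used only for this sketch:

// Minimal sketch: loading a model with several options (values are assumptions for illustration)
const detectionOptions = {
  outputConfidenceThreshold: 0.5,   // assumed confidence cutoff in the 0..1 range
  outputMaxDetections: 20,          // cap on the number of reported detections
  overlayShowLabels: true,          // draw class labels on the result overlay
  overlayShowProbabilities: true,   // draw probabilities on the result overlay
  overlayLineWidth: 2               // assumed overlay line width, in pixels
};
let detectionModel = await zoo.loadModel(MODEL_NAME, detectionOptions);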
To destroy / clean up a model instance, use the `cleanup` method:
await model.cleanup();
This will stop all running inferences and clean up resources used by the model instance.
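For example, one way to make sure the model is released when you are finished with it is a try/finally block; this is a general JavaScript pattern rather than an SDK requirement, and it assumes `model` and `image` are set up as shown earlier:

// Illustrative pattern: ensure cleanup runs even if predict() throws
try {
  const result = await model.predict(image);
  console.log('Result:', result);
} finally {
  await model.cleanup(); // stop running inferences and release model resources
}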