# Getting Started

Welcome to DeGirumJS, a JavaScript AI inference SDK. This guide helps you integrate AI capabilities into your web application.

## Introduction

DeGirumJS allows you to connect to AI Server or Cloud Zoo instances, load AI models, and perform inference on various data types. This guide provides a step-by-step tutorial on how to get started.

## Core Concepts

There are three main objects you will work with in DeGirumJS:

* **dg\_sdk**: The main entry point to the library.
* **zoo**: Your connection to a model repository (either local or cloud). You use this to find and load models.
* **model**: The loaded model instance that you use to run predictions.

## Setup

### Import the SDK

To start using the SDK, include the following script tag in your HTML file:

{% code overflow="wrap" %}

```html
<script src="https://assets.degirum.com/degirumjs/0.1.5/degirum-js.min.obf.js"></script>
```

{% endcode %}

## A 5-Step Guide to Your First Prediction

DeGirumJS allows you to load models from an AI server or Cloud Zoo and perform inference on the AI Server hardware or in the cloud.

{% hint style="info" %}
For local or LAN inference, run the AI Server with HTTP enabled:

{% code overflow="wrap" %}

```bash
degirum server --protocol both
```

{% endcode %}

[See the AI Server documentation.](https://docs.degirum.com/pysdk/user-guide-pysdk/setting-up-an-ai-server)
{% endhint %}

To run cloud inference, or to load a model from a cloud zoo, you need to specify your cloud token.

[Where do I get my cloud token?](https://docs.degirum.com/ai-hub/workspaces/workspace-tokens)

{% stepper %}
{% step %}
**Connect to an Inference Provider**

**Connect to an AI Server**

Instantiate the `dg_sdk` class and connect to the AI Server using the `connect` method. Provide the server's IP address and port.

{% code overflow="wrap" %}

```javascript
let dg = new dg_sdk();
const AISERVER_IP = 'localhost:8779';

let zoo = await dg.connect(AISERVER_IP);
```

{% endcode %}

Have an AI Server running but want to use cloud models? To run AI Server inference on *cloud models*, include the URL of the cloud zoo and your token:

{% code overflow="wrap" %}

```javascript
let dg = new dg_sdk();
const AISERVER_IP = 'localhost:8779';
const ZOO_URL = 'https://cs.degirum.com/degirum/public';
const secretToken = prompt('Enter secret token:');

let zoo = await dg.connect(AISERVER_IP, ZOO_URL, secretToken);
```

{% endcode %}

**Connect to the Cloud**

To run cloud inference, pass `'cloud'` as the first argument and include the URL of the cloud zoo and your token:

{% code overflow="wrap" %}

```javascript
let dg = new dg_sdk();
const ZOO_URL = 'https://cs.degirum.com/degirum/public';
const secretToken = prompt('Enter secret token:');

let zoo = await dg.connect('cloud', ZOO_URL, secretToken);
```

{% endcode %}
{% endstep %}

{% step %}
**Load a Model**

Now you can load a model with the zoo instance's `loadModel` method:

{% code overflow="wrap" %}

```javascript
const MODEL_NAME = 'yolo_v5s_coco--512x512_quant_n2x_cpu_1';
const modelOptions = {
    overlayShowProbabilities: true
    // Any other custom options for your Model (see Model Options documentation)
};

let model = await zoo.loadModel(MODEL_NAME, modelOptions);
```

{% endcode %}

Use `zoo.listModels()` to discover which models are available on the selected inference provider.
{% endstep %}

{% step %}
**Run Inference**

Use the `predict` method to perform inference on an input image. The input to `predict` is flexible: it accepts a `Blob`, `File`, base64 string, `HTMLImageElement`, `HTMLVideoElement`, `HTMLCanvasElement`, `ArrayBuffer`, `TypedArray`, `ImageBitmap`, or a URL to an image. The full list of supported input types can be found in the Working with Input and Output Data documentation.

{% code overflow="wrap" %}

```javascript
const image = 'https://example.com/cat.jpg'; // hypothetical URL; any supported input type works
const result = await model.predict(image);
console.log('Result:', result);
```

{% endcode %}
{% endstep %}

{% step %}
**Understand the Output**

The result object contains the inference results from the model and the original `imageFrame`. For more details, see the [Result Object Structure](/degirumjs/guides/result-object-structure.md) documentation.
{% endstep %}

{% step %}
**Visualize the Results**

You can display prediction results on an `HTMLCanvasElement` or `OffscreenCanvas`:

{% code overflow="wrap" %}

```javascript
// Assuming your Canvas Element has the id 'outputCanvas'
let canvas = document.getElementById('outputCanvas');
model.displayResultToCanvas(result, canvas);
```

{% endcode %}

This will draw the inference results onto the canvas.
{% endstep %}
{% endstepper %}

## Putting It All Together: A Complete Example

To build a simple example page, you need the following HTML elements:

* The script tag to import DeGirumJS
* A canvas element to display inference results
* An input element to browse and upload images

Here is an HTML page that performs inference on uploaded images and displays the results:

{% code overflow="wrap" %}

```html
<script src="https://assets.degirum.com/degirumjs/0.1.5/degirum-js.min.obf.js"></script>
<canvas id="outputCanvas" width="400" height="400"></canvas>
<input type="file" id="imageInput" accept="image/*">
<script type="module">
    // Grab the outputCanvas and imageInput elements by ID:
    const canvas = document.getElementById('outputCanvas');
    const input = document.getElementById('imageInput');
    
    // Initialize the SDK
    let dg = new dg_sdk();
    // Query the user for the cloud token:
    const secretToken = prompt('Enter your cloud token:');
    // Inference settings
    const MODEL_NAME = 'yolo_v5s_coco--512x512_quant_n2x_cpu_1';
    const ZOO_URL = 'https://cs.degirum.com/degirum/public';
    const AISERVER_IP = 'localhost:8779';
    
    // Connect to the AI Server, loading models from the cloud zoo
    let zoo = await dg.connect(AISERVER_IP, ZOO_URL, secretToken);
    
    // Model options
    const modelOptions = {
        overlayShowProbabilities: true
    };
    // Load the model with the options
    let model = await zoo.loadModel(MODEL_NAME, modelOptions);
    
    // Function to run inference on uploaded files
    input.onchange = async function () {
        let file = input.files[0];
        // Predict
        let result = await model.predict(file);
        console.log('Result from file:', result);
        // Display result to canvas
        model.displayResultToCanvas(result, canvas);
    };
</script>
```

{% endcode %}

## Cleaning Up

When a model is no longer needed, release its resources with the `cleanup` method:

{% code overflow="wrap" %}

```javascript
await model.cleanup();
```

{% endcode %}

This will stop all running inferences and clean up resources used by the model instance.

## Where to Go Next

Continue exploring the following topics to learn more about DeGirumJS.

* [Model Parameters](/degirumjs/guides/model-parameters.md)
* [Connection Modes](/degirumjs/guides/connection-modes.md)
* [Real-Time Batch Inference](/degirumjs/guides/batch-inference.md)
* [Performance & Timing Statistics](/degirumjs/guides/timing.md)
* [Customizing Pre-processing and Visual Overlays](/degirumjs/guides/pre-post-processing.md)
* [Working with Input and Output Data](/degirumjs/guides/input-output-data.md)
* [Device Management for Inference](/degirumjs/guides/device-management.md)
* [Result Object Structure + Examples](/degirumjs/guides/result-object-structure.md)
* [WebCodecs Example](/degirumjs/guides/web-codecs-example.md)
* [Release Notes](/degirumjs/all-release-notes.md)

## API Reference

For detailed information on the SDK's classes, methods, and properties, refer to the [API Reference](https://assets.degirum.com/degirumjs/0.1.5/api/index.html).


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available on this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://docs.degirum.com/degirumjs/get-started.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present in the current page, you need clarification or additional context, or you want to retrieve related documentation sections.
