Understanding Results

This page explains the structure and content of the inference result objects returned by the AIServerModel and CloudServerModel classes.

Both classes return a result object containing the inference results produced by the predict and predict_batch functions.

Example:

let someResult = await someModel.predict(image);
console.log(someResult);

For example, the result can be structured like this:

{
    "result": [
        [
            { "category_id": 1, "label": "foo", "score": 0.2 },
            { "category_id": 0, "label": "bar", "score": 0.1 }
        ],
        "frame123"
    ],
    "imageFrame": imageBitmap
}

Accessing the Result Data

  • Inference Results: Access the main results using someResult.result[0].

  • Frame Info / Number: Get the frame information or frame number using someResult.result[1].

  • Original Input Image: Access the original input image with someResult.imageFrame.
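
Putting these accessors together, a minimal sketch (variable names are illustrative):

// Pull the parts of a result object apart.
let someResult = await someModel.predict(image);
const inferenceResults = someResult.result[0]; // array of result entries
const frameInfo = someResult.result[1];        // frame info / number
const inputImage = someResult.imageFrame;      // original input image
console.log(`Frame ${frameInfo}: ${inferenceResults.length} entries`);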

Inference Result Types

The inference results can be one of the following types:

  • Detection

  • Classification

  • Pose Detection

  • Segmentation

Example Results

  1. Detection Result

Detection results include bounding boxes (bbox) along with category IDs, labels, and confidence scores for detected objects:

{
    "result": [
        [
            {
                "bbox": [101.98, 77.67, 175.04, 232.99],
                "category_id": 0,
                "label": "face",
                "score": 0.856
            },
            {
                "bbox": [314.91, 52.55, 397.32, 228.70],
                "category_id": 0,
                "label": "face",
                "score": 0.844
            }
        ],
        "frame_15897"
    ],
    "imageFrame": {}
}

In this example:

  • bbox represents the coordinates for each detected object's bounding box.

  • category_id is the numerical ID of the detected category.

  • label is the label of the detected category.

  • score represents the confidence of the detection.
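
As a sketch, detection entries can be filtered by confidence and reported like this; the 0.5 threshold is illustrative, and the comment assumes the bbox array holds corner coordinates as in the example above:

// Keep only confident detections and log their bounding boxes.
const detections = someResult.result[0];
const confident = detections.filter(obj => obj.score > 0.5); // illustrative threshold
for (const obj of confident) {
    const [x1, y1, x2, y2] = obj.bbox; // assuming corner-style box coordinates
    console.log(`${obj.label} (${obj.score.toFixed(2)}): [${x1}, ${y1}, ${x2}, ${y2}]`);
}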

  2. Pose Detection Result

Pose detection results include landmarks, with each landmark having coordinates (x, y), labels, and confidence scores:

{
    "result": [
        [
            {
                "landmarks": [
                    { "category_id": 0, "label": "Nose", "landmark": [93.99, 115.81], "score": 0.9986 },
                    { "category_id": 1, "label": "LeftEye", "landmark": [110.31, 98.96], "score": 0.9988 }
                ],
                "score": 0.4663
            }
        ],
        "frame_18730"
    ],
    "imageFrame": {}
}

In this example:

  • landmarks represent detected joints or points of interest with coordinates (x, y), a label, and a confidence score.

  • The connect field (if present) indicates which landmarks should be connected in visualizations.
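
For example, the landmarks of each detected pose can be walked like this (a sketch; each landmark array holds [x, y] coordinates as shown above):

// Log every landmark of every detected pose.
const poses = someResult.result[0];
for (const pose of poses) {
    for (const point of pose.landmarks) {
        const [x, y] = point.landmark; // [x, y] coordinates
        console.log(`${point.label}: (${x}, ${y}), score ${point.score}`);
    }
}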

  3. Classification Result

Classification results include category IDs, labels, and confidence scores but typically don’t have bounding boxes:

{
    "result": [
        [
            { "category_id": 401, "label": "academic gown, academic robe, judge's robe", "score": 0.8438 },
            { "category_id": 618, "label": "lab coat, laboratory coat", "score": 0.0352 }
        ],
        "frame_19744"
    ],
    "imageFrame": {}
}

In this example:

  • The model classifies the image into categories with associated confidence scores.
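
A minimal sketch for picking the top prediction; the sort is defensive, since the entries are not guaranteed here to arrive ordered by score:

// Pick the highest-scoring category.
const classes = someResult.result[0];
const top = [...classes].sort((a, b) => b.score - a.score)[0];
console.log(`Top prediction: ${top.label} (score ${top.score})`);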
