Release Notes

Changelog of DeGirumJS releases.

Version 0.1.4 (7/7/2025)

New Features and Modifications

  1. Performance Optimizations:

    • Improved the performance of predict and predict_batch by optimizing the handling of internal asynchronous operations and removing internal queue overhead.

    • Internal timeout mechanisms have been optimized.

  2. New Model Parameters:

| Parameter | Type | Description |
| --- | --- | --- |
| eagerBatchSize | integer | Controls server-side maximum batch size. Use this to improve throughput on some models. |
| outputPoseThreshold | float | A dedicated confidence threshold for pose estimation models. |
| outputPostprocessType | string | The list of valid post-processing types has been expanded for full compatibility. |
| inputShape | Array<Array> | (Advanced) Allows you to get or set the input shapes for models. The format is an array of shape arrays, e.g., [[1, 224, 224, 3]]. |
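The sketch below shows one way these parameters might be applied to a loaded model. The model name, the zoo.loadModel(...) call, and the property-style assignment are illustrative assumptions rather than a definitive API reference.

    // Illustrative sketch: loadModel() and property-style assignment are
    // assumptions; adapt to your actual zoo/model setup.
    let dg = new dg_sdk();
    let zoo = await dg.connect('ws://localhost:8779');
    let model = await zoo.loadModel('some_pose_model'); // hypothetical model name

    model.eagerBatchSize = 8;        // server-side maximum batch size
    model.outputPoseThreshold = 0.5; // confidence threshold for pose models

    // (Advanced) Inspect or override the model input shape.
    console.log(model.inputShape);   // e.g., [[1, 224, 224, 3]]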

  3. New Timing Statistics

    • Added timing statistics to profile the full inference lifecycle of a frame inside DeGirumJS. The model.getTimeStats() method now includes more detailed metrics to help you pinpoint performance bottlenecks:

      InputFrameConvert_ms: Time spent converting the input image to a usable format.
      EncodeEmit_ms: Time spent encoding and sending the payload.
      ResultProcessing_ms: Time spent on the client processing the result from the server.
      ResultQueueWaitingTime_ms: Time a result spent in the queue before being returned to your code.
      MutexWait_ms: Time spent waiting for the prediction lock (for single predict calls).

  4. New model.printLatencyInfo() Method

    • After running inference with measureTime enabled, you can call this new method to get a clean, human-readable breakdown of where time is being spent:

      Total End-to-End Latency
      Total Client-Side Processing Time (preprocessing, etc.)
      Total Server-Side Processing Time (inference, etc.)
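A minimal sketch of the profiling workflow from items 3 and 4, assuming measureTime is enabled as a model property and that a model object and an inputImage already exist; the exact shape of the stats object is also an assumption.

    // Sketch only: how measureTime is enabled and how the stats object is
    // shaped are assumptions.
    model.measureTime = true;

    let result = await model.predict(inputImage);

    // Detailed per-stage metrics, including the new fields listed above.
    let stats = model.getTimeStats();
    console.log(stats.InputFrameConvert_ms, stats.ResultQueueWaitingTime_ms);

    // Human-readable breakdown of end-to-end, client-side, and server-side time.
    model.printLatencyInfo();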

Bug Fixes

  1. Cloud Model Stability: Automatic Parameter Hydration

    • When a model is loaded from the cloud, the SDK now automatically hydrates the partial parameters received from the server, filling in any missing values with their correct defaults. As a result, any parameter of a CloudServerModel instance can now be modified.


Version 0.1.3 (1/8/2025)

New Features and Modifications

  1. New drawing parameters for autoScaleDrawing in the model classes

    • Added two optional parameters, targetDisplayWidth and targetDisplayHeight, to specify a custom reference resolution when autoScaleDrawing is enabled. Previously, the reference resolution was fixed at 1920x1080.

    • Defaults to 1920x1080 if no values are provided.

    • Ensures consistent scaling of overlays (e.g., bounding boxes, labels, keypoints) across varying input image dimensions.
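A short sketch of the new options. The property-style assignment and the displayResultToCanvas argument order are assumptions for illustration; model and inputImage are assumed to already exist.

    // Sketch: parameter names are as documented; the assignment style and the
    // displayResultToCanvas argument order are illustrative assumptions.
    model.autoScaleDrawing = true;
    model.targetDisplayWidth = 1280;  // custom reference resolution
    model.targetDisplayHeight = 720;  // defaults to 1920x1080 if omitted

    let result = await model.predict(inputImage);
    model.displayResultToCanvas(result, document.getElementById('overlayCanvas'));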

Bug Fixes

  1. Fixed a bug where backend errors were thrown asynchronously from the predict and predict_batch functions and could not be caught by the caller. Now, these errors can be caught and handled gracefully.
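With this fix, the error surfaces through the returned promise, so an ordinary try/catch works, as sketched below (the model and frame variables are assumed to already exist).

    try {
      let result = await model.predict(frame);
      // ... use result
    } catch (err) {
      // Backend errors now propagate to the caller instead of being
      // thrown asynchronously where they could not be handled.
      console.error('Inference failed:', err);
    }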


Version 0.1.2 (1/7/2025)

New Features and Modifications

  1. Lightweight listModels() function: Querying the list of models from the cloud (for CloudZoo classes) now fetches only the model names. Full model parameters can be fetched with a new function: getModelInfo(modelName).

  2. Updated autoScaleDrawing parameter for the model classes' displayResultToCanvas() function: The parameter now scales all results for optimal viewing at 1080p resolution. autoScaleDrawing removes the guesswork of sizing overlays for varying input image dimensions by comparing the actual canvas size to a reference resolution (e.g., 1080p) and scaling accordingly.
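A minimal sketch of the two-step flow from item 1, assuming a cloud zoo object obtained via dg.connect(...) and that both calls return promises.

    // Lightweight call: returns only the model names.
    let modelNames = await zoo.listModels();

    // Fetch the full parameters of a specific model on demand.
    let modelInfo = await zoo.getModelInfo(modelNames[0]);
    console.log(modelInfo);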


Version 0.1.1 (12/31/2024)

New Features and Modifications

  1. Asynchronous dg.connect(...): The dg.connect(...) method is now asynchronous. You should use await dg.connect(...) to properly wait for initialization. This improvement ensures the AI Server or Cloud connections (and their respective zoo classes) are fully ready before returning objects.

    let dg = new dg_sdk();
    // Old:
    // let zoo = dg.connect('ws://localhost:8779');
    // New:
    let zoo = await dg.connect('ws://localhost:8779');

  2. ReadableStream Support in predict_batch: Both AIServerModel and CloudServerModel now accept a ReadableStream, in addition to an async iterable, for the predict_batch(...) method. This makes it easier to stream frames or data chunks directly from sources like the WebCodecs API or other stream-based pipelines (see the combined sketch after this list).

  3. predict() and predict_batch() Accept VideoFrame: These methods now also accept VideoFrame objects as valid inputs.

  4. OffscreenCanvas Support in displayResultToCanvas(): You can now draw inference results onto an OffscreenCanvas as well as a standard <canvas> element.

  5. Brighter Overlay Colors: Default generated overlay colors have been adjusted to be more visible on dark backgrounds.

  6. Support for SegmentationYoloV8 Postprocessing: Added the ability to draw results from models that use the SegmentationYoloV8 postprocessor.
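The sketch below combines items 2-4: a ReadableStream of VideoFrame objects from a camera (via the standard WebCodecs MediaStreamTrackProcessor) is passed to predict_batch, and each result is drawn to an OffscreenCanvas. Consuming predict_batch as an async iterable, the preloaded model object, and the displayResultToCanvas argument order are assumptions for illustration.

    // Sketch: model loading is omitted; the iteration style and the
    // displayResultToCanvas argument order are illustrative assumptions.
    let stream = await navigator.mediaDevices.getUserMedia({ video: true });
    let track = stream.getVideoTracks()[0];

    // WebCodecs: a ReadableStream of VideoFrame objects from the camera.
    let frames = new MediaStreamTrackProcessor({ track }).readable;

    let offscreen = new OffscreenCanvas(1920, 1080);

    // predict_batch now accepts the ReadableStream directly.
    for await (let result of model.predict_batch(frames)) {
      model.displayResultToCanvas(result, offscreen);
    }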

Bug Fixes

  1. Proper Overlay Color for Age Classification: Overlay colors for per-person text in age classification models are now correctly set.

  2. Postprocessing Improvements: Various fixes and optimizations have been implemented in the postprocessing code.


Version 0.1.0 (10/4/2024)

New Features and Modifications

  1. Optimized Cloud inference connection handling: resources are now used only when needed and released properly.

  2. New default color generation logic creates a more visually appealing set of colors for different types of models when viewing inference results.


Version 0.0.9 (9/17/2024)

New Features and Modifications

  1. Optimized Mask Drawing in displayResultToCanvas() for results from Detection models with masks per detected object.

Bug Fixes

  1. Postprocessing for Detection models that return masks now handles inputPadMethod options properly.

