Preprocessing and Visual Overlays
Customize preprocessing and drawing parameters for DeGirumJS models.
DeGirumJS provides a rich set of user-facing properties that allow you to precisely control how input data is handled before inference (pre-processing) and how inference results are visually presented (overlays). This flexibility enables you to tailor the SDK's behavior to your specific application needs and aesthetic preferences.
The following examples show the parameters set as options when you load a model using zoo.loadModel(). You can also set any parameter after loading by invoking the corresponding setter method on the model instance.
Input Handling (Pre-processing)
The SDK automatically resizes and prepares your input images to match the dimensions required by the AI model. You can customize this process using the following parameters:
inputPadMethod
This parameter determines how the input image is scaled and positioned within the model's input frame, i.e., how your image is fitted to the shape the model expects.
'letterbox' (Default)
The image is resized to fit within the model's input dimensions while preserving its original aspect ratio. Any empty space (padding) around the image is filled with the color specified by inputLetterboxFillColor. This method prevents distortion and is generally recommended for most vision models.
let model = await zoo.loadModel('your_model', { inputPadMethod: 'letterbox' });
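The SDK performs the resize internally, but the geometry behind letterboxing can be sketched as follows (the function name and rounding choices here are illustrative, not the SDK's actual code):

```javascript
// Illustrative letterbox geometry: fit a srcW x srcH image into a
// dstW x dstH model input while preserving aspect ratio, and report
// where the fill color ends up. Not the SDK's internal implementation.
function letterbox(srcW, srcH, dstW, dstH) {
  const scale = Math.min(dstW / srcW, dstH / srcH);
  const newW = Math.round(srcW * scale);
  const newH = Math.round(srcH * scale);
  return {
    newW,
    newH,
    padX: Math.floor((dstW - newW) / 2), // left/right fill width
    padY: Math.floor((dstH - newH) / 2), // top/bottom fill height
  };
}

// A 1280x720 frame into a 640x640 input is scaled to 640x360,
// leaving 140-pixel bands of fill color above and below.
console.log(letterbox(1280, 720, 640, 640));
// → { newW: 640, newH: 360, padX: 0, padY: 140 }
```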
'stretch'
The image is stretched or shrunk to exactly match the model's input dimensions, regardless of its original aspect ratio. This can lead to image distortion but ensures the entire image fills the input frame.
let model = await zoo.loadModel('your_model', { inputPadMethod: 'stretch' });
'crop-first'
The image is first cropped to match the aspect ratio of the model's input, and then resized. The inputCropPercentage parameter determines how much of the original image is retained.
let model = await zoo.loadModel('your_model', { inputPadMethod: 'crop-first', inputCropPercentage: 0.9 });
'crop-last'
The image is resized first, and then cropped to fit the model's input dimensions.
let model = await zoo.loadModel('your_model', { inputPadMethod: 'crop-last', inputCropPercentage: 0.9 });
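The crop region itself is computed by the SDK; as a rough illustration of a centered crop that retains a given fraction of the image (the SDK's exact crop math, such as how it reconciles the crop with the model's aspect ratio, may differ):

```javascript
// Illustrative centered crop: retain a fraction `pct` of each dimension
// around the image center. Not the SDK's internal implementation.
function centerCrop(srcW, srcH, pct) {
  const cropW = Math.round(srcW * pct);
  const cropH = Math.round(srcH * pct);
  return {
    x: Math.floor((srcW - cropW) / 2), // left edge of the crop
    y: Math.floor((srcH - cropH) / 2), // top edge of the crop
    cropW,
    cropH,
  };
}

// Retaining 80% of a 1000x800 image keeps the central 800x640 region.
console.log(centerCrop(1000, 800, 0.8));
// → { x: 100, y: 80, cropW: 800, cropH: 640 }
```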
inputLetterboxFillColor
When inputPadMethod is 'letterbox', this parameter sets the RGB color of the padded areas.
Type: Array<number> (e.g., [R, G, B], where each component is 0-255)
Default: [0, 0, 0] (black)
let model = await zoo.loadModel('your_model', {
inputPadMethod: 'letterbox',
inputLetterboxFillColor: [255, 0, 0] // Red letterbox
});
inputCropPercentage
This parameter is used in conjunction with the 'crop-first' and 'crop-last' inputPadMethod values. It specifies the percentage of the image (after initial scaling, for 'crop-last') that should be retained after cropping.
Type: number (between 0 and 1)
Default: 1.0
let model = await zoo.loadModel('your_model', {
inputPadMethod: 'crop-first',
inputCropPercentage: 0.8 // Retain 80% of the cropped image
});
Overlay Customization
The model.displayResultToCanvas() method draws visual overlays (such as bounding boxes, labels, and keypoints) on a canvas. You can customize the appearance of these overlays using the following parameters:
overlayAlpha
Controls the transparency of the drawn overlays. A value of 1.0 means fully opaque, while 0.0 means fully transparent.
Type: number (between 0 and 1)
Default: 0.75
let model = await zoo.loadModel('your_model', { overlayAlpha: 0.5 }); // 50% transparent overlays
overlayColor
Sets the color(s) for drawing overlays. You can provide a single RGB triplet for a uniform color or an array of RGB triplets to cycle through different colors for different detected objects/classes.
Type: Array<number> (a single [R, G, B]) or Array<Array<number>> (multiple [[R, G, B], ...])
Default: [-1, -1, -1] (triggers automatic generation of distinct, bright colors)
let model = await zoo.loadModel('your_model', { overlayColor: [255, 0, 0] }); // All overlays will be red
let model = await zoo.loadModel('your_model', {
overlayColor: [[255, 0, 0], [0, 255, 0], [0, 0, 255]] // Cycles red, green, blue
});
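When multiple colors are supplied, they are cycled through across the drawn results. The cycling itself amounts to modular indexing into the palette, as in this standalone sketch (illustrative; whether the SDK keys colors per object or per class is as described above):

```javascript
// Standalone sketch of palette cycling: result i gets palette[i % n].
const palette = [[255, 0, 0], [0, 255, 0], [0, 0, 255]];
const colorFor = (objIndex) => palette[objIndex % palette.length];

console.log(colorFor(0)); // [ 255, 0, 0 ] — red
console.log(colorFor(4)); // [ 0, 255, 0 ] — wraps back to green
```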
overlayFontScale
Adjusts the size of text labels (e.g., class names, probabilities) drawn on the overlay. A value of 1.0 is the default size.
Type: number (positive)
Default: 1.0
let model = await zoo.loadModel('your_model', { overlayFontScale: 1.5 }); // 50% larger text
overlayLineWidth
Sets the width of lines used in overlays, such as bounding box borders and connections in pose detection.
Type: number (positive)
Default: 2
let model = await zoo.loadModel('your_model', { overlayLineWidth: 4 }); // Thicker lines
overlayShowLabels
A boolean flag to control the visibility of text labels (e.g., "person", "car") on the overlay.
Type: boolean
Default: true
let model = await zoo.loadModel('your_model', { overlayShowLabels: false }); // Hide labels
overlayShowProbabilities
A boolean flag to control the visibility of confidence scores (probabilities) alongside labels on the overlay.
Type: boolean
Default: false
let model = await zoo.loadModel('your_model', { overlayShowProbabilities: true }); // Show probabilities
autoScaleDrawing
When set to true, the SDK automatically scales the drawn overlays (bounding boxes, labels, keypoints) to appear consistent regardless of the input image's original dimensions or the canvas size. It uses targetDisplayWidth and targetDisplayHeight as the reference.
Type: boolean
Default: false
let model = await zoo.loadModel('your_model', { autoScaleDrawing: true });
targetDisplayWidth / targetDisplayHeight
These optional parameters are used in conjunction with autoScaleDrawing. They define a reference canvas size (e.g., 1920x1080) against which the overlay elements are scaled. If your target canvas size differs from the default, adjust these values for optimal visual presentation.
Type: number
Defaults: 1920 for width, 1080 for height
let model = await zoo.loadModel('your_model', {
autoScaleDrawing: true,
targetDisplayWidth: 1280,
targetDisplayHeight: 720
});
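The reference-resolution idea can be sketched as a single scale factor applied to line widths and font sizes (illustrative only; the SDK's actual scaling formula is not documented here):

```javascript
// Illustrative overlay scale factor: ratio of the actual canvas size
// to the reference size (targetDisplayWidth/Height). Not the SDK's code.
function overlayScale(canvasW, canvasH, refW = 1920, refH = 1080) {
  return Math.min(canvasW / refW, canvasH / refH);
}

// On a 1280x720 canvas against the default 1920x1080 reference,
// line widths and fonts would shrink to about two-thirds size.
console.log(overlayScale(1280, 720)); // ≈ 0.667
```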
Label Filtering
You can control which detected objects or classification results are included in the final output by filtering them based on their labels. This is particularly useful when you are only interested in a subset of the classes a model can detect.
labelBlacklist
An array of strings. Any result whose label matches an entry in this list will be excluded from the final output.
Type: Array<string>
Default: null (no labels are blacklisted by default)
let model = await zoo.loadModel('your_model', { labelBlacklist: ['cat', 'dog'] }); // Exclude cats and dogs
labelWhitelist
An array of strings. If this list is provided, only results whose label matches an entry in this list will be included in the final output. All other labels will be filtered out.
Type: Array<string>
Default: null (no labels are whitelisted by default; all are included unless blacklisted)
let model = await zoo.loadModel('your_model', { labelWhitelist: ['person', 'car'] }); // Only include persons and cars
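Taken together, the two lists behave like a filter pass over the result list. A minimal standalone sketch of the semantics described above (not the SDK's internal code):

```javascript
// Sketch of label-filtering semantics: a whitelist, when present, keeps
// only matching labels; a blacklist drops matching labels.
function filterResults(results, { labelWhitelist = null, labelBlacklist = null } = {}) {
  return results.filter((r) => {
    if (labelWhitelist && !labelWhitelist.includes(r.label)) return false;
    if (labelBlacklist && labelBlacklist.includes(r.label)) return false;
    return true;
  });
}

const detections = [{ label: 'person' }, { label: 'cat' }, { label: 'car' }];
console.log(filterResults(detections, { labelWhitelist: ['person', 'car'] }));
// → [ { label: 'person' }, { label: 'car' } ]
```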