Postprocessor Module


degirum.postprocessor.ClassificationResults

Bases: InferenceResults

InferenceResult class implementation for classification results type

image_overlay

degirum.postprocessor.ClassificationResults.image_overlay

property

Image with AI inference results drawn. The image type is defined by the selected graphical backend. Each time this property is accessed, a new overlay image object is created, and all overlay details are redrawn according to the current settings of the overlay_* properties.
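
For illustration, a minimal sketch of obtaining such a result and accessing its overlay (the zoo URL, token, model name, and image path are placeholders, not part of this API):

import degirum as dg

zoo = dg.connect(dg.CLOUD, "https://hub.degirum.com", "<your token>")  # placeholder URL/token
model = zoo.load_model("<classification model name>")                  # placeholder model name
result = model("cat.jpg")                                              # equivalent to model.predict("cat.jpg")
overlay = result.image_overlay                                         # new overlay object on each access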

overlay_show_labels_below

degirum.postprocessor.ClassificationResults.overlay_show_labels_below

property writable

Specifies whether overlay labels should be drawn below the image or on the image itself.

__str__

degirum.postprocessor.ClassificationResults.__str__()

Convert inference results to string

degirum.postprocessor.DetectionResults

Bases: InferenceResults

InferenceResult class implementation for detection results type

image_overlay

degirum.postprocessor.DetectionResults.image_overlay

property

Image with AI inference results drawn. Image type is defined by the selected graphical backend.

__str__

degirum.postprocessor.DetectionResults.__str__()

Convert inference results to string

generate_overlay_color(model_params, ...)

degirum.postprocessor.DetectionResults.generate_overlay_color(model_params, label_dict)

staticmethod

Overlay colors generator.

Returns:

  • list: overlay color data for detection results.

degirum.postprocessor.Hand_DetectionResults

Bases: InferenceResults

InferenceResult class implementation for pose detection results type

image_overlay

degirum.postprocessor.Hand_DetectionResults.image_overlay

property

Image with AI inference results drawn. Image type is defined by the selected graphical backend.

__str__

degirum.postprocessor.Hand_DetectionResults.__str__()

Convert inference results to string

degirum.postprocessor.InferenceResults

Inference results container class.

This class is a base class for a set of classes designed to handle inference results of particular model types, such as classification, detection, etc.

Note

You never construct InferenceResults objects yourself: instances of its subclasses are returned as results of AI inferences by the degirum.model.Model.predict, degirum.model.Model.predict_batch, and degirum.model.Model.predict_dir methods.

image

degirum.postprocessor.InferenceResults.image

property

Original image.

image_model

degirum.postprocessor.InferenceResults.image_model

property

Model input image data: image converted to AI model input specifications.

The image type is a raw binary array.

image_overlay

degirum.postprocessor.InferenceResults.image_overlay

property

Image with AI inference results drawn on top of the original image.

Drawing details depend on the inference result type:

  • For classification models, the list of class labels with probabilities is printed below the original image.

  • For object detection models, bounding boxes of detected objects are drawn on the original image.

  • For pose detection models, detected keypoints and keypoint connections are drawn on the original image.

  • For segmentation models, detected segments are drawn on the original image.

The returned image object type is defined by the selected graphical backend (see degirum.model.Model.image_backend).
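
Overlay settings can be adjusted before accessing the property; a sketch assuming result was returned by one of the predict methods:

# tweak overlay settings, then regenerate the overlay image
result.overlay_color = (255, 0, 0)         # single RGB color for all classes
result.overlay_line_width = 2
result.overlay_show_probabilities = True
overlay = result.image_overlay             # redrawn according to the settings above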

info

degirum.postprocessor.InferenceResults.info

property

Input data frame information object.

overlay_alpha

degirum.postprocessor.InferenceResults.overlay_alpha

property writable

Alpha-blend weight for overlay details.

overlay_blur

degirum.postprocessor.InferenceResults.overlay_blur

property writable

Overlay blur option: None for no blur, "all" to blur all objects, or a class label or list of class labels to blur specific objects.
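
A short sketch of the accepted values (the class labels are hypothetical):

result.overlay_blur = None                 # no blur
result.overlay_blur = "all"                # blur every detected object
result.overlay_blur = ["person", "car"]    # blur only objects with these labels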

overlay_color

degirum.postprocessor.InferenceResults.overlay_color

property writable

Color for inference results drawing on overlay image.

3-element RGB tuple or list of 3-element RGB tuples.

overlay_fill_color

degirum.postprocessor.InferenceResults.overlay_fill_color

property writable

Image fill color in case of image padding.

3-element RGB tuple.

overlay_font_scale

degirum.postprocessor.InferenceResults.overlay_font_scale

property writable

Font scale to use for overlay text.

overlay_line_width

degirum.postprocessor.InferenceResults.overlay_line_width

property writable

Line width in pixels for inference results drawing on overlay image.

overlay_show_labels

degirum.postprocessor.InferenceResults.overlay_show_labels

property writable

Specifies whether class labels should be drawn on the overlay image.

overlay_show_probabilities

degirum.postprocessor.InferenceResults.overlay_show_probabilities

property writable

Specifies whether class probabilities should be drawn on the overlay image.

results

degirum.postprocessor.InferenceResults.results

property

Inference results list.

Each element of the list is a dictionary containing information about one inference result. The dictionary contents depend on the AI model.

For classification models each inference result dictionary contains the following keys:

  • category_id: class numeric ID.

  • label: class label string.

  • score: class probability.

Example

[
    {'category_id': 0, 'label': 'cat', 'score': 0.99},
    {'category_id': 1, 'label': 'dog', 'score': 0.01}
]
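
Given that format, the top-scoring class can be extracted as follows (a sketch assuming result is a classification inference result object):

top = max(result.results, key=lambda r: r["score"])
print(f"{top['label']} ({top['score']:.2f})")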

For multi-label classification models each inference result dictionary contains the following keys:

  • classifier: object class string.

  • results: list of class labels and their scores. Scores are optional.

The results list element is a dictionary with the following keys:

  • label: class label string.

  • score: optional class label probability.

Example

[
    {
        'classifier': 'vehicle color',
        'results': [
            {'label': 'red', 'score': 0.99},
            {'label': 'blue', 'score': 0.01}
         ]
    },
    {
        'classifier': 'vehicle type',
        'results': [
            {'label': 'car', 'score': 0.99},
            {'label': 'truck', 'score': 0.01}
        ]
    }
]
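
A sketch of walking this structure, assuming result holds multi-label classification results (score is optional, hence the default):

for group in result.results:
    best = max(group["results"], key=lambda r: r.get("score", 0.0))
    print(f"{group['classifier']}: {best['label']}")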

For object detection models each inference result dictionary may contain the following keys:

  • category_id: detected object class numeric ID.

  • label: detected object class label string.

  • score: detected object probability.

  • bbox: detected object bounding box list [xtop, ytop, xbot, ybot].

  • landmarks: optional list of keypoints or landmarks. It is a list of dictionaries, one per keypoint/landmark.

  • mask: optional dictionary of a run-length encoded (RLE) object segmentation mask array representation.

  • angle: optional angle (in radians) for rotating the bounding box around its center. This is used in the case of oriented bounding boxes.

The landmarks list is defined for special cases like pose detection or face keypoint detection results. Each landmarks list element is a dictionary with the following keys:

  • category_id: keypoint numeric ID.

  • label: keypoint label string.

  • score: keypoint detection probability.

  • landmark: keypoint coordinate list [x,y].

  • connect: optional list of IDs of connected keypoints.

The mask dictionary is defined for the special case of object segmentation results, with the following keys:

  • x_min: x-coordinate in the model input image at which the top-left corner of the box enclosing this mask should be placed.

  • y_min: y-coordinate in the model input image at which the top-left corner of the box enclosing this mask should be placed.

  • height: height of the segmentation mask array.

  • width: width of the segmentation mask array.

  • data: string representation of a buffer of unsigned 32-bit integers carrying the RLE segmentation mask array.

The object detection keys (bbox, score, label, and category_id) must be either all present or all absent. In the former case, the result format is suitable to represent pure object detection results. In the latter case, one of the following keys must be present:

  • the landmarks key

  • the mask key

The following statements are then true:

  • If the landmarks key is present, the result format is suitable to represent pure landmark detection results, such as pose detection.

  • If the mask key is present, the result format is suitable to represent pure segmentation results. If, optionally, the category_id key is also present, the result format is suitable to represent semantic segmentation results.

When both object detection keys and the landmarks key are present, the result format is suitable to represent mixed model results, when the model detects not only object bounding boxes, but also keypoints/landmarks within the bounding box.

When both object detection keys and the mask key are present, the result format is suitable to represent mixed model results, when the model detects not only object bounding boxes, but also segmentation masks within the bounding box (i.e. instance segmentation).

Example of pure object detection results:

Example

[
    {'category_id': 0, 'label': 'cat', 'score': 0.99, 'bbox': [10, 20, 100, 200]},
    {'category_id': 1, 'label': 'dog', 'score': 0.01, 'bbox': [200, 100, 300, 400]}
]
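
Since the object detection keys may be absent, client code should check for them; a sketch assuming result holds detection results and a hypothetical 0.5 confidence threshold:

for obj in result.results:
    if "bbox" in obj and obj.get("score", 0.0) >= 0.5:
        x1, y1, x2, y2 = obj["bbox"]
        print(f"{obj['label']} {obj['score']:.2f} at [{x1}, {y1}, {x2}, {y2}]")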

Example of oriented object detection results:

Example

[
    {'category_id': 0, 'label': 'car', 'score': 0.99, 'bbox': [10, 20, 100, 200], 'angle': 0.79}
]

Example of landmark object detection results:

Example

[
    {
        'landmarks': [
            {'category_id': 0, 'label': 'Nose', 'score': 0.99, 'landmark': [10, 20]},
            {'category_id': 1, 'label': 'LeftEye', 'score': 0.98, 'landmark': [15, 25]},
            {'category_id': 2, 'label': 'RightEye', 'score': 0.97, 'landmark': [18, 28]}
        ]
    }
]
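
The connect lists reference keypoints by ID, so connections can be resolved with a lookup table; a sketch assuming a single landmark detection result:

landmarks = result.results[0]["landmarks"]
by_id = {kp["category_id"]: kp for kp in landmarks}
for kp in landmarks:
    for peer in kp.get("connect", []):
        print(f"{kp['label']} <-> {by_id[peer]['label']}")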

Example of segmented object detection results:

Example

[
    {
        'mask': {'x_min': 1, 'y_min': 1, 'height': 2, 'width': 2, 'data': 'AAAAAAEAAAAAAAAAAQAAAAIAAAABAAAA'}
    }
]

For hand palm detection models each inference result dictionary contains the following keys:

  • score: probability of detected hand.

  • handedness: probability of right hand.

  • landmarks: list of dictionaries, one per hand keypoint.

Each landmarks list element is a dictionary with the following keys:

  • label: classified object class label.

  • category_id: classified object class index.

  • landmark: landmark point coordinate list [x, y, z].

  • world_landmark: metric world landmark point coordinate list [x, y, z].

  • connect: list of adjacent landmark indexes.

Example

[
    {
        'score': 0.99,
        'handedness': 0.98,
        'landmarks': [
            {
                'label': 'Wrist',
                'category_id': 0,
                'landmark': [10, 20, 30],
                'world_landmark': [10, 20, 30],
                'connect': [1]
            },
            {
                'label': 'Thumb',
                'category_id': 1,
                'landmark': [15, 25, 35],
                'world_landmark': [15, 25, 35],
                'connect': [0]
            }
        ]
    }
]

For segmentation models, the inference result is a single-element list. That single element is a dictionary containing the single key data. The value of this key is a 2D numpy array of integers, where each integer value represents the class ID of the corresponding pixel. The class IDs are defined by the model label dictionary.

Example

[
    {
        'data': numpy.array([
            [0, 0, 0, 1, 1, 1],
            [0, 0, 0, 1, 1, 1],
            [0, 0, 0, 1, 1, 1],
            [2, 2, 2, 3, 3, 3],
            [2, 2, 2, 3, 3, 3],
            [2, 2, 2, 3, 3, 3],
        ])
    }
]
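
Because the value is a plain numpy array of class IDs, per-class pixel counts can be computed directly; a sketch assuming result holds segmentation results:

import numpy as np

seg = result.results[0]["data"]                   # 2D array of class IDs
ids, counts = np.unique(seg, return_counts=True)
for class_id, pixel_count in zip(ids, counts):
    print(class_id, pixel_count)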

type

degirum.postprocessor.InferenceResults.type

property

Inference result type: one of

  • "classification"

  • "detection"

  • "pose detection"

  • "segmentation"

__init__(*, ...)

degirum.postprocessor.InferenceResults.__init__(*, model_params, input_image=None, model_image=None, inference_results, draw_color=(255, 255, 128), line_width=3, show_labels=True, show_probabilities=False, alpha='auto', font_scale=1.0, fill_color=(0, 0, 0), blur=None, frame_info=None, conversion, label_dictionary={})

Constructor.

Note

You never construct InferenceResults objects yourself; objects of these classes are returned by the various predict methods of the degirum.model.Model class.

Parameters:

  • model_params (ModelParams, required): Model parameters object as returned by degirum.model.Model.model_info.

  • input_image (any, default None): Original input data.

  • model_image (any, default None): Input data converted per AI model input specifications.

  • inference_results (list, required): Inference results data.

  • draw_color (tuple, default (255, 255, 128)): Color for inference results drawing on overlay image.

  • line_width (int, default 3): Line width in pixels for inference results drawing on overlay image.

  • show_labels (bool, default True): True to draw class labels on overlay image.

  • show_probabilities (bool, default False): True to draw class probabilities on overlay image.

  • alpha (Union[float, str], default 'auto'): Alpha-blend weight for overlay details.

  • font_scale (float, default 1.0): Font scale to use for overlay text.

  • fill_color (tuple, default (0, 0, 0)): RGB color tuple to use for filling if any form of padding is used.

  • blur (Union[str, list, None], default None): Optional blur parameter to apply to the overlay image. If None, no blur is applied. If "all", all objects are blurred. If a class label or a list of class labels is provided, only objects with those labels are blurred.

  • frame_info (any, default None): Input data frame information object.

  • conversion (Callable, required): Coordinate conversion function accepting two arguments (x, y) and returning a two-element tuple. This function should convert model-based coordinates to input image coordinates.

  • label_dictionary (dict[str, str], default {}): Model label dictionary.

__str__

degirum.postprocessor.InferenceResults.__str__()

Conversion to string

generate_colors

degirum.postprocessor.InferenceResults.generate_colors()

staticmethod

Generate a list of unique RGB color tuples.

generate_overlay_color(model_params, ...)

degirum.postprocessor.InferenceResults.generate_overlay_color(model_params, label_dict)

staticmethod

Overlay colors generator.

Parameters:

  • model_params (ModelParams, required): Model parameters.

  • label_dict (dict, required): Model labels dictionary.

Returns:

  • Union[list, tuple]: Overlay color tuple or list of tuples.

degirum.postprocessor.MultiLabelClassificationResults

Bases: InferenceResults

InferenceResult class implementation for multi-label classification results type

image_overlay

degirum.postprocessor.MultiLabelClassificationResults.image_overlay

property

Image with AI inference results drawn. The image type is defined by the selected graphical backend. Each time this property is accessed, a new overlay image object is created, and all overlay details are redrawn according to the current settings of the overlay_* properties.

overlay_show_labels_below

degirum.postprocessor.MultiLabelClassificationResults.overlay_show_labels_below

property writable

Specifies whether overlay labels should be drawn below the image or on the image itself.

__str__

degirum.postprocessor.MultiLabelClassificationResults.__str__()

Convert inference results to string

degirum.postprocessor.SegmentationResults

Bases: InferenceResults

InferenceResult class implementation for segmentation results type

image_overlay

degirum.postprocessor.SegmentationResults.image_overlay

property

Image with AI inference results drawn. Image type is defined by the selected graphical backend.

__str__

degirum.postprocessor.SegmentationResults.__str__()

Convert inference results to string

generate_overlay_color(model_params, ...)

degirum.postprocessor.SegmentationResults.generate_overlay_color(model_params, label_dict)

staticmethod

Overlay colors generator.

Returns:

  • list: general overlay color data for segmentation results.

degirum.postprocessor.create_postprocessor(*args, **kwargs)

Create and return a postprocessor object.

For the list of arguments, see the documentation for the constructor of the degirum.postprocessor.InferenceResults class.

Returns:

  • InferenceResults instance corresponding to the model results type.
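
A hedged sketch of calling the factory (raw_results and the identity conversion are placeholders; model_params follows the constructor documentation above):

from degirum.postprocessor import create_postprocessor

pp = create_postprocessor(
    model_params=model.model_info,      # ModelParams from degirum.model.Model.model_info
    inference_results=raw_results,      # placeholder: raw inference results list
    conversion=lambda x, y: (x, y),     # placeholder: identity coordinate conversion
)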

This API Reference is based on PySDK 0.16.1.