Model JSON Structure

This page outlines the model JSON structure and its parameters.

JSON Overview

All models in model zoos are paired with JSON configuration files that describe the model type, its intended function, the runtime environment it is compiled for, and its preprocessing and postprocessing settings.

We can organize these parameters into five sections:

  • General: Basic information to identify the model.
  • DEVICE: Environment the model expects.
  • MODEL_PARAMETERS: Settings for how the model operates.
  • PRE_PROCESS: Preprocessing settings for model inputs.
  • POST_PROCESS: Postprocessing settings for model outputs.

Incorrectly setting these parameters may decrease precision or performance.

Example JSON Configuration

Below is an example JSON configuration for reference. Note that you can include or omit parameters as needed:

{
  "ConfigVersion": <config_version_number>,
  "Checksum": "<checksum>",
  "DEVICE": [
    {
      "DeviceType": "<device_type>",
      "RuntimeAgent": "<runtime_agent>",
      "SupportedDeviceTypes": "<supported_device_types>"
    }
  ],
  "PRE_PROCESS": [
    {
      "InputN": <input_N>,
      "InputH": <input_H>,
      "InputW": <input_W>,
      "InputC": <input_C>,
      "InputQuantEn": <boolean_for_quant_enabled>
    }
  ],
  "MODEL_PARAMETERS": [
    {
      "ModelPath": "<path_to_model>"
    }
  ],
  "POST_PROCESS": [
    {
      "OutputPostprocessType": "<postprocess_type>",
      "OutputNumClasses": <number_of_output_classes>,
      "LabelsPath": "<path_to_labels_json>"
    }
  ]
}

General Parameters

General parameters appear at the top of the JSON file and are present for all models. They specify the configuration file version and the checksum of the model binary.

| Parameter | Type | Mandatory |
| --- | --- | --- |
| ConfigVersion | int | yes |
| Checksum | string | yes |

  • ConfigVersion The version of the JSON configuration file. The current JSON config version is 10. It is verified against the minimum compatible and current framework software versions. If the version is not within the acceptable range, a version-check runtime exception is generated during model loading.

  • Checksum The checksum of the model binary file.
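
For example, the top of a configuration file might look like this (the checksum value is a placeholder, not a real digest):

{
  "ConfigVersion": 10,
  "Checksum": "105b48dfc6ee2dd7e0201112d7bdb7a5cef4a0c4a61a69a2e0799fa44adb66f5"
}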


Device Parameters

Section name in JSON: DEVICE

| Parameter | Type | Mandatory |
| --- | --- | --- |
| DeviceType | string | yes |
| RuntimeAgent | string | yes |
| SupportedDeviceTypes | string | yes |

  • DeviceType The type of device on which the model will run.

  • RuntimeAgent The runtime agent responsible for executing the model.

  • SupportedDeviceTypes Lists the device types that are supported by the model. Refer to the Supported Hardware documentation for details on device compatibility.
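
For illustration, a DEVICE section for a model running on the DeGirum Orca accelerator through the N2X runtime might look like the following sketch (the exact strings accepted for your hardware are listed in the Supported Hardware documentation):

{
  "DEVICE": [
    {
      "DeviceType": "ORCA1",
      "RuntimeAgent": "N2X",
      "SupportedDeviceTypes": "N2X/ORCA1"
    }
  ]
}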


Model Parameters

Section name in JSON: MODEL_PARAMETERS

These parameters control how the model operates.

| Parameter | Type | Mandatory | Models |
| --- | --- | --- | --- |
| ModelPath | string | yes | All |

  • ModelPath The path to a model file.
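
For example, with a hypothetical model file name:

{
  "MODEL_PARAMETERS": [
    {
      "ModelPath": "my_model.n2x"
    }
  ]
}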


Preprocessing Parameters

Section name in JSON: PRE_PROCESS

These parameters define how input data is prepared and transformed before it is fed into the model, ensuring proper formatting and normalization. This section may contain multiple elements (one per input tensor in multi-input networks).

Input Configuration

Fundamental properties of the input data, including its type, dimensions, and layout.

| Parameter | Type | Mandatory | Default | Input Type |
| --- | --- | --- | --- | --- |
| InputN | int | yes | (none) | All |
| InputH | int | yes | (none) | All |
| InputW | int | yes | (none) | All |
| InputC | int | yes | (none) | All |
| InputType | string | No | "Image" | All |
| InputShape | int array | No | (none) | All |
| InputRawDataType | string | No | "DG_UINT8" | All |
| InputTensorLayout | string | No | "NHWC" | Image |

  • InputN The batch size for the input data tensor.

  • InputH The height of the input data tensor.

  • InputW The width of the input data tensor.

  • InputC The number of channels in the input data tensor.

  • InputType The model input type. The dimension order is defined by InputTensorLayout. This can be set to:

    • Image

    • Tensor

  • InputShape The shape of the input data tensor in the format [<N>, <H>, <W>, <C>]. You may specify the shape either with this parameter or with the InputN, InputH, InputW, and InputC parameters (see the example after this list).

  • InputRawDataType The data type of raw binary tensor elements (how the preprocessor treats client data). This is a runtime parameter that can be changed on the fly. This can be set to:

    • DG_UINT8 (unsigned 8-bit integer)

    • DG_FLT (32-bit floating point)

    • DG_INT16 (signed 16-bit integer)

  • InputTensorLayout The dimensional layout of the raw binary tensor for inputs of raw image type and raw tensor type. This can be set to:

    • auto

    • NHWC

    • NCHW
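
For illustration, the same 224x224 RGB image input (values chosen arbitrarily for this sketch) can be described either with the four dimension parameters:

{
  "PRE_PROCESS": [
    {
      "InputType": "Image",
      "InputN": 1,
      "InputH": 224,
      "InputW": 224,
      "InputC": 3,
      "InputTensorLayout": "NHWC"
    }
  ]
}

or equivalently with InputShape:

{
  "PRE_PROCESS": [
    {
      "InputType": "Image",
      "InputShape": [1, 224, 224, 3],
      "InputTensorLayout": "NHWC"
    }
  ]
}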


Image Format & Manipulation

Governs the image input format, color space, resizing, padding, cropping, and slicing operations. These parameters are needed only when InputType is Image.

| Parameter | Type | Mandatory | Default |
| --- | --- | --- | --- |
| ImageBackend | string | No | "auto" |
| InputResizeMethod | string | No | "bilinear" |
| InputPadMethod | string | No | "letterbox" |
| InputCropPercentage | double | No | 1.0 |
| InputImgFmt | string | No | "JPEG" |
| InputColorSpace | string | No | "RGB" |

  • ImageBackend The Python package used for image processing. When this is set to auto, the OpenCV backend will be tried first. This can be set to:

    • auto

    • pil

    • opencv

  • InputResizeMethod The interpolation algorithm used for image resizing. This can be set to:

    • nearest

    • bilinear

    • area

    • bicubic

    • lanczos

  • InputPadMethod Specifies how the input image is padded or cropped during resizing. This can be set to:

    • stretch

    • letterbox

    • crop-first

    • crop-last

  • InputCropPercentage The crop percentage when InputPadMethod is set to crop-first or crop-last.

  • InputImgFmt The image format for image inputs. Data type is defined by InputRawDataType. This can be set to:

    • JPEG

    • RAW

  • InputColorSpace The color space required for image inputs. This can be set to RGB or BGR. If InputImgFmt is JPEG, the preprocessor automatically handles color conversion; if RAW, the raw tensor must be arranged accordingly.
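
For example, a model fed raw BGR frames resized with letterbox padding could declare the following (an illustrative combination, not a requirement):

{
  "PRE_PROCESS": [
    {
      "ImageBackend": "opencv",
      "InputResizeMethod": "bilinear",
      "InputPadMethod": "letterbox",
      "InputImgFmt": "RAW",
      "InputColorSpace": "BGR"
    }
  ]
}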


Normalization

Defines how input data is normalized, including scale factors and per-channel adjustments, to ensure consistency across inputs.

| Parameter | Type | Mandatory | Default | Models |
| --- | --- | --- | --- | --- |
| InputScaleEn | bool | No | false | Image |
| InputScaleCoeff | double | No | 1./255. | Image |
| InputNormMean | float array | No | [] | Image |
| InputNormStd | float array | No | [] | Image |

  • InputScaleEn Specifies whether global data normalization is applied.

  • InputScaleCoeff The scale factor used for global data normalization when InputScaleEn is true.

  • InputNormMean The mean values for per-channel normalization of image inputs (e.g., [0.485, 0.456, 0.406]).

  • InputNormStd The standard deviation values for per-channel normalization of image inputs (e.g., [0.229, 0.224, 0.225]).
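
A typical configuration that scales pixel values to [0, 1] and then applies per-channel normalization might look like this (the mean and standard deviation shown are the widely used ImageNet statistics, given only as an example):

{
  "PRE_PROCESS": [
    {
      "InputScaleEn": true,
      "InputScaleCoeff": 0.00392156862745098,
      "InputNormMean": [0.485, 0.456, 0.406],
      "InputNormStd": [0.229, 0.224, 0.225]
    }
  ]
}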


Quantization

Settings for converting input data into quantized formats to optimize processing efficiency and model performance.

| Parameter | Type | Mandatory | Default | Models |
| --- | --- | --- | --- | --- |
| InputQuantEn | bool | No | false | All |
| InputQuantOffset | float | No | 0 | All |
| InputQuantScale | float | No | 1 | All |

  • InputQuantEn Enables input quantization for image and raw tensor types, determining whether the model input is treated as uint8 or float32.

  • InputQuantOffset The quantization zero offset for image and raw tensor inputs.

  • InputQuantScale The quantization scale. When quantization is enabled, input data is converted to the quantized representation using this scale together with InputQuantOffset.
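
For a quantized model, this section typically carries the scale and zero offset produced by the model compiler, for example (placeholder values):

{
  "PRE_PROCESS": [
    {
      "InputQuantEn": true,
      "InputQuantScale": 0.0078125,
      "InputQuantOffset": 128
    }
  ]
}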


Postprocessing Parameters

Section name in JSON: POST_PROCESS

These parameters transform model outputs into final, interpretable results.

General Behavior

General settings for the output postprocessing algorithm and how output tensors are managed.

| Parameter | Type | Mandatory | Default |
| --- | --- | --- | --- |
| PythonFile | string | No | (none) |
| LabelsPath | string | No | "" |
| OutputPostprocessType | string | No | None |

  • OutputPostprocessType The type of output postprocessing algorithm. This can be set to:

    • Classification

    • Detection

    • DetectionYolo

    • DetectionYoloPlates

    • DetectionYoloV8

    • FaceDetection

    • HandDetection

    • PoseDetection

    • PoseDetectionYoloV8

    • Segmentation

    • SegmentationYoloV8

    • None

  • PythonFile The name of a Python file that contains server-side postprocessing code.

  • LabelsPath The path to a label dictionary file.
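
For example, a YOLOv8 detection model paired with a COCO label file might declare (the label file name is a placeholder):

{
  "POST_PROCESS": [
    {
      "OutputPostprocessType": "DetectionYoloV8",
      "LabelsPath": "labels_coco.json"
    }
  ]
}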


Thresholds & Alignment

Thresholds and alignment adjustments used during postprocessing to filter and refine model outputs.

| Parameter | Type | Mandatory | Default |
| --- | --- | --- | --- |
| OutputConfThreshold | double | No | 0.1 |
| OutputNMSThreshold | double | No | 0.6 |
| OutputClassIDAdjustment | int | No | 0 |

  • OutputConfThreshold The confidence threshold below which results are filtered out.

  • OutputNMSThreshold The threshold for the Non-Max Suppression (NMS) algorithm.

  • OutputClassIDAdjustment The adjustment for the index of the first non-background class.
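
For instance, to report only detections with at least 25% confidence and to use a stricter NMS overlap threshold (values chosen purely for illustration):

{
  "POST_PROCESS": [
    {
      "OutputConfThreshold": 0.25,
      "OutputNMSThreshold": 0.45,
      "OutputClassIDAdjustment": 0
    }
  ]
}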


Classification-Specific

Parameters tailored for classification tasks, such as enabling softmax and selecting the number of top classes.

| Parameter | Type | Mandatory | Default |
| --- | --- | --- | --- |
| OutputSoftmaxEn | bool | No | false |
| OutputTopK | size_t | No | 0 |

  • OutputSoftmaxEn Specifies whether softmax is enabled during post-processing.

  • OutputTopK The number of classes to include in the classification result. If set to zero, all classes above OutputConfThreshold are reported.
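
For example, a classifier whose raw outputs are logits and that should report its five best classes could use (illustrative values):

{
  "POST_PROCESS": [
    {
      "OutputPostprocessType": "Classification",
      "OutputSoftmaxEn": true,
      "OutputTopK": 5
    }
  ]
}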


Detection-Specific

Parameters tailored for object detection, including detection limits, scaling coefficients, and non-max suppression parameters.

| Parameter | Type | Mandatory | Default |
| --- | --- | --- | --- |
| XScale | double | conditional | 1 |
| YScale | double | conditional | 1 |
| HScale | double | conditional | 1 |
| WScale | double | conditional | 1 |
| OutputNumClasses | int | No | (none) |
| MaxDetectionsPerClass | int | No | 100 |
| MaxClassesPerDetection | int | No | 30 |
| UseRegularNMS | bool | No | true |
| MaxDetections | int | No | 20 |
| PoseThreshold | double | No | 0.8 |
| NMSRadius | double | No | 10 |
| Stride | int | No | 16 |

  • XScale The X scale coefficient used to convert box center coordinates to an anchor-based coordinate system.

  • YScale The Y scale coefficient used to convert box center coordinates to an anchor-based coordinate system.

  • HScale The height scale coefficient used to convert box size coordinates to an anchor-based coordinate system.

  • WScale The width scale coefficient used to convert box size coordinates to an anchor-based coordinate system.

  • OutputNumClasses The number of output classes for detection models.

  • MaxDetectionsPerClass The maximum number of object detection results to report per class.

  • MaxClassesPerDetection The maximum number of classes to report for each detection.

  • UseRegularNMS Specifies whether to use a regular (non-batched) NMS algorithm for object detection.

  • MaxDetections The maximum number of object detection results to report.

  • PoseThreshold The pose score threshold below which low-confidence poses are filtered out.

  • NMSRadius The NMS radius for pose detection—a keypoint candidate is rejected if it lies within this pixel range of a previously detected instance.

  • Stride The stride scale coefficient used for pose detection.
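
Putting several of these together, an SSD-style detector with 80 classes might use a section like the following sketch (the scale coefficients and limits are illustrative, not taken from a real model):

{
  "POST_PROCESS": [
    {
      "OutputPostprocessType": "Detection",
      "OutputNumClasses": 80,
      "XScale": 10.0,
      "YScale": 10.0,
      "WScale": 5.0,
      "HScale": 5.0,
      "UseRegularNMS": true,
      "MaxDetections": 20,
      "MaxDetectionsPerClass": 100
    }
  ]
}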
