Model Module
degirum.model.Model
Bases: ABC
Model class. Handles whole inference lifecycle for a single model: input data preprocessing, inference, and postprocessing.
Note
You never construct model objects yourself -- instead you call degirum.zoo_manager.ZooManager.load_model method to create degirum.model.Model instances for you.
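A typical usage sketch (the zoo address and model name below are placeholders, and the connection helper is assumed from the PySDK; the import is deferred so the snippet stays self-contained):

```python
def classify_image(image_path):
    """Hypothetical helper: obtain a Model via ZooManager.load_model and run it.

    Assumes the DeGirum PySDK is installed; the model name is a placeholder,
    not a real zoo entry.
    """
    import degirum as dg  # deferred: requires the DeGirum PySDK

    zoo = dg.connect(dg.LOCAL)              # assumed helper returning a ZooManager
    model = zoo.load_model("mobilenet_v2")  # placeholder model name
    with model:  # Model supports the context manager protocol
        return model.predict(image_path)
```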
degirum.model.Model.custom_postprocessor: Optional[type]
property
writable
Custom postprocessor class. When not None, an object of this class is returned as the inference result. Custom postprocessor classes must inherit from the degirum.postprocessor.InferenceResults class.
degirum.model.Model.device_type: str
property
writable
The type of the device to be used for model inference, in the format <runtime>/<device>.
The setter accepts either a string specifying a single device in the format <runtime>/<device>, or a list of such strings, in which case the first supported device type from the list is selected.
Supported device types can be obtained via the degirum.model.Model.supported_device_types property.
The getter returns the currently selected device type.
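The setter's fallback semantics when given a list can be illustrated with a small pure-Python helper (not part of the SDK; the <runtime>/<device> strings here are illustrative):

```python
def pick_device_type(preferred, supported):
    """Return the first entry of `preferred` that appears in `supported`.

    Mirrors the device_type setter behavior for a list argument; raises if
    none of the preferred types is supported.
    """
    for device in preferred:
        if device in supported:
            return device
    raise ValueError(f"none of {preferred} is in the supported list")

# e.g. prefer a GPU runtime but fall back to CPU:
choice = pick_device_type(
    ["TENSORRT/GPU", "OPENVINO/CPU"],
    ["OPENVINO/CPU", "N2X/CPU"],
)
# choice == "OPENVINO/CPU"
```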
degirum.model.Model.devices_available: list
abstractmethod
property
The list of inference device indices which can be used for model inference.
degirum.model.Model.devices_selected: list
property
writable
The list of inference device indices selected for model inference.
degirum.model.Model.eager_batch_size: int
property
writable
The size of the batch (number of consecutive frames before this model is switched to another model during batch predict) to be used by device scheduler when inferencing this model.
degirum.model.Model.frame_queue_depth: int
property
writable
The depth of the model prediction queue. When the queue size reaches this value, the next prediction call will block until space becomes available in the queue.
degirum.model.Model.image_backend: str
property
writable
Graphical library (backend) to use for graphical tasks -- one of 'pil', 'opencv', or 'auto'.
'auto' means try OpenCV first; if it is not installed, use PIL.
degirum.model.Model.inference_timeout_s: float
property
writable
The maximum time in seconds to wait for inference result from the model.
degirum.model.Model.input_crop_percentage: float
property
writable
Percentage of image to crop around. Valid range: [0..1].
degirum.model.Model.input_image_format: str
property
writable
Defines the image format for model inputs of image type -- one of 'JPEG' or 'RAW'.
degirum.model.Model.input_letterbox_fill_color: tuple
property
writable
Image fill color in case of 'letterbox' padding (see the degirum.model.Model.input_pad_method property for details). 3-element RGB tuple.
degirum.model.Model.input_numpy_colorspace: str
property
writable
Input image colorspace -- one of 'RGB', 'BGR', or 'auto'.
This parameter is used only to identify the colorspace for NumPy arrays.
'auto' translates to 'BGR' for the opencv backend and to 'RGB' for the pil backend.
degirum.model.Model.input_pad_method: str
property
writable
Input image pad method -- one of 'stretch', 'letterbox', 'crop-first', or 'crop-last'.
- In case of 'stretch', the input image is resized to the model input size without preserving aspect ratio.
- In case of 'letterbox', the input image is resized to the model input size preserving aspect ratio; the voids are filled with the solid color specified by the input_letterbox_fill_color property.
- In case of 'crop-first', the input image is cropped to input_crop_percentage around the center and then resized.
- In case of 'crop-last', if the model's input dimensions are square, the image is resized with its smaller side matching the model dimension, preserving aspect ratio. If the dimensions are rectangular, the image is resized and stretched to fit the model's input dimensions. After resizing, the image is cropped to the model's input dimensions and aspect ratio based on the input_crop_percentage property.
In all cases, the degirum.model.Model.input_resize_method property specifies the resizing algorithm.
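The 'letterbox' geometry can be sketched in pure Python (an illustration of the resize-and-pad arithmetic, not SDK code):

```python
def letterbox_geometry(src_wh, dst_wh):
    """Compute the aspect-preserving resize size and the left/top padding that
    centers the resized image inside the model input (the padded 'voids' are
    what input_letterbox_fill_color fills)."""
    sw, sh = src_wh
    dw, dh = dst_wh
    scale = min(dw / sw, dh / sh)          # preserve aspect ratio
    rw, rh = round(sw * scale), round(sh * scale)
    pad_left, pad_top = (dw - rw) // 2, (dh - rh) // 2
    return (rw, rh), (pad_left, pad_top)

# a 640x480 frame into a 224x224 model input:
size, pad = letterbox_geometry((640, 480), (224, 224))
# size == (224, 168), pad == (0, 28)
```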
degirum.model.Model.input_resize_method: str
property
writable
Input image resize method -- one of 'nearest', 'bilinear', 'area', 'bicubic', or 'lanczos'.
degirum.model.Model.input_shape: List[List[int]]
property
writable
Input tensor shapes. List of tensor shapes per input.
Each element of that list is another list containing tensor dimensions, slowest dimension first:
- if InputShape model parameter is specified, its value is used.
- otherwise, InputN/H/W/C model parameters are used as [InputN, InputH, InputW, InputC] list.
degirum.model.Model.label_dictionary: Dict[int, str]
abstractmethod
property
Get model class label dictionary.
Each dictionary element is a key-value pair, where the key is the class ID and the value is the class label string.
degirum.model.Model.measure_time: bool
property
writable
Flag to enable measuring and collecting inference time statistics.
Call degirum.model.Model.time_stats to query accumulated inference time statistics.
degirum.model.Model.model_info
property
Return model information object to provide read-only access to model parameters.
New deep copy is created each time.
degirum.model.Model.non_blocking_batch_predict
property
writable
Flag to control the behavior of the generator object returned by the predict_batch() method.
- When the flag is set to True, the generator accepts None from the inference input data iterator object (passed as the data parameter): if None is returned, the model predict step is skipped for this iteration. Also, when no inference results are available in the result queue at this iteration, the generator yields a None result.
- When the flag is set to False (the default), the generator does not allow None to be returned from the inference input data iterator object: if None is returned, an exception is raised. Also, when no inference results are available in the result queue at this iteration, the generator continues to the next iteration of the input data iterator.
- Setting this flag to True allows using the predict_batch() generator in a non-blocking manner, assuming the input data iterator object is also non-blocking, i.e., it returns None when no data is available instead of waiting for it. Every next-element request from the generator then will not block execution waiting for either input data or inference results, returning None when no results are available.
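A non-blocking input iterator suitable for feeding predict_batch() with non_blocking_batch_predict=True might look like this sketch (pure Python; the class name is hypothetical, not part of the SDK):

```python
import queue


class NonBlockingSource:
    """Hypothetical frame source: yields None when no frame is available
    instead of blocking, so the consuming generator never stalls."""

    def __init__(self):
        self._q = queue.Queue()
        self._done = False

    def push(self, frame):
        self._q.put(frame)

    def close(self):
        self._done = True

    def __iter__(self):
        while not self._done or not self._q.empty():
            try:
                yield self._q.get_nowait()
            except queue.Empty:
                yield None  # no data yet -- lets predict_batch skip this iteration
```

With non_blocking_batch_predict=True, `model.predict_batch(NonBlockingSource())` would then yield None whenever neither input data nor results are ready.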
degirum.model.Model.output_class_set: set
property
writable
Labels filter: a set of class labels/category IDs to be included in inference results.
Note
You can use degirum.model.Model.label_dictionary property to obtain a list of model classes.
degirum.model.Model.output_confidence_threshold: float
property
writable
Confidence threshold used in inference result post-processing. Valid range: [0..1].
Only objects with scores higher than this threshold are reported.
Note
For classification models, if the degirum.model.Model.output_top_k parameter is set to a non-zero value, it supersedes this threshold: the degirum.model.Model.output_top_k highest-scoring classes are always reported.
degirum.model.Model.output_max_classes_per_detection: int
property
writable
Max Classes Per Detection number used in inference result post-processing: specifies the maximum number of highest-probability classes per anchor to be processed during the non-max suppression process for the fast algorithm.
Applicable only for detection models.
degirum.model.Model.output_max_detections: int
property
writable
Max Detections number used in inference result post-processing: specifies the total maximum number of objects to be detected.
Applicable only for detection models.
degirum.model.Model.output_max_detections_per_class: int
property
writable
Max Detections Per Class number used in inference result post-processing: specifies the maximum number of objects to keep during the per-class non-max suppression process for the regular algorithm.
Applicable only for detection models.
degirum.model.Model.output_nms_threshold: float
property
writable
Non-Max Suppression (NMS) threshold used in inference result post-processing. Valid range: [0..1].
Applicable only for models which utilize NMS algorithm.
degirum.model.Model.output_pose_threshold: float
property
writable
Pose detection threshold used in inference result post-processing. Valid range: [0..1].
Applicable only for pose detection models.
degirum.model.Model.output_postprocess_type: str
property
writable
Inference result post-processing type.
You may set it to 'None' to bypass post-processing.
degirum.model.Model.output_top_k: float
property
writable
The number of classes with the highest scores to report for classification models.
When set to 0, all classes with scores greater than degirum.model.Model.output_confidence_threshold are reported.
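The interplay between output_top_k and the confidence threshold can be modeled with a small helper (illustrative pure Python, not SDK code):

```python
def reported_classes(scores, top_k, threshold):
    """scores: dict of class ID -> score.

    top_k > 0: report the top_k highest-scoring classes, ignoring the threshold.
    top_k == 0: report every class whose score exceeds the threshold.
    """
    ranked = sorted(scores, key=scores.get, reverse=True)
    if top_k > 0:
        return ranked[:top_k]
    return [c for c in ranked if scores[c] > threshold]

scores = {0: 0.9, 1: 0.5, 2: 0.1}
# reported_classes(scores, top_k=2, threshold=0.8) == [0, 1]  (threshold ignored)
# reported_classes(scores, top_k=0, threshold=0.4) == [0, 1]
```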
degirum.model.Model.output_use_regular_nms: bool
property
writable
Use Regular NMS value used in inference result post-processing: specifies the algorithm to use for detection post-processing.
If the value is True, the regular non-max suppression algorithm is used: NMS is calculated for each class separately, and afterwards all results are merged.
If the value is False, the fast non-max suppression algorithm is used: NMS is calculated for all classes simultaneously.
degirum.model.Model.overlay_alpha: Union[float, str]
property
writable
Alpha-blend weight for inference results drawing on overlay image: a float number in the range [0..1].
See degirum.postprocessor.InferenceResults.image_overlay for more details.
degirum.model.Model.overlay_color
property
writable
Color for inference results drawing on overlay image: a 3-element RGB tuple or a list of 3-element RGB tuples.
The overlay_color property defines the color used to draw overlay details. In the case of a single RGB tuple, that color is used to draw all the overlay data: points, boxes, labels, segments, etc.
In the case of a list of RGB tuples, the behavior depends on the model type:
- For classification models, different colors from the list are used to draw labels of different classes.
- For detection models, different colors are used to draw labels and boxes of different classes.
- For pose detection models, different colors are used to draw keypoints of different persons.
- For segmentation models, different colors are used to highlight segments of different classes.
If the list size is less than the number of model classes, the overlay_color values are used cyclically; for example, for a three-element list the colors will be overlay_color[0], overlay_color[1], overlay_color[2], and then overlay_color[0] again.
The default value of overlay_color is a single RGB tuple of yellow color for all model types except segmentation models. For segmentation models, it is a list of RGB tuples with the list size equal to the number of model classes; each color is automatically assigned to be visually distinct from the other colors in the list.
You can use the degirum.model.Model.label_dictionary property to obtain the list of model classes.
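The cyclic color assignment reduces to modular indexing; a minimal illustration (not SDK code):

```python
def class_color(overlay_color, class_id):
    """Pick the overlay color for a class, reusing colors cyclically when the
    list is shorter than the number of classes."""
    return overlay_color[class_id % len(overlay_color)]

palette = [(255, 0, 0), (0, 255, 0), (0, 0, 255)]
# class IDs 0..3 map to red, green, blue, then red again:
# [class_color(palette, i) for i in range(4)]
# == [(255, 0, 0), (0, 255, 0), (0, 0, 255), (255, 0, 0)]
```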
degirum.model.Model.overlay_font_scale: float
property
writable
Font scale for inference results drawing on overlay image: a positive float number.
See degirum.postprocessor.InferenceResults.image_overlay for more details.
degirum.model.Model.overlay_line_width: int
property
writable
Line width for inference results drawing on overlay image.
See degirum.postprocessor.InferenceResults.image_overlay for more details.
degirum.model.Model.overlay_show_labels: bool
property
writable
Flag to enable/disable drawing class labels on overlay image.
See degirum.postprocessor.InferenceResults.image_overlay for more details.
degirum.model.Model.overlay_show_probabilities: bool
property
writable
Flag to enable/disable drawing class probabilities on overlay image.
See degirum.postprocessor.InferenceResults.image_overlay for more details.
degirum.model.Model.save_model_image: bool
property
writable
Flag to enable/disable saving of model input image in inference results.
The model input image is the original image converted according to the AI model input specifications, stored as a raw binary array.
degirum.model.Model.supported_device_types: List[str]
property
The list of supported device types for this model, in the format <runtime>/<device>.
degirum.model.Model.__call__(data)
Perform whole inference lifecycle: input data preprocessing, inference and postprocessing.
Same as degirum.model.Model.predict.
degirum.model.Model.__enter__()
Context manager enter handler.
degirum.model.Model.__exit__(exc_type, exc_val, exc_tb)
Context manager exit handler.
degirum.model.Model.__init__(model_name, model_params, supported_device_types)
Constructor.
Note
You never construct model objects yourself -- instead you call degirum.zoo_manager.ZooManager.load_model method to create degirum.model.Model instances for you.
degirum.model.Model.predict(data)
Perform whole inference lifecycle: input data preprocessing, inference, and postprocessing.
Args:
data (any): Inference input data. Input data type depends on the model.
- If the model expects image data, then the input data is either:
- Input image path string.
- NumPy 3D array of pixels in HWC form, where the color dimension is native to the selected graphical backend (RGB for `'pil'` and BGR for `'opencv'` backend).
- `PIL.Image` object (only for `'pil'` backend).
- If the model expects audio data, then the input data is NumPy 1D array with audio data samples.
- If the model expects raw tensor data, then the input data is NumPy multidimensional array with shape matching model input.
- In case of multi-input model a list of elements of the supported data type is expected.
Returns:
| Type | Description |
|---|---|
| InferenceResults | Inference result object, which allows you to access inference results as a dictionary or as an overlay image if it is supported by the model. For your convenience, in the case of detection models all image coordinates are converted from model coordinates to original image coordinates. |
degirum.model.Model.predict_batch(data)
Perform whole inference lifecycle for all objects in the given iterator object (for example, a list).
Such an iterator object should return the same object types which the regular degirum.model.Model.predict method accepts.
Args:
data (iterator): Inference input data iterator object such as list or generator function.
Each element returned by this iterator can be one of the following:
- A single input data object, in case of single-input model.
- A `list` of input data objects, in case of multi-input model.
- A `tuple` containing a pair: an input data object or a `list` of input data objects as the first element,
  and a frame info object as the second element of the `tuple`.
The input data object type depends on the model.
- If the model expects image data, then the input data object is either:
- Input image path string.
- NumPy 3D array of pixels in HWC form, where the color dimension is native to the selected graphical backend (RGB for `'pil'` and BGR for `'opencv'` backend).
- `PIL.Image` object (only for `'pil'` backend).
- If the model expects audio data, then the input data object is NumPy 1D array with audio data samples.
- If the model expects raw tensor data, then the input data object is NumPy multidimensional array with shape
matching model input.
The frame info object is passed to the inference result object unchanged and can be accessed via `info`
property of the inference result object.
Returns:
| Type | Description |
|---|---|
| Iterator[InferenceResults] | Generator object which iterates over inference result objects. This allows you to directly use the result of degirum.model.Model.predict_batch in `for` loops. |
Example:
```python
for result in model.predict_batch(['image1.jpg','image2.jpg']):
print(result)
```
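A sketch of the tuple form described above, where each element carries a frame info object (assumes `model` is a loaded degirum.model.Model; the file names and the frame-info keys are placeholders):

```python
def predict_with_info(model, paths):
    """Feed (data, frame_info) pairs to predict_batch() and collect the
    frame info echoed back on each result's `info` property."""
    frames = [(p, {"frame_index": i}) for i, p in enumerate(paths)]
    return [(result.info, result) for result in model.predict_batch(frames)]
```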
degirum.model.Model.predict_dir(path, *, recursive=False, extensions=['.jpg', '.jpeg', '.png', '.bmp'])
Perform whole inference lifecycle for all files from specified directory matching given file extensions.
Supports only single-input models.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| path | str | Directory name containing files to be processed. | required |
| recursive | bool | True to recursively walk through all subdirectories in a directory. | False |
| extensions | list[str] | Single string or list of strings containing file extension(s) to process. | ['.jpg', '.jpeg', '.png', '.bmp'] |
Returns:
| Type | Description |
|---|---|
| Iterator[InferenceResults] | Generator object to iterate over inference result objects. This allows you to directly use the result of degirum.model.Model.predict_dir in `for` loops. |
Example:
```python
for result in model.predict_dir('./some_path'):
print(result)
```
degirum.model.Model.reset_time_stats()
Reset inference time statistics.
The degirum.model.Model.time_stats method will return an empty dictionary after this call.
degirum.model.Model.time_stats()
Query inference time statistics.
Returns:
| Type | Description |
|---|---|
| dict | Dictionary containing time statistic objects. |
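Putting measure_time, time_stats(), and reset_time_stats() together (a usage sketch; assumes `model` is a loaded degirum.model.Model and `inputs` is any iterable predict_batch accepts):

```python
def profile_inference(model, inputs):
    """Run a batch with timing enabled and return the collected statistics."""
    model.measure_time = True          # start collecting timing data
    for _ in model.predict_batch(inputs):
        pass                           # drain the generator to run all frames
    stats = model.time_stats()         # dict of time statistic objects
    model.reset_time_stats()           # clear accumulators for the next run
    return stats
```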