# Compound Models

{% hint style="info" %}
This API Reference is based on DeGirum Tools version 1.2.0.
{% endhint %}

## Compound Models Module Overview <a href="#compound-models-module-overview" id="compound-models-module-overview"></a>

This module provides a toolkit for creating compound models using the DeGirum PySDK.

A compound model orchestrates multiple underlying models into a pipeline to enable complex inference scenarios. Common examples include:

* Detecting objects and then classifying each detected object.
* Running coarse detection first, then applying a refined detection model on specific regions.
* Combining outputs from multiple independent models into a unified inference result.

Compound models run in a single thread and are intended primarily for simple usage scenarios. They still provide efficient batch prediction pipelining via `predict_batch()` in non-blocking mode. For more demanding applications that require better scalability and more flexible connections, we recommend using Gizmos, which run in multiple threads.

### Key Concepts <a href="#key-concepts" id="key-concepts"></a>

* **Model Composition**: Compound models sequentially (or concurrently) invoke multiple models. Typically, results from the first model (e.g., bounding boxes from detection) feed into subsequent models (classification or refined detection).
* **Pipeline Workflow**: A typical workflow involves:
  1. Run `model1` to identify regions of interest (ROIs).
  2. Crop these ROIs and run them through `model2`.
  3. Integrate or transform outputs from `model2` back into the original context.
* **Unified Model Interface**: All compound models follow the same interface as regular models in DeGirum SDK, providing `.predict()` for single frames and `.predict_batch()` for iterators of frames.

### Included Compound Models <a href="#included-compound-models" id="included-compound-models"></a>

* **CombiningCompoundModel**: Combines results from two models run concurrently on the same input.
* **CroppingCompoundModel**: Crops regions identified by `model1` and feeds them into `model2`.
* **CroppingAndClassifyingCompoundModel**: Specialized pipeline: object detection (`model1`) followed by classification (`model2`) of each detected object.
* **CroppingAndDetectingCompoundModel**: Pipeline for refined detection: initial coarse detection (`model1`) followed by detailed detection (`model2`) within each ROI.
* **RegionExtractionPseudoModel**: Extracts predefined regions of interest without actual inference, optionally filtering by motion detection.

### Basic Usage Examples <a href="#basic-usage-examples" id="basic-usage-examples"></a>

**Detection + Classification**:

{% code overflow="wrap" %}

```python
from degirum_tools import ModelSpec, remote_assets
from degirum_tools.compound_models import CroppingAndClassifyingCompoundModel

# Describe the individual models once
detector_spec = ModelSpec(
    model_name="<your_detection_model>",
    inference_host_address="@cloud",  # Can be '@cloud', host:port, or '@local'
    zoo_url="degirum/degirum",
)

classifier_spec = ModelSpec(
    model_name="<your_classification_model>",
    inference_host_address="@cloud",
    zoo_url="degirum/degirum",
)

with detector_spec.load_model() as detector, classifier_spec.load_model() as classifier:
    # Creating a compound model pipeline
    compound_model = CroppingAndClassifyingCompoundModel(detector, classifier)

    # Single frame inference using predict()
    print("Using predict():")
    single_result = compound_model(remote_assets.cat)
    print(single_result)

    # Batch inference using predict_batch()
    print("Using predict_batch():")
    for batch_result in compound_model.predict_batch(
        [remote_assets.cat, remote_assets.two_cats]
    ):
        print(batch_result)
```

{% endcode %}

**Detection + Detection**:

{% code overflow="wrap" %}

```python
from degirum_tools import ModelSpec, remote_assets
from degirum_tools.compound_models import CombiningCompoundModel

# Describe the detectors up front
detector1_spec = ModelSpec(
    model_name="<your_first_detection_model>",
    inference_host_address="@cloud",  # Can be '@cloud', host:port, or '@local'
    zoo_url="degirum/degirum",
)

detector2_spec = ModelSpec(
    model_name="<your_second_detection_model>",
    inference_host_address="@cloud",
    zoo_url="degirum/degirum",
)

with detector1_spec.load_model() as detector1, detector2_spec.load_model() as detector2:
    # Creating a compound model that merges results from both detectors
    compound_detector = CombiningCompoundModel(detector1, detector2)

    # Single frame inference using predict()
    print("Using predict():")
    single_result = compound_detector(remote_assets.cat)
    print(single_result.results)

    # Batch inference using predict_batch()
    print("Using predict_batch():")
    for batch_result in compound_detector.predict_batch(
        [remote_assets.cat, remote_assets.two_cats]
    ):
        print(batch_result.results)
```

{% endcode %}

See class-level documentation below for details on individual classes and additional configuration options.

## Classes <a href="#classes" id="classes"></a>

## ModelLike <a href="#modellike" id="modellike"></a>

`ModelLike`

Bases: `ABC`

A base class which provides a common interface for all models, similar to the PySDK model class.

When calling `predict_batch(data)`, each item in `data` can be:

* A single frame (image/array/etc.), or
* A 2-element tuple in the form `(frame, frame_info)`.

The `frame_info` object (of any type) then appears in the final `InferenceResults.info` attribute, allowing you to carry custom metadata through the pipeline.
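The metadata pass-through can be sketched in plain Python. This is an illustrative stand-in, not SDK code: `FakeResult` plays the role of `InferenceResults`, and the fake `predict_batch()` only shows how `(frame, frame_info)` tuples are unpacked and how `frame_info` lands in the result's `info` attribute.

```python
from dataclasses import dataclass
from typing import Any, Iterator


@dataclass
class FakeResult:
    """Stand-in for InferenceResults: holds the frame and its metadata."""
    frame: Any
    info: Any = None


def predict_batch(data) -> Iterator[FakeResult]:
    """Accept plain frames or (frame, frame_info) tuples, as ModelLike does."""
    for item in data:
        if isinstance(item, tuple) and len(item) == 2:
            frame, frame_info = item
        else:
            frame, frame_info = item, None
        # A real model would run inference here; we just wrap the frame.
        yield FakeResult(frame=frame, info=frame_info)


results = list(predict_batch(["img0", ("img1", {"camera": "east"})]))
```

Plain frames produce results with `info` set to `None`; tuples carry their metadata through unchanged.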

### ModelLike Methods <a href="#modellike-methods" id="modellike-methods"></a>

#### \_\_call\_\_(data) <a href="#call" id="call"></a>

`__call__(data)`

Perform a whole inference lifecycle on a single frame (callable alias to `predict()`).

Parameters:

| Name   | Type  | Description                                                                          | Default    |
| ------ | ----- | ------------------------------------------------------------------------------------ | ---------- |
| `data` | `any` | Inference input data, typically an image or array, or a tuple `(frame, frame_info)`. | *required* |

Returns:

| Type                       | Description                                                 |
| -------------------------- | ----------------------------------------------------------- |
| `InferenceResults or None` | The combined inference result object, or None if no result. |

#### predict(data) <a href="#predict" id="predict"></a>

`predict(data)`

Perform a whole inference lifecycle on a single frame.

Parameters:

| Name   | Type  | Description                                                                          | Default    |
| ------ | ----- | ------------------------------------------------------------------------------------ | ---------- |
| `data` | `any` | Inference input data, typically an image or array, or a tuple `(frame, frame_info)`. | *required* |

Returns:

| Type                       | Description                                                 |
| -------------------------- | ----------------------------------------------------------- |
| `InferenceResults or None` | The combined inference result object, or None if no result. |

#### predict\_batch(data) <a href="#predict_batch" id="predict_batch"></a>

`predict_batch(data)`

`abstractmethod`

Perform a whole inference lifecycle for all objects in the given iterator object (for example, `list`).

Each item in `data` can be a single frame (any type acceptable to the model) or a 2-element tuple `(frame, frame_info)`. In the latter case, `frame_info` is carried through and placed in `InferenceResults.info` for that frame.

Parameters:

| Name   | Type       | Description                                                                                                                                                                     | Default    |
| ------ | ---------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ---------- |
| `data` | `iterator` | Inference input data iterator object such as a list or a generator function. Each element returned by this iterator should be compatible with what regular PySDK models accept. | *required* |

Returns:

| Type                                 | Description                                                                                                                                 |
| ------------------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------- |
| `Iterator[InferenceResults or None]` | A generator or iterator over the inference result objects (or None in non-blocking mode). This allows you to use the result in `for` loops. |

## FrameInfo <a href="#frameinfo" id="frameinfo"></a>

`FrameInfo`

Class to hold frame info.

By default, DeGirum PySDK allows you to pass any arbitrary object as 'frame info' alongside each frame in `predict_batch()`.

Attributes:

| Name         | Type  | Description                                                                                                                                                                                                                     |
| ------------ | ----- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `result1`    | `any` | The result object produced by the first model in a compound pipeline. For instance, an [InferenceResults](https://docs.degirum.com/pysdk/user-guide-pysdk/api-ref/postprocessor#degirum.postprocessor.inferenceresults) object. |
| `sub_result` | `int` | The index of a sub-result within `result1` (e.g., which bounding box led to this cropped image).                                                                                                                                |
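A minimal sketch of how these two attributes work together (an illustrative dataclass, not the SDK class): one `FrameInfo` is created per detection, so each cropped frame can be traced back to the bounding box that produced it.

```python
from dataclasses import dataclass
from typing import Any


@dataclass
class FrameInfo:
    """Mirrors the documented attributes (illustrative only)."""
    result1: Any      # full result object from model1
    sub_result: int   # index of the bbox within result1 that produced the crop


detections = [{"bbox": [0, 0, 10, 10]}, {"bbox": [5, 5, 20, 20]}]
infos = [FrameInfo(result1=detections, sub_result=i) for i in range(len(detections))]
```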

## CompoundModelBase <a href="#compoundmodelbase" id="compoundmodelbase"></a>

`CompoundModelBase`

Bases: `ModelLike`

Compound model class which combines two models into one pipeline.

One model is considered *primary* (model1), and the other is *nested* (model2).

The primary model (`model1`) processes the input frames. Its results are then passed to the nested model (`model2`).

### Attributes <a href="#attributes" id="attributes"></a>

#### non\_blocking\_batch\_predict <a href="#non_blocking_batch_predict" id="non_blocking_batch_predict"></a>

`non_blocking_batch_predict`

`property` `writable`

Flag controlling whether `predict_batch()` operates in non-blocking mode for model1. In non-blocking mode, `predict_batch()` can yield `None` when no results are immediately available.

Returns:

| Type   | Description                                            |
| ------ | ------------------------------------------------------ |
| `bool` | True if non-blocking mode is enabled, False otherwise. |
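In non-blocking mode, consumers must be prepared to receive `None` placeholders. A minimal consumption pattern (pure Python, with a hand-built list standing in for the result stream):

```python
def consume(results, on_result):
    """Handle a possibly non-blocking result stream, skipping None placeholders."""
    handled = 0
    for r in results:
        if r is None:
            continue  # no result ready yet in non-blocking mode
        on_result(r)
        handled += 1
    return handled


stream = [None, "res0", None, None, "res1"]  # simulated non-blocking output
collected = []
n = consume(stream, collected.append)
```

In a real application, the `None` gaps are an opportunity to do other work (for example, capture the next camera frame) instead of blocking.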

### CompoundModelBase Methods <a href="#compoundmodelbase-methods" id="compoundmodelbase-methods"></a>

#### \_\_getattr\_\_(attr) <a href="#getattr" id="getattr"></a>

`__getattr__(attr)`

Delegates reads of model-like attributes to the primary model (`model1`) when they are not found on the compound model itself.

#### \_\_init\_\_(model1, ...) <a href="#init" id="init"></a>

`__init__(model1, model2)`

Constructor.

Parameters:

| Name     | Type        | Description                                           | Default    |
| -------- | ----------- | ----------------------------------------------------- | ---------- |
| `model1` | `ModelLike` | Model to be used for the first step of the pipeline.  | *required* |
| `model2` | `ModelLike` | Model to be used for the second step of the pipeline. | *required* |

#### \_\_setattr\_\_(key, ...) <a href="#setattr" id="setattr"></a>

`__setattr__(key, value)`

Intercepts attempts to set attributes. If the attribute already exists on the instance, the class, or is being set inside `__init__`, the attribute is set normally. Otherwise, the attribute assignment is delegated to the primary model (`model1`) if defined. This prevents adding new attributes outside of `__init__`.
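This delegation pattern can be sketched in a few lines (an illustrative simplification, not the actual implementation): known attributes are set normally, while unknown ones are forwarded to `model1`, so tuning a property like a confidence threshold on the compound model transparently tunes the primary model.

```python
class Model:
    """Stand-in for a PySDK model with one tunable property."""
    def __init__(self):
        self.confidence_threshold = 0.5


class Compound:
    def __init__(self, model1):
        # object.__setattr__ bypasses the interceptor during construction
        object.__setattr__(self, "model1", model1)

    def __setattr__(self, key, value):
        if key in self.__dict__ or hasattr(type(self), key):
            object.__setattr__(self, key, value)  # known attribute: set normally
        else:
            setattr(self.model1, key, value)  # delegate to the primary model

    def __getattr__(self, attr):
        return getattr(self.model1, attr)  # reads fall back to model1 too


m = Model()
c = Compound(m)
c.confidence_threshold = 0.9  # delegated: actually sets it on m
```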

#### attach\_analyzers(analyzers) <a href="#attach_analyzers" id="attach_analyzers"></a>

`attach_analyzers(analyzers)`

Attach analyzers to a model.

Parameters:

| Name        | Type                                                        | Description                                                                          | Default    |
| ----------- | ----------------------------------------------------------- | ------------------------------------------------------------------------------------ | ---------- |
| `analyzers` | `Union[ResultAnalyzerBase, list[ResultAnalyzerBase], None]` | A single analyzer, or a list of analyzer objects, or `None` to detach all analyzers. | *required* |

#### predict\_batch(data) <a href="#predict_batch" id="predict_batch"></a>

`predict_batch(data)`

Perform a whole inference lifecycle for all objects in the given iterator object (for example, `list`).

Works in a pipeline fashion:

1. Pass input frames (or `(frame, frame_info)` tuples) to `model1`.
2. Use `queue_result1(result1)` to feed `model2`.
3. Collect `model2` results and transform them with `transform_result2(result2)`.
4. Yield the final output.

Parameters:

| Name   | Type       | Description                                                                                                                                                | Default    |
| ------ | ---------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------- | ---------- |
| `data` | `iterator` | Inference input data iterator object such as a list or a generator function. Each element returned should be compatible with model inference requirements. | *required* |

Returns:

| Type                                 | Description                                                                                                                                                  |
| ------------------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| `Iterator[InferenceResults or None]` | Generator object which iterates over the combined inference result objects (or None in non-blocking mode). This allows you to use the result in `for` loops. |
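The pipeline steps above can be sketched as a toy generator. Everything here is illustrative: `model1`/`model2` are string-mangling stand-ins for real models, and the queue is a plain list rather than the internal `NonBlockingQueue`.

```python
def model1(frame):
    return f"roi({frame})"      # pretend detection result


def model2(roi):
    return f"label({roi})"      # pretend classification result


def queue_result1(result1, queue):
    queue.append(result1)       # step 2: feed model2


def transform_result2(result2):
    return result2.upper()      # step 3: integrate/transform


def predict_batch(frames):
    queue = []
    for frame in frames:
        queue_result1(model1(frame), queue)  # steps 1-2
        result2 = model2(queue.pop(0))
        out = transform_result2(result2)     # step 3
        if out is not None:
            yield out                        # step 4


outputs = list(predict_batch(["a", "b"]))
```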

#### queue\_result1(result1) <a href="#queue_result1" id="queue_result1"></a>

`queue_result1(result1)`

`abstractmethod`

Process the result of the first model and put it into the queue.

Parameters:

| Name      | Type                                                                                                                             | Description                           | Default    |
| --------- | -------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------- | ---------- |
| `result1` | [InferenceResults](https://docs.degirum.com/pysdk/user-guide-pysdk/api-ref/postprocessor#degirum.postprocessor.inferenceresults) | Prediction result of the first model. | *required* |

#### transform\_result2(result2) <a href="#transform_result2" id="transform_result2"></a>

`transform_result2(result2)`

`abstractmethod`

Transform (or integrate) the result of the second model.

Parameters:

| Name      | Type                                                                                                                             | Description                            | Default    |
| --------- | -------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------- | ---------- |
| `result2` | [InferenceResults](https://docs.degirum.com/pysdk/user-guide-pysdk/api-ref/postprocessor#degirum.postprocessor.inferenceresults) | Prediction result of the second model. | *required* |

Returns:

| Type                       | Description                                                                                                                    |
| -------------------------- | ------------------------------------------------------------------------------------------------------------------------------ |
| `InferenceResults or None` | Transformed/combined result to be returned by the compound model. If None, that means no result is produced at this iteration. |

## Nested Classes <a href="#nested-classes" id="nested-classes"></a>

### NonBlockingQueue <a href="#nonblockingqueue" id="nonblockingqueue"></a>

`NonBlockingQueue`

Bases: `Queue`

Specialized non-blocking queue which acts as an iterator to feed data to the nested model.

### NonBlockingQueue Methods <a href="#nonblockingqueue-methods" id="nonblockingqueue-methods"></a>

#### \_\_iter\_\_ <a href="#iter" id="iter"></a>

`__iter__()`

Yield items from the queue until a `None` sentinel is reached.

Yields:

| Type          | Description                                               |
| ------------- | --------------------------------------------------------- |
| `any or None` | The item from the queue, or `None` if the queue is empty. |
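A minimal sketch of this behavior, assuming the standard-library `queue.Queue` base class (illustrative, not the SDK implementation): the iterator yields `None` instead of blocking when the queue is empty, and stops at a `None` sentinel.

```python
from queue import Empty, Queue


class NonBlockingQueue(Queue):
    """Illustrative: iterate until a None sentinel, yielding None when empty."""
    def __iter__(self):
        while True:
            try:
                item = self.get_nowait()
            except Empty:
                yield None   # queue empty: yield None instead of blocking
                continue
            if item is None:
                break        # sentinel: end of stream
            yield item


q = NonBlockingQueue()
for x in ("a", "b", None):   # two items followed by the end-of-stream sentinel
    q.put(x)
drained = [item for item in q]
```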

## CombiningCompoundModel <a href="#combiningcompoundmodel" id="combiningcompoundmodel"></a>

`CombiningCompoundModel`

Bases: `CompoundModelBase`

Compound model class which executes two models in parallel on the same input data and merges their results.

Restriction: both models should produce the same type of inference results (e.g., both detection).

### CombiningCompoundModel Methods <a href="#combiningcompoundmodel-methods" id="combiningcompoundmodel-methods"></a>

#### queue\_result1(result1) <a href="#queue_result1" id="queue_result1"></a>

`queue_result1(result1)`

Queues the original image from `result1` and a new `FrameInfo` instance that references `result1`. This `(frame, frame_info)` tuple is then read by `model2`.

Parameters:

| Name      | Type                                                                                                                             | Description                                                                                                                                           | Default    |
| --------- | -------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------- | ---------- |
| `result1` | [InferenceResults](https://docs.degirum.com/pysdk/user-guide-pysdk/api-ref/postprocessor#degirum.postprocessor.inferenceresults) | Inference result from model1. We extract `result1.image` as the frame, and create a `FrameInfo` so we know which `result1` this frame corresponds to. | *required* |

#### transform\_result2(result2) <a href="#transform_result2" id="transform_result2"></a>

`transform_result2(result2)`

Merges results from `model2` into `result1` that was stored in `FrameInfo`.

This implementation appends the second model's inference results to the first model's result list.

Parameters:

| Name      | Type                                                                                                                             | Description                                                                                  | Default    |
| --------- | -------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------- | ---------- |
| `result2` | [InferenceResults](https://docs.degirum.com/pysdk/user-guide-pysdk/api-ref/postprocessor#degirum.postprocessor.inferenceresults) | Inference result of the second model, which has `info` attribute containing the `FrameInfo`. | *required* |

Returns:

| Type                                                                                                                             | Description                                     |
| -------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------- |
| [InferenceResults](https://docs.degirum.com/pysdk/user-guide-pysdk/api-ref/postprocessor#degirum.postprocessor.inferenceresults) | The merged inference results (model1 + model2). |
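The merge itself reduces to appending one result list to another. A sketch with plain dicts standing in for `InferenceResults` objects (the `"results"` key mirrors the SDK's result-list attribute):

```python
def merge_results(result1, result2):
    """Append model2's detections to model1's result list (illustrative)."""
    result1["results"].extend(result2["results"])
    return result1


r1 = {"results": [{"label": "cat", "bbox": [0, 0, 10, 10]}]}
r2 = {"results": [{"label": "dog", "bbox": [20, 20, 40, 40]}]}
merged = merge_results(r1, r2)
```

This is why both models must produce the same result type: the merged list is only meaningful if its entries share a schema.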

## CroppingCompoundModel <a href="#croppingcompoundmodel" id="croppingcompoundmodel"></a>

`CroppingCompoundModel`

Bases: `CompoundModelBase`

Compound model class which crops the original image according to results of the first model and then passes these cropped images to the second model.

Restriction: the first model should be of object detection type.

### CroppingCompoundModel Methods <a href="#croppingcompoundmodel-methods" id="croppingcompoundmodel-methods"></a>

#### \_\_init\_\_(model1, ...) <a href="#init" id="init"></a>

`__init__(model1, model2, crop_extent=0.0, crop_extent_option=CropExtentOptions.ASPECT_RATIO_NO_ADJUSTMENT)`

Constructor.

Parameters:

| Name                 | Type                | Description                                                      | Default                      |
| -------------------- | ------------------- | ---------------------------------------------------------------- | ---------------------------- |
| `model1`             | `ModelLike`         | Object detection model that produces bounding boxes.             | *required*                   |
| `model2`             | `ModelLike`         | Classification model that will process each cropped region.      | *required*                   |
| `crop_extent`        | `float`             | Extent of cropping (in percent of bbox size) to expand the bbox. | `0.0`                        |
| `crop_extent_option` | `CropExtentOptions` | Method of applying extended crop to the input image for model2.  | `ASPECT_RATIO_NO_ADJUSTMENT` |
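The effect of `crop_extent` can be sketched as simple bbox arithmetic. This is a hypothetical helper, and the exact split of the extent across sides is an assumption; it only illustrates the idea of growing the box by a percentage of its size and clipping to the image.

```python
def expand_bbox(bbox, crop_extent, img_w, img_h):
    """Expand [x1, y1, x2, y2] by crop_extent percent of bbox size (assumed
    split evenly per side), clipped to the image bounds. Illustrative only."""
    x1, y1, x2, y2 = bbox
    dx = (x2 - x1) * crop_extent / 100.0 / 2.0
    dy = (y2 - y1) * crop_extent / 100.0 / 2.0
    return [
        max(0.0, x1 - dx),
        max(0.0, y1 - dy),
        min(float(img_w), x2 + dx),
        min(float(img_h), y2 + dy),
    ]


# A 100x100 bbox grown by 20% gains 10 px on each side
expanded = expand_bbox([100, 100, 200, 200], crop_extent=20.0, img_w=640, img_h=480)
```

A modest positive `crop_extent` often helps classification by giving `model2` some surrounding context for each detected object.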

#### queue\_result1(result1) <a href="#queue_result1" id="queue_result1"></a>

`queue_result1(result1)`

Put the original image into the queue, along with bounding boxes from the first model.

If no bounding boxes are detected, puts a small black image to keep the pipeline in sync.

Parameters:

| Name      | Type                                                                                                                             | Description                                              | Default    |
| --------- | -------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------- | ---------- |
| `result1` | [InferenceResults](https://docs.degirum.com/pysdk/user-guide-pysdk/api-ref/postprocessor#degirum.postprocessor.inferenceresults) | Prediction result of the first (object detection) model. | *required* |

## CroppingAndClassifyingCompoundModel <a href="#croppingandclassifyingcompoundmodel" id="croppingandclassifyingcompoundmodel"></a>

`CroppingAndClassifyingCompoundModel`

Bases: `CroppingCompoundModel`

Compound model class which

1. Runs an object detection (model1) to generate bounding boxes.
2. Crops each bounding box from the original image.
3. Runs a classification (model2) on each cropped image.
4. Patches the original detection results with the classification labels.

Restriction: first model must be object detection, second model must be classification.
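The patching step can be sketched with plain dicts standing in for detection and classification entries: each bounding box keeps its geometry, while its label and score are replaced by the classifier's top result for that crop.

```python
def patch_labels(detections, classifications):
    """Replace each detection's label/score with the classifier's top result
    (illustrative; assumes one classification per detection, in order)."""
    for det, cls in zip(detections, classifications):
        det["label"] = cls["label"]
        det["score"] = cls["score"]
    return detections


dets = [{"bbox": [0, 0, 10, 10], "label": "object", "score": 0.9}]
clss = [{"label": "tabby_cat", "score": 0.87}]
patched = patch_labels(dets, clss)
```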

### CroppingAndClassifyingCompoundModel Methods <a href="#croppingandclassifyingcompoundmodel-methods" id="croppingandclassifyingcompoundmodel-methods"></a>

#### \_\_init\_\_(model1, ...) <a href="#init" id="init"></a>

`__init__(model1, model2, crop_extent=0.0, crop_extent_option=CropExtentOptions.ASPECT_RATIO_NO_ADJUSTMENT)`

Constructor.

Parameters:

| Name                 | Type                | Description                                               | Default                      |
| -------------------- | ------------------- | --------------------------------------------------------- | ---------------------------- |
| `model1`             | `ModelLike`         | An object detection model producing bounding boxes.       | *required*                   |
| `model2`             | `ModelLike`         | A classification model to classify each cropped region.   | *required*                   |
| `crop_extent`        | `float`             | Extent of cropping (in percent of bbox size).             | `0.0`                        |
| `crop_extent_option` | `CropExtentOptions` | Specifies how to adjust the bounding box before cropping. | `ASPECT_RATIO_NO_ADJUSTMENT` |

#### predict\_batch(data) <a href="#predict_batch" id="predict_batch"></a>

`predict_batch(data)`

Perform the full inference lifecycle for all objects in the given iterator (for example, `list`), but patch model1 bounding box labels with classification results from model2.

Parameters:

| Name   | Type       | Description                                                                                                                 | Default    |
| ------ | ---------- | --------------------------------------------------------------------------------------------------------------------------- | ---------- |
| `data` | `iterator` | Iterator of input frames for model1. Each element returned by this iterator should be compatible with regular PySDK models. | *required* |

Returns:

| Type                         | Description                                                                                 |
| ---------------------------- | ------------------------------------------------------------------------------------------- |
| `Iterator[InferenceResults]` | Yields the detection results with patched classification labels after each frame completes. |

#### transform\_result2(result2) <a href="#transform_result2" id="transform_result2"></a>

`transform_result2(result2)`

Transform (patch) the classification result into the original detection results.

Parameters:

| Name      | Type                                                                                                                             | Description                      | Default    |
| --------- | -------------------------------------------------------------------------------------------------------------------------------- | -------------------------------- | ---------- |
| `result2` | [InferenceResults](https://docs.degirum.com/pysdk/user-guide-pysdk/api-ref/postprocessor#degirum.postprocessor.inferenceresults) | Classification result of model2. | *required* |

Returns:

| Type                       | Description                                                                                                            |
| -------------------------- | ---------------------------------------------------------------------------------------------------------------------- |
| `InferenceResults or None` | The detection result (from model1) patched with classification labels, or None if we haven't moved to a new frame yet. |

## CroppingAndDetectingCompoundModel <a href="#croppinganddetectingcompoundmodel" id="croppinganddetectingcompoundmodel"></a>

`CroppingAndDetectingCompoundModel`

Bases: `CroppingCompoundModel`

Compound model class which

1. Uses an object detection model (model1) to generate bounding boxes (ROIs).
2. Crops each bounding box from the original image.
3. Uses another object detection model (model2) to further detect objects in each cropped region.
4. Combines the results of the second model from all cropped regions, mapping coordinates back to the original image.

Optionally, you can add model1 detections to the final result and/or apply NMS.

When model1 results are added, each detection from model2 will have a `crop_index` field, indicating which bounding box from model1 it corresponds to.

Restriction: the first model should be an object detection or pseudo-detection model like `RegionExtractionPseudoModel`; the second model should be an object detection model.

### CroppingAndDetectingCompoundModel Methods <a href="#croppinganddetectingcompoundmodel-methods" id="croppinganddetectingcompoundmodel-methods"></a>

#### \_\_init\_\_(model1, ...) <a href="#init" id="init"></a>

`__init__(model1, model2, *, crop_extent=0.0, crop_extent_option=CropExtentOptions.ASPECT_RATIO_NO_ADJUSTMENT, add_model1_results=False, nms_options=None)`

Constructor.

Parameters:

| Name                 | Type                   | Description                                                                                                                                                                               | Default                      |
| -------------------- | ---------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ---------------------------- |
| `model1`             | `ModelLike`            | Object detection model (or pseudo-detection).                                                                                                                                             | *required*                   |
| `model2`             | `ModelLike`            | Object detection model.                                                                                                                                                                   | *required*                   |
| `crop_extent`        | `float`                | Extent of cropping in percent of bbox size.                                                                                                                                               | `0.0`                        |
| `crop_extent_option` | `CropExtentOptions`    | Method of applying extended crop to the input image for model2.                                                                                                                           | `ASPECT_RATIO_NO_ADJUSTMENT` |
| `add_model1_results` | `bool`                 | If True, merges model1 detections into the final combined result. Each detection from model2 will have a `crop_index` field, indicating which bounding box from model1 it corresponds to. | `False`                      |
| `nms_options`        | `Optional[NmsOptions]` | If provided, applies non-maximum suppression (NMS) to the combined result.                                                                                                                | `None`                       |

#### predict\_batch(data) <a href="#predict_batch" id="predict_batch"></a>

`predict_batch(data)`

Perform the full inference lifecycle for all objects in the given iterator object (for example, `list`):

1. model1 detects or extracts bounding boxes (ROIs).
2. Each ROI is passed to model2 for detection.
3. model2 results for each ROI are merged and mapped back to original coordinates.
4. (Optional) NMS is applied and results from model1 can be included.

Parameters:

| Name   | Type       | Description                                                                                                                 | Default    |
| ------ | ---------- | --------------------------------------------------------------------------------------------------------------------------- | ---------- |
| `data` | `iterator` | Iterator of input frames for model1. Each element returned by this iterator should be compatible with regular PySDK models. | *required* |

Returns:

| Type                         | Description                                                                                                                               |
| ---------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------- |
| `Iterator[InferenceResults]` | Generator object which iterates over final detection results with possibly merged bounding boxes, adjusted to original image coordinates. |
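Because `predict_batch()` returns a generator, results are produced lazily as frames are consumed, one result per input frame and in input order. The consumption pattern looks like this (shown with a stand-in class, since constructing a real compound model requires loaded PySDK models):

```python
class StandInCompoundModel:
    """Stand-in mimicking the predict_batch() generator interface."""

    def predict_batch(self, data):
        for frame in data:
            # A real compound model would run the model1 -> crop -> model2
            # pipeline here; we yield a placeholder result per frame.
            yield {"frame": frame, "results": []}

model = StandInCompoundModel()
frames = ["frame0.jpg", "frame1.jpg"]
for result in model.predict_batch(frames):
    print(result["frame"])  # one result per input frame, in order
```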

#### transform\_result2(result2) <a href="#transform_result2" id="transform_result2"></a>

`transform_result2(result2)`

Combine detection results from model2 for each bbox from model1, translating coordinates back to the original image space.

Parameters:

| Name      | Type                                                                                                                             | Description                           | Default    |
| --------- | -------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------- | ---------- |
| `result2` | [InferenceResults](https://docs.degirum.com/pysdk/user-guide-pysdk/api-ref/postprocessor#degirum.postprocessor.inferenceresults) | Detection result of the second model. | *required* |

Returns:

| Type                       | Description                                                                                   |
| -------------------------- | --------------------------------------------------------------------------------------------- |
| `InferenceResults or None` | The final detection results for the previous frame if a new frame started, or None otherwise. |

## RegionExtractionPseudoModel <a href="#regionextractionpseudomodel" id="regionextractionpseudomodel"></a>

`RegionExtractionPseudoModel`

Bases: `ModelLike`

Pseudo-model class which extracts regions from a given image according to given ROI boxes.

### Attributes <a href="#attributes" id="attributes"></a>

#### custom\_postprocessor: Optional\[type] <a href="#custom_postprocessor-optionaltype" id="custom_postprocessor-optionaltype"></a>

`custom_postprocessor: Optional[type]`

`property` `writable`

Custom postprocessor class. Required for attaching analyzers to the pseudo-model.

When set, this replaces the default postprocessor with a user-defined postprocessor.

Returns:

| Type             | Description                                               |
| ---------------- | --------------------------------------------------------- |
| `Optional[type]` | The user-defined postprocessor class, or None if not set. |

#### non\_blocking\_batch\_predict <a href="#non_blocking_batch_predict" id="non_blocking_batch_predict"></a>

`non_blocking_batch_predict`

`property` `writable`

Controls non-blocking mode for `predict_batch()`.

Returns:

| Type   | Description                                            |
| ------ | ------------------------------------------------------ |
| `bool` | True if non-blocking mode is enabled; otherwise False. |
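When non-blocking mode is enabled, the generator returned by `predict_batch()` may yield `None` while a result is not yet available, so the consumption loop should skip those values. A sketch with a stand-in generator (the `None`-interleaving pattern here is illustrative):

```python
def stand_in_non_blocking_results():
    """Stand-in for predict_batch() in non-blocking mode:
    yields None while no result is ready yet."""
    yield None
    yield {"label": "roi-0"}
    yield None
    yield {"label": "roi-1"}

# Skip the None placeholders to get only the real results:
results = [r for r in stand_in_non_blocking_results() if r is not None]
print(results)
```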

### RegionExtractionPseudoModel Methods <a href="#regionextractionpseudomodel-methods" id="regionextractionpseudomodel-methods"></a>

#### \_\_getattr\_\_(attr) <a href="#getattr" id="getattr"></a>

`__getattr__(attr)`

Delegates attribute lookups for model-like attributes to `model2`, so the pseudo-model exposes the same properties as a regular model.

#### \_\_init\_\_(roi\_list, ...) <a href="#init" id="init"></a>

`__init__(roi_list, model2, *, motion_detect=None)`

Constructor.

Parameters:

| Name            | Type                            | Description                                                                                                                                               | Default    |
| --------------- | ------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------- | ---------- |
| `roi_list`      | `Union[list, ndarray]`          | Either a list of ROI boxes in `[x1, y1, x2, y2]` format, a 2D NumPy array of shape `(N, 4)`, or a 3D NumPy array of shape `(K, M, 4)`, which is flattened to 2D.                          | *required* |
| `model2`        | `Model`                         | The second model in the pipeline.                                                                                                                         | *required* |
| `motion_detect` | `Optional[MotionDetectOptions]` | When `None`, motion detection is disabled. When not `None`, motion detection is applied before extracting ROI boxes, and boxes without detected motion are skipped.                      | `None`     |
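The accepted `roi_list` shapes can be illustrated with NumPy. This is a sketch of the documented behavior, not the library's own conversion code:

```python
import numpy as np

# A 3D array of shape (K, M, 4) -- e.g. a 2x3 grid of tiles --
# is flattened to a 2D array of shape (K * M, 4):
grid = np.zeros((2, 3, 4))
flat = grid.reshape(-1, 4)
print(flat.shape)  # (6, 4)

# Equivalent plain-list form: one [x1, y1, x2, y2] box per ROI
roi_list = [[0, 0, 100, 100], [100, 0, 200, 100]]
```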

#### predict\_batch(data) <a href="#predict_batch" id="predict_batch"></a>

`predict_batch(data)`

Perform a pseudo-inference that outputs bounding boxes defined in `roi_list`.

If motion detection is enabled, skip ROIs where motion is not detected.

Parameters:

| Name   | Type       | Description                                                                                                                      | Default    |
| ------ | ---------- | -------------------------------------------------------------------------------------------------------------------------------- | ---------- |
| `data` | `iterator` | Iterator over the input images or frames. Each element returned by this iterator should be compatible with regular PySDK models. | *required* |

Returns:

| Type                                 | Description                                                                                                                       |
| ------------------------------------ | --------------------------------------------------------------------------------------------------------------------------------- |
| `Iterator[InferenceResults or None]` | Yields pseudo-inference results containing ROIs as bounding boxes, or yields None in non-blocking mode when no data is available. |

## NmsOptions <a href="#nmsoptions" id="nmsoptions"></a>

`NmsOptions`

`dataclass`

Options for non-maximum suppression (NMS) algorithm.

Attributes:

| Name             | Type                    | Description                                                             |
| ---------------- | ----------------------- | ----------------------------------------------------------------------- |
| `threshold`      | `float`                 | IoU or IoS threshold for box clustering (range \[0..1]).                |
| `use_iou`        | `bool`                  | If True, use IoU for box clustering, otherwise IoS.                     |
| `box_select`     | `NmsBoxSelectionPolicy` | Box selection policy (e.g., keep the box with the highest probability). |
| `class_agnostic` | `bool`                  | If True, perform class-agnostic NMS.                                    |
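The clustering that these options control can be sketched as a simplified, class-agnostic NMS in pure Python (the real implementation lives inside the library; this version fixes `use_iou=True` and the highest-probability selection policy):

```python
def iou(a, b):
    """Intersection over Union of two [x1, y1, x2, y2] boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def nms(boxes, scores, threshold):
    """Keep the highest-score box in each cluster of overlapping boxes."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) <= threshold for j in keep):
            keep.append(i)
    return keep

boxes = [[0, 0, 10, 10], [1, 1, 11, 11], [50, 50, 60, 60]]
scores = [0.9, 0.8, 0.7]
print(nms(boxes, scores, threshold=0.5))  # [0, 2]: box 1 overlaps box 0 too much
```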

## MotionDetectOptions <a href="#motiondetectoptions" id="motiondetectoptions"></a>

`MotionDetectOptions`

`dataclass`

Options for motion detection algorithm.

Attributes:

| Name        | Type    | Description                                                                                             |
| ----------- | ------- | ------------------------------------------------------------------------------------------------------- |
| `threshold` | `float` | Threshold for motion detection \[0..1], representing fraction of changed pixels relative to frame size. |
| `look_back` | `int`   | Number of frames to look back to detect motion.                                                         |
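The `threshold` semantics can be sketched as a changed-pixel fraction check. This is illustrative only: the library's detector also maintains a `look_back` history of frames, and the `diff_tol` parameter below is an assumption of this example, not a documented option:

```python
def motion_detected(prev_frame, cur_frame, threshold, diff_tol=10):
    """Return True if the fraction of changed pixels exceeds `threshold`.

    Frames are flat lists of grayscale pixel values; a pixel counts as
    changed when it differs from the previous frame by more than diff_tol.
    """
    changed = sum(
        1 for p, c in zip(prev_frame, cur_frame) if abs(c - p) > diff_tol
    )
    return changed / len(cur_frame) > threshold

prev = [0] * 100
cur = [0] * 90 + [255] * 10  # 10% of pixels changed
print(motion_detected(prev, cur, threshold=0.05))  # True
print(motion_detected(prev, cur, threshold=0.20))  # False
```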
