# Face Tracking Gizmos

{% hint style="info" %}
This API Reference is based on DeGirum Face version 1.4.1.
{% endhint %}

## Functions <a href="#functions" id="functions"></a>

## Classes <a href="#classes" id="classes"></a>

## FaceAnnotator <a href="#faceannotator" id="faceannotator"></a>

`FaceAnnotator`

Bases: `ResultAnalyzerBase`

Analyzer that annotates detected objects with labels from the object map.

### FaceAnnotator Methods <a href="#faceannotator-methods" id="faceannotator-methods"></a>

#### \_\_init\_\_(object\_map, ...) <a href="#init" id="init"></a>

`__init__(object_map, *, label_map={})`

Constructor.

Parameters:

| Name         | Type      | Description                                                        | Default    |
| ------------ | --------- | ------------------------------------------------------------------ | ---------- |
| `object_map` | `FaceMap` | The map of object IDs to attributes.                               | *required* |
| `label_map`  | `dict`    | Map of special labels (FaceStatus.lbl\_\*) to their display names. | `{}`       |

#### analyze(result) <a href="#analyze" id="analyze"></a>

`analyze(result)`

Analyze the inference result and update labels based on the object map.

## FaceCropCombiningGizmo <a href="#facecropcombininggizmo" id="facecropcombininggizmo"></a>

`FaceCropCombiningGizmo`

Bases: `Gizmo`

Gizmo that combines full frames with face crop inference results.

Inputs:

* Input 0: full frames
* Input 1: face crops with inference results
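The combining idea can be illustrated with a plain-Python sketch (the dictionary keys and function name here are illustrative, not the gizmo's actual internals): each crop carries the id of the frame it was cut from, and the combiner attaches each crop's inference result to the matching full frame's metadata.

```python
from collections import defaultdict

def combine(frames, crop_results):
    """Attach face-crop inference results to their source frames.

    frames: list of dicts, each with a "frame_id" key.
    crop_results: list of dicts with "frame_id" and "result" keys.
    Returns the frames with a "face_results" list added to each.
    """
    by_frame = defaultdict(list)
    for crop in crop_results:
        by_frame[crop["frame_id"]].append(crop["result"])
    for frame in frames:
        # Frames with no face crops get an empty result list
        frame["face_results"] = by_frame.get(frame["frame_id"], [])
    return frames

frames = [{"frame_id": 0}, {"frame_id": 1}]
crops = [
    {"frame_id": 0, "result": {"label": "alice"}},
    {"frame_id": 0, "result": {"label": "bob"}},
]
combined = combine(frames, crops)
```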

### FaceCropCombiningGizmo Methods <a href="#facecropcombininggizmo-methods" id="facecropcombininggizmo-methods"></a>

#### \_\_init\_\_(\*, ...) <a href="#init" id="init"></a>

`__init__(*, face_map=None)`

Constructor.

#### run <a href="#run" id="run"></a>

`run()`

Run gizmo to combine full frames with crop inference results.

## FaceDetectorGizmo <a href="#facedetectorgizmo" id="facedetectorgizmo"></a>

`FaceDetectorGizmo`

Bases: `AiGizmoBase`

Face detector AI inference gizmo that applies all necessary analyzers for face tracking.

### FaceDetectorGizmo Methods <a href="#facedetectorgizmo-methods" id="facedetectorgizmo-methods"></a>

#### \_\_init\_\_(model, ...) <a href="#init" id="init"></a>

`__init__(model, *, analyzers, stream_depth=10, allow_drop=False)`

Constructor.

Parameters:

| Name           | Type                       | Description                                                          | Default    |
| -------------- | -------------------------- | -------------------------------------------------------------------- | ---------- |
| `analyzers`    | `List[ResultAnalyzerBase]` | List of analyzers to apply to the inference results.                 | *required* |
| `stream_depth` | `int`                      | Depth of the input stream queue. Defaults to 10.                     | `10`       |
| `allow_drop`   | `bool`                     | If True, allow dropping frames on input overflow. Defaults to False. | `False`    |

#### on\_result(result) <a href="#on_result" id="on_result"></a>

`on_result(result)`

Append the inference result to the input frame's metadata and send it downstream. Adds a face\_tracking\_frame\_id attribute to the inference result.

Parameters:

| Name     | Type                                                                                                                             | Description                                 | Default    |
| -------- | -------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------- | ---------- |
| `result` | [InferenceResults](https://docs.degirum.com/pysdk/user-guide-pysdk/api-ref/postprocessor#degirum.postprocessor.inferenceresults) | The inference result for the current frame. | *required* |
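The bookkeeping described above can be sketched in plain Python (class name and metadata shape are illustrative, not the gizmo's internals): each result receives a monotonically increasing frame id before being appended to the frame's metadata.

```python
import itertools

class ResultTagger:
    """Illustrative sketch of on_result-style bookkeeping: tag each
    inference result with an increasing frame id, then append it to
    the frame's metadata list."""

    def __init__(self):
        self._frame_ids = itertools.count()  # 0, 1, 2, ...

    def on_result(self, frame_meta, result):
        result["face_tracking_frame_id"] = next(self._frame_ids)
        frame_meta.append(result)
        return frame_meta

tagger = ResultTagger()
meta = []
tagger.on_result(meta, {"faces": 2})
tagger.on_result(meta, {"faces": 1})
```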

#### run <a href="#run" id="run"></a>

`run()`

Run the model inference loop.

## FaceEventNotifier <a href="#faceeventnotifier" id="faceeventnotifier"></a>

`FaceEventNotifier`

Bases: `ResultAnalyzerBase`

Analyzer to generate notifications and save video clips when face events occur.

### FaceEventNotifier Methods <a href="#faceeventnotifier-methods" id="faceeventnotifier-methods"></a>

#### \_\_init\_\_(\*, ...) <a href="#init" id="init"></a>

`__init__(*, face_map, config, clip_target_fps)`

Constructor.

Parameters:

| Name              | Type                | Description                               | Default    |
| ----------------- | ------------------- | ----------------------------------------- | ---------- |
| `face_map`        | `FaceMap`           | The map of tracked faces.                 | *required* |
| `config`          | `FaceTrackerConfig` | Configuration for face tracking.          | *required* |
| `clip_target_fps` | `float`             | Target frames per second for video clips. | *required* |

#### analyze(result) <a href="#analyze" id="analyze"></a>

`analyze(result)`

Analyze the inference result and generate notifications for face events.

#### finalize <a href="#finalize" id="finalize"></a>

`finalize()`

Finalize the analyzer and clean up resources.

## FaceFilter <a href="#facefilter" id="facefilter"></a>

`FaceFilter`

Bases: `ResultAnalyzerBase`

Analyzer to filter and prepare face detection results for reID processing.

Adds two results for each face:

* key\_face\_filter\_applied: flag indicating which filter (if any) filtered out this face from reID processing
* key\_face\_tracking\_keypoints: face keypoints as numpy arrays

### FaceFilter Methods <a href="#facefilter-methods" id="facefilter-methods"></a>

#### \_\_init\_\_(config, ...) <a href="#init" id="init"></a>

`__init__(config, face_reid_map=None)`

Constructor.

Parameters:

| Name            | Type                | Description                             | Default    |
| --------------- | ------------------- | --------------------------------------- | ---------- |
| `config`        | `FaceFilterConfig`  | Configuration for face filtering.       | *required* |
| `face_reid_map` | `Optional[FaceMap]` | The map of face IDs to face attributes. | `None`     |

#### analyze(result) <a href="#analyze" id="analyze"></a>

`analyze(result)`

Analyze the inference result and mark faces for reID processing.
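The actual criteria come from `FaceFilterConfig`; the marking mechanics can be sketched with hypothetical thresholds and key names (`min_box_area`, `small_box`, and `no_keypoints` are illustrative, not the library's actual names):

```python
def mark_for_reid(faces, min_box_area=400):
    """Mark each face with the name of the filter that excluded it
    from reID processing, or None if it passed all filters.
    (Illustrative sketch, not the analyzer's real code.)"""
    for face in faces:
        x1, y1, x2, y2 = face["bbox"]
        if (x2 - x1) * (y2 - y1) < min_box_area:
            face["face_filter_applied"] = "small_box"
        elif face.get("keypoints") is None:
            face["face_filter_applied"] = "no_keypoints"
        else:
            face["face_filter_applied"] = None  # eligible for reID
    return faces

faces = [
    {"bbox": (0, 0, 10, 10), "keypoints": [(1, 2)]},  # too small
    {"bbox": (0, 0, 50, 50), "keypoints": None},      # no keypoints
    {"bbox": (0, 0, 50, 50), "keypoints": [(1, 2)]},  # passes
]
mark_for_reid(faces)
```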

## FacePropertiesSmoothingAnalyzer <a href="#facepropertiessmoothinganalyzer" id="facepropertiessmoothinganalyzer"></a>

`FacePropertiesSmoothingAnalyzer`

Bases: `ResultAnalyzerBase`

Analyzer to persist face optional properties (gender, age, emotion, etc.) across frames.

Face optional properties are generated by the ReID model, which may not run on every frame due to ReID filtering. This analyzer caches the most recently seen properties for each tracked face and injects them into frames where the ReID model did not run, so that downstream consumers always see up-to-date attribute data.

### FacePropertiesSmoothingAnalyzer Methods <a href="#facepropertiessmoothinganalyzer-methods" id="facepropertiessmoothinganalyzer-methods"></a>

#### \_\_init\_\_(smoothing\_factor=0.8) <a href="#init" id="init"></a>

`__init__(smoothing_factor=0.8)`

Constructor.

Parameters:

| Name               | Type    | Description                                                                                                | Default |
| ------------------ | ------- | ---------------------------------------------------------------------------------------------------------- | ------- |
| `smoothing_factor` | `float` | IIR smoothing coefficient in \[0, 1) applied to smoothed keys ("age", "age\_sigma"). 0 means no smoothing. | `0.8`   |

#### analyze(result) <a href="#analyze" id="analyze"></a>

`analyze(result)`

Analyze inference result, caching new face properties and injecting cached ones.

For each tracked face and each property key individually:

* If the key is present (non-None) in the result, call add() on the cached IIR object.
* If the key is absent, inject get() from the cached IIR object into the face result.
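The add/get flow above, together with the IIR smoothing coefficient from the constructor, can be sketched in plain Python (class and function names are illustrative, not the analyzer's real internals):

```python
class IIRValue:
    """Single-value IIR smoother: add() blends in a new sample, get()
    returns the current smoothed value. A factor of 0 means no smoothing."""

    def __init__(self, factor):
        self.factor = factor
        self.value = None

    def add(self, sample):
        if self.value is None:
            self.value = sample  # first sample: no history to blend
        else:
            self.value = self.factor * self.value + (1 - self.factor) * sample
        return self.value

    def get(self):
        return self.value

def smooth_properties(cache, track_id, props, factor=0.8):
    """Cache new per-face properties, and inject cached values for keys
    the ReID model did not produce on this frame."""
    face_cache = cache.setdefault(track_id, {})
    for key, value in props.items():
        if value is not None:
            face_cache.setdefault(key, IIRValue(factor)).add(value)
        elif key in face_cache:
            props[key] = face_cache[key].get()
    return props

cache = {}
smooth_properties(cache, 7, {"age": 30.0})            # ReID ran: cache 30.0
injected = smooth_properties(cache, 7, {"age": None})  # ReID skipped: inject
```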

## FaceRecognizerGizmo <a href="#facerecognizergizmo" id="facerecognizergizmo"></a>

`FaceRecognizerGizmo`

Bases: `Gizmo`

Face recognition gizmo that combines face extraction, reID inference, and embeddings database face search.

### FaceRecognizerGizmo Methods <a href="#facerecognizergizmo-methods" id="facerecognizergizmo-methods"></a>

#### \_\_init\_\_(face\_reid\_map, ...) <a href="#init" id="init"></a>

`__init__(face_reid_map, *, config, face_embedding_model, delete_expired_faces=True, accumulate_embeddings=False, credence_count=1, alert_mode=AlertMode.NONE, alert_once=True, stream_depth=10, allow_drop=False)`

Constructor.

Parameters:

| Name                    | Type                   | Description                                                  | Default    |
| ----------------------- | ---------------------- | ------------------------------------------------------------ | ---------- |
| `face_reid_map`         | `Optional[FaceMap]`    | The map of face IDs to face attributes.                      | *required* |
| `config`                | `FaceRecognizerConfig` | Configuration for face recognition.                          | *required* |
| `face_embedding_model`  |                        | The face embedding model for reID inference.                 | *required* |
| `delete_expired_faces`  | `bool`                 | Whether to delete expired faces from the map.                | `True`     |
| `accumulate_embeddings` | `bool`                 | Whether to accumulate embeddings in the face map.            | `False`    |
| `credence_count`        | `int`                  | Number of times the face is recognized before confirming it. | `1`        |
| `alert_mode`            | `AlertMode`            | Mode of alerting for the face search.                        | `NONE`     |
| `alert_once`            | `bool`                 | Whether to trigger the alert only once for the given face.   | `True`     |
| `stream_depth`          | `int`                  | Depth of the stream.                                         | `10`       |
| `allow_drop`            | `bool`                 | Whether to allow dropping frames.                            | `False`    |
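The `credence_count` behavior can be sketched as a simple gate (class and method names are hypothetical, not the gizmo's actual code): an identity is confirmed only after it has been recognized the configured number of times.

```python
from collections import Counter

class CredenceGate:
    """Illustrative sketch of the credence_count idea: report a face
    identity as confirmed only after it has been recognized
    credence_count times."""

    def __init__(self, credence_count=1):
        self.credence_count = credence_count
        self._hits = Counter()

    def recognize(self, face_id):
        self._hits[face_id] += 1
        return self._hits[face_id] >= self.credence_count

gate = CredenceGate(credence_count=3)
results = [gate.recognize("alice") for _ in range(3)]
```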

#### extract\_embeddings(result) <a href="#extract_embeddings" id="extract_embeddings"></a>

`extract_embeddings(result)`

`staticmethod`

Extract face embeddings from the result.

Returns:

| Type      | Description        |
| --------- | ------------------ |
| `ndarray` | Embeddings vector. |

#### get\_tags <a href="#get_tags" id="get_tags"></a>

`get_tags()`

Get the list of tags assigned to this gizmo.

#### require\_tags(inp) <a href="#require_tags" id="require_tags"></a>

`require_tags(inp)`

Get the list of meta tags this gizmo requires in upstream meta for a specific input.

Returns:

| Type        | Description                                                           |
| ----------- | --------------------------------------------------------------------- |
| `List[str]` | Tags required by this gizmo in upstream meta for the specified input. |

#### run <a href="#run" id="run"></a>

`run()`

Run the gizmo.

## HeadPoseAnalyzer <a href="#headposeanalyzer" id="headposeanalyzer"></a>

`HeadPoseAnalyzer`

Bases: `ResultAnalyzerBase`

Analyzer to detect head pose from face landmarks.

Computes confidence scores (0.0 to 1.0) for the following head poses:

* Head Turned Left
* Head Turned Right
* Head Tilted Left
* Head Tilted Right
* Head Tilted Up

Adds pose scores to face\_properties dictionary in each face detection result.
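The analyzer's exact geometry is not documented here, but a common landmark heuristic for turn scores looks like the sketch below (the function name, landmark layout, and scaling factor are assumptions, not the analyzer's actual method): measure how far the nose deviates horizontally from the eye midpoint, normalized by eye distance, and clamp each direction to [0, 1].

```python
def head_turn_scores(left_eye, right_eye, nose):
    """Illustrative yaw heuristic from three (x, y) landmarks:
    nose offset from the eye midpoint, normalized by eye distance."""
    mid_x = (left_eye[0] + right_eye[0]) / 2
    eye_dist = right_eye[0] - left_eye[0]
    offset = (nose[0] - mid_x) / eye_dist  # > 0: nose shifted toward right eye
    return {
        "turned_left": min(max(-offset * 2, 0.0), 1.0),
        "turned_right": min(max(offset * 2, 0.0), 1.0),
    }

# Frontal face: nose centered between the eyes -> both scores are 0.0
frontal = head_turn_scores((40, 50), (60, 50), (50, 60))
# Nose shifted toward the right eye -> nonzero "turned_right" score
turned = head_turn_scores((40, 50), (60, 50), (56, 60))
```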

### HeadPoseAnalyzer Methods <a href="#headposeanalyzer-methods" id="headposeanalyzer-methods"></a>

#### \_\_init\_\_ <a href="#init" id="init"></a>

`__init__()`

Constructor.

#### analyze(result) <a href="#analyze" id="analyze"></a>

`analyze(result)`

Analyze the inference result and compute head pose confidence scores.

## LivenessDetector <a href="#livenessdetector" id="livenessdetector"></a>

`LivenessDetector`

Bases: `ResultAnalyzerBase`

Analyzer to detect liveness by tracking facial feature changes over time.

Maintains trails of facial feature scores for each tracked face to enable liveness detection based on natural head movements and expressions.

It adds the following keys to each face detection result with an active track ID:

* "liveness\_score": the computed liveness score based on feature changes
* "liveness\_trails": the history of facial feature scores for this face

### LivenessDetector Methods <a href="#livenessdetector-methods" id="livenessdetector-methods"></a>

#### \_\_init\_\_(\*, ...) <a href="#init" id="init"></a>

`__init__(*, trail_length=100, sensitivity=0.3, confidence_threshold=0.5, feature_list=None)`

Constructor.

Parameters:

| Name                   | Type             | Description                                                                                                                                                                                                                                     | Default |
| ---------------------- | ---------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------- |
| `trail_length`         | `int`            | Maximum length of facial feature trails in frames. Defaults to 100.                                                                                                                                                                             | `100`   |
| `sensitivity`          | `float`          | Sensitivity parameter for liveness detection. Defaults to 0.3. Higher values make the detector more sensitive. sensitivity=1/N gives \~50% score at N changes in one feature trail.                                                             | `0.3`   |
| `confidence_threshold` | `float`          | Confidence threshold for liveness detection. Defaults to 0.5. Used for detecting facial feature changes by comparing feature scores against this threshold.                                                                                     | `0.5`   |
| `feature_list`         | `Optional[dict]` | Optional dict mapping feature names to holdoff values. Holdoff is the minimum number of samples to ignore after a crossing before counting a new one. If None, defaults to head turn features with 10% of trail\_length holdoff, others with 0. | `None`  |
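The `sensitivity` description above implies a saturating score; one formula consistent with it is score = c·s / (1 + c·s), where c is the number of counted changes and s the sensitivity (with s = 1/N, c = N gives 0.5). The sketch below implements that formula plus threshold-crossing counting with holdoff; both are assumptions about the mechanism, not the library's actual code.

```python
def count_crossings(trail, threshold, holdoff=0):
    """Count threshold crossings in a feature trail, ignoring `holdoff`
    samples after each counted crossing. (Illustrative implementation.)"""
    crossings, skip_until = 0, -1
    for i in range(1, len(trail)):
        if i <= skip_until:
            continue  # still inside the holdoff window
        if (trail[i - 1] < threshold) != (trail[i] < threshold):
            crossings += 1
            skip_until = i + holdoff
    return crossings

def liveness_score(trails, sensitivity=0.3, threshold=0.5, holdoff=0):
    """Map the total number of feature changes to a score in [0, 1)."""
    changes = sum(count_crossings(t, threshold, holdoff)
                  for t in trails.values())
    return changes * sensitivity / (1 + changes * sensitivity)

# One feature alternating across the 0.5 threshold -> 3 crossings
trails = {"head_turn_left": [0.1, 0.8, 0.2, 0.9]}
score = liveness_score(trails, sensitivity=0.5)
```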

#### analyze(result) <a href="#analyze" id="analyze"></a>

`analyze(result)`

Analyze inference result and track facial feature changes over time.

Adds the following keys to each face detection result with an active track ID:

* "liveness\_score": the computed liveness score based on feature changes
* "liveness\_trails": the history of facial feature scores for this face
