Face Filters

Overview

Face filters act as quality gates that skip low-quality detections before the embedding model runs. Proper filtering improves both accuracy and performance: it avoids incorrect results (e.g., embeddings computed from non-frontal faces tend to match no one and be misreported as unknown persons) and reduces unnecessary computation.

Why Use Filters?

Not every detected face should be processed:

  • Small/distant faces - Too few pixels for reliable recognition

  • Profile/side views - Embedding models work best on frontal faces

  • Poor framing - Faces cut off at edges produce unreliable embeddings

  • Outside region of interest - Ignore faces in irrelevant areas

Filters improve result quality and prevent wasted compute.

FaceFilterConfig

All filters are controlled through a FaceFilterConfig object:

import degirum_face

filters = degirum_face.FaceFilterConfig(
    # Small face filter
    enable_small_face_filter=True,
    min_face_size=50,
    
    # Zone filter
    enable_zone_filter=True,
    zone=[[100, 100], [500, 100], [500, 400], [100, 400]],
    
    # Geometric filters
    enable_frontal_filter=True,
    enable_shift_filter=True,
)

# Use in configuration
config = degirum_face.FaceRecognizerConfig(
    face_filters=filters,
    # ... other config
)

Filter Types

1. Small Face Filter

Skips faces where the bounding box is too small for reliable recognition.

Configuration

Parameters

  • enable_small_face_filter (bool) - Enable/disable the filter (default: False)

  • min_face_size (int) - Minimum size in pixels for the shorter side of the bounding box (default: 0)

When to Use

  • Processing images with varying camera distances

  • Ignore distant/background people

  • Improve accuracy by filtering unreliable small detections

  • Access control systems: set min_face_size to roughly 2/3 of the frame size so recognition triggers only when a person is very close to the camera; this avoids ambiguity from multiple people in frame and from uncertain trigger timing

Trade-offs

  • Higher threshold (60-80): Faster processing, miss distant faces

  • Lower threshold (30-40): More coverage, slower processing

  • Very high (200-400): Maximum quality for close-up enrollment (e.g., access control)

Example
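
A minimal configuration sketch using the parameters above (the threshold value is an arbitrary choice for illustration; tune it per the trade-offs listed):

```python
import degirum_face

# Skip any detection whose bounding box's shorter side is under 60 px
filters = degirum_face.FaceFilterConfig(
    enable_small_face_filter=True,
    min_face_size=60,  # illustrative value; tune for your camera distance
)
```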


2. Zone Filter

Only processes faces within a specified polygon region.

Configuration

Parameters

  • enable_zone_filter (bool) - Enable/disable the filter (default: False)

  • zone (list of [x, y]) - Polygon vertices defining the region of interest. Can be any polygon with 3 or more points (triangle, quadrilateral, pentagon, etc.), not limited to rectangles

How It Works

A face is processed only when its center point lies inside the polygon zone; faces whose center falls outside are skipped.

When to Use

  • Focus on specific areas (doorway, checkout counter, entrance)

  • Ignore people outside region of interest

  • Reduce false positives from background activity

Examples

Rectangular zone:

Arbitrary polygon:

Entire frame (no filtering):
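
The three cases above can be sketched as follows (the 640x480 frame size in the last case is an assumption; substitute your actual resolution):

```python
import degirum_face

# Rectangular zone: four corners of an axis-aligned region
rect_zone = [[100, 100], [500, 100], [500, 400], [100, 400]]

# Arbitrary polygon: any 3+ vertices, e.g. a triangle
triangle_zone = [[320, 50], [600, 450], [40, 450]]

# Entire frame (no effective filtering), assuming a 640x480 frame
full_frame_zone = [[0, 0], [640, 0], [640, 480], [0, 480]]

filters = degirum_face.FaceFilterConfig(
    enable_zone_filter=True,
    zone=rect_zone,  # swap in triangle_zone or full_frame_zone as needed
)
```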


3. Frontal Filter

Only processes faces looking roughly toward the camera (frontal view).

Configuration

Parameters

  • enable_frontal_filter (bool) - Enable/disable the filter (default: False)

How It Works

Checks whether the nose keypoint lies inside the rectangle formed by the eye and mouth keypoints. Profile and side views fail this test.

When to Use

  • Need high-quality embeddings (frontal faces work best)

  • Access control where users face the camera

  • Reduce processing of profile/side views

  • Improve recognition accuracy

Trade-offs

  • Enabled: Better quality, miss non-frontal faces

  • Disabled: Process all angles, lower quality for profiles

Example
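
In practice you only set enable_frontal_filter=True; the check itself can be illustrated with a standalone sketch. is_frontal here is a hypothetical helper mirroring the nose-inside-eye/mouth-rectangle test described above, not the library's internal code; the 5-point keypoint ordering is an assumed common convention.

```python
# Keypoint order assumed: left eye, right eye, nose, left mouth corner,
# right mouth corner (a common 5-point face landmark convention)
def is_frontal(keypoints):
    """Return True if the nose lies inside the axis-aligned rectangle
    spanned by the eye and mouth-corner keypoints (illustrative sketch)."""
    left_eye, right_eye, nose, mouth_left, mouth_right = keypoints
    xs = [left_eye[0], right_eye[0], mouth_left[0], mouth_right[0]]
    ys = [left_eye[1], right_eye[1], mouth_left[1], mouth_right[1]]
    return min(xs) < nose[0] < max(xs) and min(ys) < nose[1] < max(ys)

frontal = [(80, 60), (120, 60), (100, 80), (85, 105), (115, 105)]
profile = [(80, 60), (95, 60), (70, 80), (82, 105), (95, 105)]
```

For the frontal face the nose sits between the eyes and mouth corners, so the check passes; for the profile the nose projects outside that rectangle, so the face is skipped.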


4. Shift Filter

Skips faces that are poorly framed (cut off at image edges or off-center).

Configuration

Parameters

  • enable_shift_filter (bool) - Enable/disable the filter (default: False)

How It Works

Rejects faces where all 5 facial keypoints are clustered to one side of the bounding box - either all in the left/right half (horizontal) or all in the top/bottom half (vertical). This indicates the face is cut off or poorly framed.

When to Use

  • Avoid processing partially visible faces

  • Improve embedding quality by filtering edge cases

  • Video scenarios where people enter/exit frame

  • Ensure complete face is visible

Trade-offs

  • Enabled: Higher quality, miss partially visible faces

  • Disabled: Process all detections, some may be cut off

Example
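
The rule above can be illustrated with a standalone sketch; is_well_framed is a hypothetical helper mirroring the described test, not the library's internal code. In practice you only set enable_shift_filter=True.

```python
def is_well_framed(keypoints, bbox):
    """Return False when all five keypoints fall in one horizontal or one
    vertical half of the bounding box (illustrative sketch of the shift
    test described above)."""
    x1, y1, x2, y2 = bbox
    cx, cy = (x1 + x2) / 2, (y1 + y2) / 2
    xs = [p[0] for p in keypoints]
    ys = [p[1] for p in keypoints]
    one_sided_h = all(x < cx for x in xs) or all(x > cx for x in xs)
    one_sided_v = all(y < cy for y in ys) or all(y > cy for y in ys)
    return not (one_sided_h or one_sided_v)

centered = [(40, 40), (60, 40), (50, 55), (42, 70), (58, 70)]
cut_off = [(10, 40), (30, 40), (20, 55), (12, 70), (28, 70)]  # all in left half
```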


5. ReID Expiration Filter

Note: This filter is specific to FaceTracker and video tracking workflows. It does not affect FaceRecognizer which processes static images.

Reduces embedding extraction frequency using adaptive exponential backoff for continuously tracked faces.

Configuration

Parameters

  • enable_reid_expiration_filter (bool) - Enable/disable the ReID expiration filter (default: False)

  • reid_expiration_frames (int) - Maximum interval in frames between embedding extractions for stable tracks (default: 10)

How It Works

When enabled, the filter adaptively increases the interval between embedding extractions for a continuously tracked face: the gap doubles after each extraction (1, 2, 4, 8, ... frames), capped at reid_expiration_frames.

Result: For a face tracked over 100 frames, extracts ~7 embeddings instead of 100 (14x reduction).

When to Use

  • Real-time video tracking with FaceTracker

  • Reduce computational cost for stable, continuously tracked faces

  • Maintain accuracy while improving performance

Tuning reid_expiration_frames

  • Static scenes (office entry, checkpoint): reid_expiration_frames=60 - Stable faces, can wait longer between embeddings

  • Dynamic scenes (retail, crowds): reid_expiration_frames=15 - Quick movements, need more frequent updates

  • Recommended: 30 frames (~1 second at 30 FPS) for balanced performance. Default is 10 frames

When Embedding Extraction Happens

  • New track detected (first frame)

  • Expiration timer reached (adaptive interval: 1, 2, 4, 8... up to max)

  • Track ID re-acquired after loss

  • Quality filters passed after previous failure

Trade-offs

  • Higher value (60+): Fewer embeddings, faster FPS, slower response to face angle changes

  • Lower value (10-20): More embeddings, slower FPS, quicker response to movement

Example
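
A back-of-the-envelope sketch of the adaptive schedule (a simplification: real tracks also reset on loss/re-acquisition, as listed above). With a cap large enough not to bind, it reproduces the ~7-embeddings-per-100-frames figure:

```python
def extraction_frames(total_frames, max_interval):
    """Frames at which embeddings would be extracted for one stable track:
    the gap doubles after each extraction, capped at max_interval
    (the reid_expiration_frames setting). Illustrative only."""
    frames, frame, interval = [], 0, 1
    while frame < total_frames:
        frames.append(frame)
        frame += interval
        interval = min(interval * 2, max_interval)
    return frames

# 100 tracked frames, cap high enough not to bind -> 7 extractions
print(extraction_frames(100, 64))  # [0, 1, 3, 7, 15, 31, 63]
```

With the default cap of 10 frames the doubling stalls at 10, so the same 100 frames yield 13 extractions; raising reid_expiration_frames trades responsiveness for fewer embeddings, as the tuning notes above describe.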

Important: This filter only works with FaceTracker for continuous video streams. It has no effect on FaceRecognizer.predict_batch() since there are no persistent track IDs across batch items.


Combining Filters

Filters work in conjunction - a face must pass ALL enabled filters to be processed.

Strict Filtering

For high-quality, reliable results:
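
A strict profile might look like this (threshold value is illustrative):

```python
import degirum_face

strict_filters = degirum_face.FaceFilterConfig(
    enable_small_face_filter=True,
    min_face_size=80,            # only close, well-resolved faces
    enable_frontal_filter=True,  # frontal views only
    enable_shift_filter=True,    # fully framed faces only
)
```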

Use for: Access control, security applications, enrollment

Balanced Filtering

For general use:
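
A balanced profile might look like this (threshold value is illustrative):

```python
import degirum_face

balanced_filters = degirum_face.FaceFilterConfig(
    enable_small_face_filter=True,
    min_face_size=50,
    enable_frontal_filter=True,
    # shift and zone filters left at their disabled defaults
)
```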

Use for: Photo organization, general recognition

Permissive Filtering

For maximum coverage:
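
A permissive profile might look like this (threshold value is illustrative):

```python
import degirum_face

permissive_filters = degirum_face.FaceFilterConfig(
    enable_small_face_filter=True,
    min_face_size=30,  # keep even small, distant faces
    # all other filters disabled for maximum coverage
)
```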

Use for: Photo search, surveillance (wide coverage)


Configuration Methods

Python Configuration
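
A sketch of the end-to-end flow, building the filters and passing them to the recognizer configuration (other FaceRecognizerConfig fields from your setup are omitted):

```python
import degirum_face

filters = degirum_face.FaceFilterConfig(
    enable_small_face_filter=True,
    min_face_size=50,
    enable_frontal_filter=True,
)

config = degirum_face.FaceRecognizerConfig(
    face_filters=filters,
    # ... other config
)
```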


Use Case Recommendations

Access Control / Security

Video Surveillance

Photo Organization

Maximum Coverage
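
The four use cases above can be sketched as filter profiles (all values are illustrative starting points; the ReID setting applies only to FaceTracker video workflows):

```python
import degirum_face

# Access control / security: strict, close-range, frontal-only
access_control = degirum_face.FaceFilterConfig(
    enable_small_face_filter=True, min_face_size=100,
    enable_frontal_filter=True, enable_shift_filter=True,
)

# Video surveillance: moderate thresholds plus ReID backoff for tracking
surveillance = degirum_face.FaceFilterConfig(
    enable_small_face_filter=True, min_face_size=40,
    enable_reid_expiration_filter=True, reid_expiration_frames=30,
)

# Photo organization: balanced quality vs. coverage
photo_organization = degirum_face.FaceFilterConfig(
    enable_small_face_filter=True, min_face_size=50,
    enable_frontal_filter=True,
)

# Maximum coverage: all filters left at their disabled defaults
maximum_coverage = degirum_face.FaceFilterConfig()
```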


Filter Tuning Guide

  1. Start with balanced defaults:

    • min_face_size=50

    • enable_frontal_filter=True

    • Other filters disabled

  2. Test with real data:

    • Process representative images

    • Check what's being filtered

    • Measure accuracy and performance

  3. Adjust based on results:

    • Too many false positives? Enable more filters or increase thresholds

    • Missing valid faces? Relax filters or lower thresholds

    • Too slow? Increase min_face_size to skip more faces

  4. Consider your use case:

    • Security: Strict filtering

    • General use: Balanced filtering

    • Search/discovery: Permissive filtering
