Configuration

FaceRecognizerConfig Anatomy

FaceRecognizer is configured entirely through a FaceRecognizerConfig object with these components:

import degirum_face

config = degirum_face.FaceRecognizerConfig(
    face_detection_model_spec=detector_spec,   # 1. Detection model
    face_embedding_model_spec=embedding_spec,  # 2. Embedding model
    db_path="./face_db.lance",                 # 3. Database path
    cosine_similarity_threshold=0.6,           # 4. Matching threshold
    face_filters=filter_config,                # 5. Quality filters (optional)
)

1. Face Detection Model Spec

Specifies which model detects faces and their bounding boxes.

Default: Auto-selected for TFLITE/CPU running locally

detector_spec = degirum_face.get_face_detection_model_spec(
    device_type="HAILORT/HAILO8",
    inference_host_address="@cloud"
)

2. Face Embedding Model Spec

Specifies which model extracts face embeddings for matching.

Default: Auto-selected for TFLITE/CPU running locally
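As with detection, a registry helper can select the embedding model for your hardware. The helper name `get_face_embedding_model_spec` below is an assumption, mirroring the detection helper shown above; check your degirum-face version for the exact name.

```python
import degirum_face

# Hypothetical embedding-model helper, mirroring get_face_detection_model_spec;
# verify the exact function name against your degirum-face version.
embedding_spec = degirum_face.get_face_embedding_model_spec(
    device_type="HAILORT/HAILO8",
    inference_host_address="@cloud",
)
```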

3. Database Path

Where to store enrolled face embeddings (LanceDB file).

Default: ./face_recognition.lance

4. Similarity Threshold

Minimum cosine similarity (0.0-1.0) to consider two faces a match.

Default: 0.6

5. Face Filters

Quality gates to skip low-quality detections. See Face Filters Reference.

Model Specs Explained

A ModelSpec tells degirum-face which model to load and where to run it. You have two options:

Option 1: Use the Model Registry

The degirum-face model registry provides pre-optimized models for all supported hardware. Use the helper functions (such as get_face_detection_model_spec) to automatically select the best model:

Parameters:

  • device_type - Hardware accelerator (see Basic Concepts for all options)

  • inference_host_address - Inference location: @cloud, @local, or AI server address (see Basic Concepts)

Option 2: Bring Your Own Models

For complete customization (using models outside the registry), create custom ModelSpec objects directly. See the ModelSpec Documentation for details.

Similarity Threshold Tuning

The similarity threshold controls match strictness:

Threshold Guide

Threshold   | Behavior     | Use Case
0.30-0.40   | Very lenient | Maximum recall, accept some false positives
0.50-0.60   | Balanced     | General use (recommended starting point)
0.65-0.75   | Strict       | High precision, minimize false positives
0.80+       | Very strict  | Security applications

Tuning Strategy

  1. Start with 0.50 - Good balance for most cases

  2. Test with real data - Process representative images

  3. Adjust based on results:

    • Too many false positives? Increase threshold

    • Missing valid matches? Decrease threshold

  4. Consider use case:

    • Access control: Higher threshold (0.65-0.75)

    • Photo organization: Lower threshold (0.45-0.55)
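The match decision itself is simple: compute the cosine similarity between two embeddings and compare it against the threshold. A minimal illustration of that math (not the library's internal code):

```python
from math import sqrt

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def is_match(emb1, emb2, threshold=0.6):
    """Two faces match when their similarity meets the threshold."""
    return cosine_similarity(emb1, emb2) >= threshold

print(is_match([1.0, 0.0], [1.0, 0.0]))  # True  (identical vectors, similarity 1.0)
print(is_match([1.0, 0.0], [0.0, 1.0]))  # False (orthogonal vectors, similarity 0.0)
```

Raising the threshold shrinks the set of embedding pairs that count as a match, which is why higher values trade recall for precision.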

Database Path

Enrolled face embeddings are stored in a LanceDB database file specified by db_path:

Important: degirum-face enforces that a database created with one hardware type cannot be used with a different hardware type, as embeddings may vary between accelerators. The system will throw an error if you attempt to mix hardware types with the same database.

YAML Configuration

FaceRecognizerConfig can be initialized from a YAML file or string using the from_yaml() method. This approach separates configuration from code, making it easier to version control settings, share configurations across teams, and maintain different configs for development, staging, and production environments.

Creating a YAML Config

face_config.yaml:
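The YAML file mirrors the constructor arguments of FaceRecognizerConfig. A sketch of what such a file might look like (the key names are assumed to match the config fields shown earlier; verify against your degirum-face version):

```yaml
# face_config.yaml -- field names assumed to mirror FaceRecognizerConfig
face_detection_model_spec:
  device_type: HAILORT/HAILO8
  inference_host_address: "@cloud"
face_embedding_model_spec:
  device_type: HAILORT/HAILO8
  inference_host_address: "@cloud"
db_path: ./face_db.lance
cosine_similarity_threshold: 0.6
```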

Loading from YAML
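Per the Returns description, from_yaml() yields both the initialized config object and the raw settings dictionary. A hedged sketch of the call:

```python
import degirum_face

# from_yaml() is documented to return the config plus the raw settings dict.
config, settings = degirum_face.FaceRecognizerConfig.from_yaml("face_config.yaml")
```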

Returns:

  • config - Initialized FaceRecognizerConfig object

  • settings - Raw dictionary (useful for debugging)

Loading from YAML String
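Assuming from_yaml() also accepts a YAML string rather than only a file path (check your version; there may instead be a dedicated string-loading variant), this could look like:

```python
import degirum_face

yaml_text = """
db_path: ./face_db.lance
cosine_similarity_threshold: 0.6
"""

# Assumption: from_yaml() accepts YAML content directly as a string.
config, settings = degirum_face.FaceRecognizerConfig.from_yaml(yaml_text)
```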

Benefits of YAML

  • Clean separation - Config separate from code

  • Easy modification - Change hardware without editing code

  • Version control - Track config changes in git

  • Team collaboration - Share standardized configs

  • Multiple environments - dev.yaml, staging.yaml, prod.yaml

Configuration Examples

Basic - Default Configuration

Equivalent to:

Defaults:

  • Hardware: TFLITE/CPU

  • Inference: @local

  • Database: ./face_reid_db.lance

  • Threshold: 0.6
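Given these defaults, constructing the config with no arguments and spelling the defaults out explicitly should produce the same result. A sketch (the registry helper arguments in the explicit form are inferred from the Defaults list above):

```python
import degirum_face

# Default construction
config = degirum_face.FaceRecognizerConfig()

# Explicit equivalent, using the defaults listed above
config = degirum_face.FaceRecognizerConfig(
    face_detection_model_spec=degirum_face.get_face_detection_model_spec(
        device_type="TFLITE/CPU",
        inference_host_address="@local",
    ),
    db_path="./face_reid_db.lance",
    cosine_similarity_threshold=0.6,
)
```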

Cloud Experimentation

Try different hardware without local setup:
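A sketch reusing the registry helper from earlier, pointing inference at the cloud so no local accelerator is needed:

```python
import degirum_face

detector_spec = degirum_face.get_face_detection_model_spec(
    device_type="HAILORT/HAILO8",      # any supported accelerator type
    inference_host_address="@cloud",   # run inference in the cloud
)
config = degirum_face.FaceRecognizerConfig(
    face_detection_model_spec=detector_spec,
)
```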

Local Edge Deployment

Run on local Hailo-8 accelerator:
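The same helper with @local targets an accelerator attached to the host machine:

```python
import degirum_face

detector_spec = degirum_face.get_face_detection_model_spec(
    device_type="HAILORT/HAILO8",     # local Hailo-8 accelerator
    inference_host_address="@local",  # inference on this machine
)
config = degirum_face.FaceRecognizerConfig(
    face_detection_model_spec=detector_spec,
)
```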

Remote Inference Server

Connect to dedicated AI server:
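Passing a host address instead of @cloud or @local directs inference to a dedicated AI server. The address below is a placeholder:

```python
import degirum_face

detector_spec = degirum_face.get_face_detection_model_spec(
    device_type="HAILORT/HAILO8",
    inference_host_address="192.168.1.50",  # placeholder AI server address
)
config = degirum_face.FaceRecognizerConfig(
    face_detection_model_spec=detector_spec,
)
```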

With Face Filters

Add quality filtering:
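A sketch of passing a filter object via the face_filters field. The FaceFilterConfig class name and its min_face_size field are hypothetical here; see the Face Filters Reference for the actual filter API:

```python
import degirum_face

# FaceFilterConfig and min_face_size are assumptions for illustration only --
# consult the Face Filters Reference for the real class and field names.
filter_config = degirum_face.FaceFilterConfig(
    min_face_size=40,  # hypothetical quality gate: skip tiny detections
)
config = degirum_face.FaceRecognizerConfig(
    face_filters=filter_config,
)
```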
