Methods
Note: Examples in this guide assume you have already configured FaceTrackerConfig with model specifications. See Configuration Guide for complete setup details.
Methods Overview
| Method | Description | Typical Use Cases |
| --- | --- | --- |
| start_face_tracking_pipeline() | Real-time monitoring with automated alerting | Security surveillance, access control, live monitoring |
| predict_batch() | Stream processing with programmatic access | Custom analytics, logging, integration with other systems |
| find_faces_in_file() | Analyze local video files | Post-incident review, enrollment from footage |
| find_faces_in_clip() | Analyze cloud storage clips | Review alert clips, batch processing from S3 |
| enroll() | Add faces from video to database | Build database from video footage |
start_face_tracking_pipeline()
Run continuous real-time video monitoring with automated alerting, clip recording, and live streaming.
When to use: This is the primary method for production deployments where you need a fully automated system that runs continuously, handles alerts, saves video clips, and sends notifications. While you can attach a custom sink to access results programmatically, the main value of this method is the automated handling—use predict_batch() if programmatic result processing is your primary goal.
Signature
```python
start_face_tracking_pipeline(
    frame_iterator: Optional[Iterable] = None,
    sink: Optional[SinkGizmo] = None,
    sink_connection_point: str = "detector"
) -> Tuple[Composition, Watchdog]
```
Parameters
- frame_iterator (Optional) - Custom frame source; if None, uses config.video_source
- sink (Optional) - Custom output sink for results
- sink_connection_point (str) - Where to attach the sink in the pipeline:
  - "detector" (default) - Attach after step #3 (Object Tracking); receives raw tracking results before filtering and recognition
  - "recognizer" - Attach after step #7 (Database Search); receives final recognition results with identified faces
Returns
- Composition - Pipeline composition object (call .wait() to run). See Composition Documentation for details
- Watchdog - Monitoring object for pipeline health. See Watchdog Documentation for details
Example: Security Monitoring
Monitor RTSP camera, save clips, and send notifications:
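A minimal sketch, assuming `tracker` is your already-configured face tracker instance (see the note at the top of this page); the RTSP source, alert rules, clip recording, and notification channels are all assumed to come from its FaceTrackerConfig:

```python
# `tracker` is an already-configured face tracker instance; camera URL,
# alert rules, clip recording, and notification channels are assumed to be
# defined in its FaceTrackerConfig (see Configuration Guide).

# Start the automated pipeline. Detection, tracking, recognition, alerting,
# clip recording, notifications, and live streaming all run internally.
composition, watchdog = tracker.start_face_tracking_pipeline()

# Block until the pipeline stops (e.g., Ctrl+C or the video source ends).
composition.wait()
```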
How the Pipeline Works
1. Frame Acquisition - Read frame from video source
2. Face Detection - Detect faces with landmarks
3. Object Tracking - Assign persistent track IDs
4. Face Filtering - Apply quality filters (small face, frontal, zone, shift, ReID expiration)
5. Face Extraction - Crop and align faces
6. Embedding Extraction - Generate embeddings (adaptive backoff via ReID filter)
7. Database Search - Match embeddings against enrolled faces
8. Alert Evaluation - Check credence count and alert mode
9. Clip Recording - Save video clips when alerts trigger
10. Notification - Send alerts via configured channels
11. Live Streaming - Output annotated video to display/RTSP
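If you do attach a custom sink (see the sink and sink_connection_point parameters above), a hedged sketch follows; constructing the SinkGizmo itself is not covered in this guide, so my_sink below is a placeholder:

```python
# Hypothetical: `my_sink` stands for a SinkGizmo you have constructed elsewhere.
# "recognizer" delivers final recognition results (after step #7, Database
# Search); the default "detector" delivers raw tracking results (after step #3).
composition, watchdog = tracker.start_face_tracking_pipeline(
    sink=my_sink,
    sink_connection_point="recognizer",
)
composition.wait()
```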
predict_batch()
Process a video stream and return inference results for each frame, giving you programmatic access to every detection.
When to use: Use this when you need fine-grained control over the processing pipeline. Unlike start_face_tracking_pipeline(), this method gives you access to results for every frame, allowing you to implement custom logic, logging, integration with other systems, or build your own alerting mechanisms.
Signature
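A sketch of the signature, reconstructed from the parameter and return descriptions below; the exact type annotations are assumptions:

```python
predict_batch(
    stream: Iterable[np.ndarray]
) -> Iterator[InferenceResults]
```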
Parameters
- stream - Iterator yielding video frames as numpy arrays
Returns
Iterator of InferenceResults objects. InferenceResults objects support standard PySDK methods and properties like image_overlay(), results, etc. See InferenceResults documentation.
Each result contains:
- results - List of detection dictionaries with tracking data (access via result.results[i].get("track_id"))
- faces property - List of FaceRecognitionResult objects. See FaceRecognitionResult Reference
Note: Tracking data (track_id, frame_id) is in the results list, not as properties of FaceRecognitionResult. To correlate, use the same index for both lists.
Example: Custom Logging
Process video and log all face detections:
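A sketch, assuming `tracker` is an already-configured instance; the frame reader below uses OpenCV, and the file name is a placeholder:

```python
import logging

import cv2

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("face_log")

def frames(path):
    """Yield frames from a video file as numpy arrays (what predict_batch expects)."""
    cap = cv2.VideoCapture(path)
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            yield frame
    finally:
        cap.release()

# `tracker` is an already-configured face tracker instance.
for result in tracker.predict_batch(frames("entrance_cam.mp4")):
    # track_id lives in the raw results list; the faces list is index-aligned
    # with it, so zip() pairs each detection with its recognition result.
    for det, face in zip(result.results, result.faces):
        log.info("track_id=%s face=%s", det.get("track_id"), face)
```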
find_faces_in_file()
Analyze local video files to find and track all faces, optionally creating annotated output.
When to use: This is the go-to method for offline analysis of recorded video. Use it for post-incident review, enrolling faces from existing footage, or batch processing local video files. The method automatically handles tracking, clusters similar faces, and produces annotated videos for easy review.
Signature
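A sketch of the signature, inferred from the parameter and return descriptions below (annotations are assumptions):

```python
find_faces_in_file(
    file_path: str,
    save_annotated: bool = True,
    output_video_path: Optional[str] = None,
    compute_clusters: bool = True
) -> Dict[int, FaceAttributes]
```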
Parameters
- file_path (str) - Path to input video file
- save_annotated (bool) - Whether to save annotated video (default: True)
- output_video_path (Optional[str]) - Path for annotated output video
- compute_clusters (bool) - Whether to compute K-means clustering on embeddings (default: True)
Returns
Dict[int, FaceAttributes] - Dictionary mapping track IDs to face data:
- Each FaceAttributes object contains embeddings and attributes (if recognized)
- Track IDs are persistent across frames
- Embeddings are clustered if compute_clusters=True
Example: Review and Enroll
Analyze video, review faces, and enroll new people:
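A sketch, assuming `tracker` is an already-configured instance; the file name, track IDs, and person names below are placeholders chosen after reviewing the annotated output:

```python
# Analyze the file; returns {track_id: FaceAttributes} and, by default,
# also writes an annotated video for review.
faces = tracker.find_faces_in_file(
    "lobby_footage.mp4",
    output_video_path="lobby_footage_annotated.mp4",
)

# After reviewing the annotated video, label the tracks you want to keep.
# Track IDs 3 and 7 are hypothetical; attributes holds the person name,
# as enroll() requires.
faces[3].attributes = "Alice Smith"
faces[7].attributes = "Bob Jones"

# Enroll only the labeled faces (unlabeled faces would be skipped anyway).
tracker.enroll([faces[3], faces[7]])
```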
find_faces_in_clip()
Analyze video clips from object storage (S3 or local), similar to find_faces_in_file() but for cloud/storage-based clips.
When to use: Use this to review video clips saved by start_face_tracking_pipeline() or stored in S3/object storage. It's particularly useful for reviewing alert clips generated by the automated pipeline, allowing you to verify detections and enroll faces from those incidents.
Signature
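A sketch of the signature, inferred from the parameter and return descriptions below (annotations are assumptions):

```python
find_faces_in_clip(
    clip_object_name: str,
    save_annotated: bool = True,
    compute_clusters: bool = True
) -> Dict[int, FaceAttributes]
```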
Parameters
- clip_object_name (str) - Name of video clip in object storage
- save_annotated (bool) - Whether to save annotated video back to storage (default: True)
- compute_clusters (bool) - Whether to compute K-means clustering on embeddings (default: True)
Returns
- Dict[int, FaceAttributes] - Dictionary mapping track IDs to face data (same as find_faces_in_file())
Requirements
- clip_storage_config must be configured in FaceTrackerConfig
Example: Review Alert Clips
Process unknown person clips from cloud storage:
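A sketch, assuming `tracker` was configured with clip_storage_config; the clip object names are placeholders for clips recorded by the automated pipeline:

```python
# Clip object names recorded by the automated pipeline (placeholders).
clip_names = [
    "alerts/unknown_2024-06-01_101502.mp4",
    "alerts/unknown_2024-06-01_134417.mp4",
]

for name in clip_names:
    # save_annotated defaults to True, so an annotated copy is written back
    # to storage with an _annotated suffix (see note below).
    faces = tracker.find_faces_in_clip(name)
    for track_id, face in faces.items():
        # FaceAttributes carries the embeddings collected for this track;
        # treating embeddings as a sized collection is an assumption here.
        print(f"{name}: track {track_id}, {len(face.embeddings)} embeddings")
```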
Note: Annotated videos are saved with _annotated suffix in the same storage location.
enroll()
Enroll face(s) into the database using embeddings extracted from video analysis.
When to use: Use this after analyzing videos with find_faces_in_file() or find_faces_in_clip() to add new individuals to your database. This approach leverages video footage to capture multiple angles and expressions, creating more robust face profiles than single-image enrollment.
Signature
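A sketch of the signature, inferred from the parameter description below (the return type is not documented here):

```python
enroll(
    face_list: Union[FaceAttributes, List[FaceAttributes]]
)
```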
Parameters
- face_list - Single FaceAttributes object or list of objects to enroll
  - Must have attributes property set (e.g., person name)
  - Must contain embeddings (populated by find_faces_in_file() or find_faces_in_clip())
Workflow
1. Run find_faces_in_file() or find_faces_in_clip() to extract faces
2. Assign attributes (person name) to each face you want to enroll
3. Call enroll() with the face data
Example: Interactive Enrollment
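A sketch of an interactive console workflow, assuming `tracker` is an already-configured instance and that assigning a person name to attributes labels a face for enrollment (per the workflow above); the file name is a placeholder:

```python
# Extract faces from recorded footage (file name is a placeholder).
faces = tracker.find_faces_in_file("visitors.mp4")

to_enroll = []
for track_id, face in faces.items():
    # Ask the operator to name each track found in the annotated video.
    name = input(f"Track {track_id}: person name (blank to skip): ").strip()
    if name:
        face.attributes = name  # label the face; unlabeled faces are skipped
        to_enroll.append(face)

if to_enroll:
    tracker.enroll(to_enroll)
    print(f"Enrolled {len(to_enroll)} people.")
```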
Notes:
- Faces without attributes are skipped with a warning
- Each face can have multiple embeddings (from different frames/angles)
- K-means clustering reduces embeddings to representative samples
- More samples = better accuracy (recommended: 20+ frames)