Methods

Note: Examples in this guide assume you have already configured LicensePlateRecognizerConfig with model specifications. See Configuration Guide for complete setup details.

Methods Overview

| Method | Purpose | Input | Best For |
| --- | --- | --- | --- |
| predict() | Recognize plates in one image | One image | Single photo processing |
| predict_batch() | Recognize plates in multiple images | Image iterator | Video/batch processing |
Performance tip: Use predict_batch() when processing multiple images; pipeline parallelism gives a ~2-3x speedup over repeated predict() calls.

InferenceResults

predict() and predict_batch() return DeGirum InferenceResults objects with a .license_plates property containing detected plates.

Key property: .license_plates - List of LPRResult objects, each with plate_number (text), ocr_score (OCR confidence), detection_score (detection confidence), bbox (bounding box)

See the InferenceResults documentation for standard PySDK methods like image_overlay(), results, etc.


predict()

Recognize license plates in a single image.

Signature

predict(frame: Any) -> degirum.postprocessor.InferenceResults

Parameters

  • frame - Image as numpy array or file path (str)

Returns

  • degirum.postprocessor.InferenceResults object with .license_plates property

How It Works

  1. Detects all license plates in the image

  2. Crops each detected plate region

  3. Runs OCR on each cropped plate

  4. Returns results with the recognized text in the label field

Examples

Recognize from file path:
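The original code sample did not survive extraction, so here is a hedged sketch. StubRecognizer is a stand-in so the snippet runs anywhere; in real code you would construct the recognizer as described in the Configuration Guide (the class name LicensePlateRecognizer and the "car.jpg" path are assumptions).

```python
from types import SimpleNamespace

class StubRecognizer:
    """Stand-in for a configured LicensePlateRecognizer (assumed class name)."""
    def predict(self, frame):
        # Fake one detection with the fields documented above
        plate = SimpleNamespace(plate_number="7ABC123", ocr_score=0.94,
                                detection_score=0.98, bbox=[12, 34, 180, 80])
        return SimpleNamespace(license_plates=[plate])

recognizer = StubRecognizer()           # real code: a configured recognizer
result = recognizer.predict("car.jpg")  # a file path (str) is passed directly
for plate in result.license_plates:
    print(plate.plate_number, plate.ocr_score, plate.detection_score)
```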

Recognize from numpy array:
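A hedged sketch of the numpy-array path; the stub stands in for a configured recognizer so the snippet runs anywhere. In real code the frame would typically come from cv2.imread() or a camera.

```python
from types import SimpleNamespace
import numpy as np

class StubRecognizer:
    """Stand-in for a configured recognizer (see the Configuration Guide)."""
    def predict(self, frame):
        plate = SimpleNamespace(plate_number="7ABC123", ocr_score=0.94,
                                detection_score=0.98, bbox=[12, 34, 180, 80])
        return SimpleNamespace(license_plates=[plate])

frame = np.zeros((480, 640, 3), dtype=np.uint8)  # real code: cv2.imread("car.jpg")
result = StubRecognizer().predict(frame)         # numpy arrays are passed directly
for plate in result.license_plates:
    x1, y1, x2, y2 = plate.bbox
    print(plate.plate_number, (x1, y1, x2, y2))
```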

Using demo image:
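A hedged sketch; "demo.jpg" is a hypothetical placeholder for whatever sample image your installation provides, and the stub stands in for a configured recognizer. It also shows the common pattern of checking for an empty result.

```python
from types import SimpleNamespace

class StubRecognizer:
    """Stand-in for a configured recognizer; returns no detections here."""
    def predict(self, frame):
        return SimpleNamespace(license_plates=[])

# "demo.jpg" is a hypothetical path; substitute your own sample image
result = StubRecognizer().predict("demo.jpg")
if result.license_plates:
    for plate in result.license_plates:
        print(plate.plate_number)
else:
    print("no plates detected")
```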

Best Practices

  • Use clear images - Better lighting and focus improve OCR accuracy

  • Frame the plate - Closer views of plates work better

  • Check confidence scores - Filter results by plate.ocr_score threshold


predict_batch()

Recognize license plates in multiple images efficiently.

Signature

predict_batch(frames: Iterator[Any]) -> Iterator[degirum.postprocessor.InferenceResults]

Parameters

  • frames - Iterator yielding frames as numpy arrays or file paths

Returns

  • Iterator yielding degirum.postprocessor.InferenceResults objects with .license_plates property

How It Works

  1. Processes all images through detection → cropping → OCR pipeline

  2. Pipeline parallelism makes this faster than calling predict() repeatedly

  3. Yields results as they're processed

Examples

Process multiple images:
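A hedged sketch of batch processing over file paths; the stub mimics the documented contract (one yielded result per input frame) so the snippet runs anywhere. The file names are placeholders.

```python
from types import SimpleNamespace

class StubRecognizer:
    """Stand-in that yields one result per frame, like predict_batch()."""
    def predict_batch(self, frames):
        for frame in frames:            # results are yielded as frames finish
            plate = SimpleNamespace(plate_number=f"PLATE-{frame}", ocr_score=0.9)
            yield SimpleNamespace(license_plates=[plate])

paths = ["img1.jpg", "img2.jpg", "img3.jpg"]    # placeholder file names
recognizer = StubRecognizer()
for path, result in zip(paths, recognizer.predict_batch(iter(paths))):
    print(path, [p.plate_number for p in result.license_plates])
```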

Process video frames:
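A hedged sketch of the video pattern: feed predict_batch() a generator so only one frame is in memory at a time. Real code would wrap cv2.VideoCapture; the generator and stub below are stand-ins so the snippet runs anywhere.

```python
from types import SimpleNamespace

def frame_source(n):
    """Yield frames one at a time; real code: ret, frame = cap.read()."""
    for i in range(n):
        yield f"frame-{i}"              # real code yields numpy image arrays

class StubRecognizer:
    """Stand-in that yields one result per frame, like predict_batch()."""
    def predict_batch(self, frames):
        for _ in frames:
            yield SimpleNamespace(license_plates=[])

for i, result in enumerate(StubRecognizer().predict_batch(frame_source(5))):
    print(f"frame {i}: {len(result.license_plates)} plate(s)")
```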

Filter by confidence:
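A hedged sketch of confidence filtering using the documented ocr_score field. The stub returns one confident and one low-quality read; the 0.8 threshold is an example value to tune for your deployment.

```python
from types import SimpleNamespace

class StubRecognizer:
    """Stand-in returning one confident and one low-confidence plate."""
    def predict_batch(self, frames):
        plates = [SimpleNamespace(plate_number="7ABC123", ocr_score=0.96),
                  SimpleNamespace(plate_number="I1O0??", ocr_score=0.41)]
        for _ in frames:
            yield SimpleNamespace(license_plates=plates)

MIN_OCR_SCORE = 0.8                     # example threshold; tune as needed
for result in StubRecognizer().predict_batch(iter(["img1.jpg"])):
    confident = [p for p in result.license_plates
                 if p.ocr_score >= MIN_OCR_SCORE]
    for plate in confident:
        print(plate.plate_number, plate.ocr_score)
```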

Best Practices

  • Use iterators - Pass iterators (not lists) for memory efficiency

  • Pipeline parallelism - predict_batch() is 2-3x faster than multiple predict() calls

  • Process video frames - Ideal for frame-by-frame video analysis

  • Filter results - Apply confidence thresholds to get high-quality detections

Complete Example
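The complete example did not survive extraction; here is a hedged end-to-end sketch combining batch processing and confidence filtering. The stub stands in for a configured recognizer (see the Configuration Guide for real construction); file names and the 0.8 threshold are placeholders.

```python
from types import SimpleNamespace

class StubRecognizer:
    """Stand-in for a configured recognizer; yields one plate per frame."""
    def predict_batch(self, frames):
        for frame in frames:
            plate = SimpleNamespace(plate_number="7ABC123", ocr_score=0.93,
                                    detection_score=0.97, bbox=[12, 34, 180, 80])
            yield SimpleNamespace(license_plates=[plate])

def recognize_all(recognizer, paths, min_score=0.8):
    """Run the batch pipeline and keep only confident reads."""
    found = []
    for path, result in zip(paths, recognizer.predict_batch(iter(paths))):
        for plate in result.license_plates:
            if plate.ocr_score >= min_score:
                found.append((path, plate.plate_number, plate.ocr_score))
    return found

for path, number, score in recognize_all(StubRecognizer(), ["a.jpg", "b.jpg"]):
    print(path, number, round(score, 2))
```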
