Configuration

LicensePlateRecognizerConfig Anatomy

LicensePlateRecognizer is configured entirely through a LicensePlateRecognizerConfig object with these components:

import degirum_vehicle

config = degirum_vehicle.LicensePlateRecognizerConfig(
    license_plate_detection_model_spec=detection_spec,  # 1. Detection model
    license_plate_ocr_model_spec=ocr_spec,             # 2. OCR model
)

1. License Plate Detection Model Spec

Specifies which model detects license plate regions and their bounding boxes.

Default: TFLITE/CPU model on @cloud inference host

detection_spec = degirum_vehicle.get_license_plate_detection_model_spec(
    device_type="HAILORT/HAILO8",
    inference_host_address="@cloud"
)

2. License Plate OCR Model Spec

Specifies which model extracts text from detected license plate regions.

Default: TFLITE/CPU model on @cloud inference host
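By analogy with the detection helper shown above, the OCR model can be overridden the same way. The helper name `get_license_plate_ocr_model_spec` is assumed here to mirror the detection helper; check the package reference if it differs:

```python
import degirum_vehicle

# Assumed counterpart of get_license_plate_detection_model_spec
ocr_spec = degirum_vehicle.get_license_plate_ocr_model_spec(
    device_type="HAILORT/HAILO8",
    inference_host_address="@cloud"
)
```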

Model Specs Explained

A ModelSpec tells degirum-vehicle which model to load and where to run it.

Configuration Philosophy: You can configure models by explicitly specifying model specs (Options 2 & 3) or use the convenience method (Option 1) that automatically selects the right models for your hardware. Additionally, you can initialize from YAML files (Option 4) for production deployments.

You have four options to initialize the configuration:

Option 1: Automatic Model Selection

The easiest approach: just specify your hardware and inference location, and the model registry automatically selects the best detection and OCR models for you:
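A minimal sketch of this approach. The `for_hardware` classmethod name is an assumption for illustration only; the actual convenience entry point may be named differently, so consult the package reference:

```python
import degirum_vehicle

# Hypothetical convenience constructor: one hardware/host choice
# applied to both the detection and OCR models.
config = degirum_vehicle.LicensePlateRecognizerConfig.for_hardware(
    device_type="HAILORT/HAILO8",
    inference_host_address="@cloud",
)
```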

When to use: Most use cases - fastest way to get started without worrying about individual model selection.

Option 2: Use Model Registry Helper Functions

For more control, specify models individually using the registry helper functions:
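A sketch using the detection helper shown earlier in this page; the OCR helper name is assumed to mirror it:

```python
import degirum_vehicle

# Pick hardware and host per model via the registry helpers
detection_spec = degirum_vehicle.get_license_plate_detection_model_spec(
    device_type="HAILORT/HAILO8",
    inference_host_address="@local",
)
# Assumed OCR counterpart of the detection helper
ocr_spec = degirum_vehicle.get_license_plate_ocr_model_spec(
    device_type="TFLITE/CPU",
    inference_host_address="@local",
)

config = degirum_vehicle.LicensePlateRecognizerConfig(
    license_plate_detection_model_spec=detection_spec,
    license_plate_ocr_model_spec=ocr_spec,
)
```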

Parameters:

  • device_type - Hardware accelerator (see supported hardware below)

  • inference_host_address - Inference location: @cloud, @local, or AI server address

Supported Hardware:

See the complete list of supported hardware platforms.

When to use: When you need different hardware/hosts for detection vs OCR models, or want explicit control over model selection.

Option 3: Bring Your Own Models

For complete customization (using models outside the registry), create custom ModelSpec objects directly. See the ModelSpec documentation for details.
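A minimal sketch, assuming `ModelSpec` accepts a model name, a model zoo URL, and an inference host. The field names and import location below are illustrative assumptions; see the ModelSpec documentation for the exact signature:

```python
import degirum_vehicle

# Illustrative field names only -- consult the ModelSpec documentation
custom_detection_spec = degirum_vehicle.ModelSpec(
    model_name="my_custom_plate_detector",  # your own trained model
    zoo_url="https://hub.degirum.com/my_org/my_zoo",  # placeholder private zoo
    inference_host_address="@local",
)

config = degirum_vehicle.LicensePlateRecognizerConfig(
    license_plate_detection_model_spec=custom_detection_spec,
    # Keep the registry default for OCR (assumed no-arg helper)
    license_plate_ocr_model_spec=degirum_vehicle.get_license_plate_ocr_model_spec(),
)
```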

When to use: Custom-trained models, private model zoos, or models not in the official degirum-vehicle registry.


YAML Configuration (Option 4)

For production deployments and version-controlled configurations, initialize from YAML files:

Load from YAML:
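A sketch of loading, assuming the config exposes a YAML loader classmethod; the `from_yaml` name is an assumption, and the YAML layout is assumed to mirror the constructor arguments:

```python
import degirum_vehicle

# Assumed loader classmethod; the YAML keys are assumed to mirror the
# constructor arguments (see lpr_recognition.yaml for the real layout)
config = degirum_vehicle.LicensePlateRecognizerConfig.from_yaml("lpr_config.yaml")
```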

When to use: Production environments, CI/CD pipelines, or when you need to version-control and share configurations across teams.

Example YAML: See lpr_recognition.yaml for a complete configuration.


Configuration Examples

Basic - Default Configuration

Equivalent to:
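The default construction and its explicit equivalent might look like this, assuming the spec arguments are optional (the defaults listed below suggest so) and that the OCR helper name mirrors the detection helper:

```python
import degirum_vehicle

# Default construction (assumes the spec arguments are optional)...
config = degirum_vehicle.LicensePlateRecognizerConfig()

# ...is equivalent to spelling out the default specs explicitly:
config = degirum_vehicle.LicensePlateRecognizerConfig(
    license_plate_detection_model_spec=degirum_vehicle.get_license_plate_detection_model_spec(
        device_type="TFLITE/CPU",
        inference_host_address="@cloud",
    ),
    # Assumed OCR counterpart of the detection helper
    license_plate_ocr_model_spec=degirum_vehicle.get_license_plate_ocr_model_spec(
        device_type="TFLITE/CPU",
        inference_host_address="@cloud",
    ),
)
```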

Defaults:

  • Hardware: TFLITE/CPU

  • Inference: @cloud

Cloud Experimentation

Try different hardware without local setup:
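For example, switching `device_type` while keeping `@cloud` inference, so no local accelerator is needed (the OCR helper name is assumed to mirror the detection helper):

```python
import degirum_vehicle

# Evaluate Hailo-8 acceleration from the cloud -- no local device required
detection_spec = degirum_vehicle.get_license_plate_detection_model_spec(
    device_type="HAILORT/HAILO8",
    inference_host_address="@cloud",
)
ocr_spec = degirum_vehicle.get_license_plate_ocr_model_spec(  # assumed OCR counterpart
    device_type="HAILORT/HAILO8",
    inference_host_address="@cloud",
)
config = degirum_vehicle.LicensePlateRecognizerConfig(
    license_plate_detection_model_spec=detection_spec,
    license_plate_ocr_model_spec=ocr_spec,
)
```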

Local Edge Deployment

Run on local Hailo-8 accelerator:
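A sketch of fully local inference on a Hailo-8 device (the OCR helper name is assumed to mirror the detection helper):

```python
import degirum_vehicle

# Run both models on the local Hailo-8 accelerator
detection_spec = degirum_vehicle.get_license_plate_detection_model_spec(
    device_type="HAILORT/HAILO8",
    inference_host_address="@local",
)
ocr_spec = degirum_vehicle.get_license_plate_ocr_model_spec(  # assumed OCR counterpart
    device_type="HAILORT/HAILO8",
    inference_host_address="@local",
)
config = degirum_vehicle.LicensePlateRecognizerConfig(
    license_plate_detection_model_spec=detection_spec,
    license_plate_ocr_model_spec=ocr_spec,
)
```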

Remote Inference Server

Connect to dedicated AI server:
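A sketch pointing both models at a dedicated AI server. The address below is a placeholder, and the OCR helper name is assumed to mirror the detection helper:

```python
import degirum_vehicle

# Placeholder address -- replace with your AI server's hostname or IP
server = "192.168.1.50"

detection_spec = degirum_vehicle.get_license_plate_detection_model_spec(
    device_type="HAILORT/HAILO8",
    inference_host_address=server,
)
ocr_spec = degirum_vehicle.get_license_plate_ocr_model_spec(  # assumed OCR counterpart
    device_type="HAILORT/HAILO8",
    inference_host_address=server,
)
config = degirum_vehicle.LicensePlateRecognizerConfig(
    license_plate_detection_model_spec=detection_spec,
    license_plate_ocr_model_spec=ocr_spec,
)
```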

Mixed Hardware Setup

Use different hardware for detection and OCR:
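For example, running detection on the Hailo-8 accelerator while keeping OCR on the CPU (the OCR helper name is assumed to mirror the detection helper):

```python
import degirum_vehicle

# Detection on the Hailo-8 accelerator...
detection_spec = degirum_vehicle.get_license_plate_detection_model_spec(
    device_type="HAILORT/HAILO8",
    inference_host_address="@local",
)
# ...while OCR stays on the CPU (assumed OCR counterpart of the helper)
ocr_spec = degirum_vehicle.get_license_plate_ocr_model_spec(
    device_type="TFLITE/CPU",
    inference_host_address="@local",
)
config = degirum_vehicle.LicensePlateRecognizerConfig(
    license_plate_detection_model_spec=detection_spec,
    license_plate_ocr_model_spec=ocr_spec,
)
```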
