Saving results

Capture inference outputs as structured data or images so you can reuse them in downstream tools, dashboards, or datasets.

Saving results is part organization, part serialization. This page walks through exporting structured detections, writing overlays to disk, and handling batched outputs. Each section includes its own setup so you can copy and run examples independently.

Save structured detections as JSON

Convert inference data into plain Python dictionaries before serialization; json.dump cannot handle NumPy scalar or array types directly.

Example

from pathlib import Path
import json
from degirum_tools import ModelSpec, remote_assets

model_spec = ModelSpec(
    model_name="yolov8n_coco--640x640_quant_hailort_multidevice_1",
    zoo_url="degirum/hailo",
    inference_host_address="@local",
    model_properties={"device_type": ["HAILORT/HAILO8L", "HAILORT/HAILO8"]},
)
model = model_spec.load_model()

# Run inference on a sample image from degirum_tools remote assets.
result = model(remote_assets.three_persons)

output_dir = Path("saved-results")
output_dir.mkdir(parents=True, exist_ok=True)

# Cast NumPy scalars and arrays to plain Python types so json.dump can serialize them.
def detection_to_dict(det):
    return {
        "label": det.get("label"),
        "score": float(det.get("score", 0)),
        "bbox": [float(x) for x in det.get("bbox", [])],
        "category_id": det.get("category_id"),
    }

json_path = output_dir / "three-persons.json"
with json_path.open("w", encoding="utf-8") as f:
    json.dump([detection_to_dict(det) for det in result.results], f, indent=2)

print(f"Wrote {json_path}")

Example output:

Need to capture metadata for compliance or analytics? Append additional keys (e.g., timestamps or camera IDs) before writing the JSON file.
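For instance, here is a minimal sketch that reuses detection_to_dict, result, and json_path from the example above; the timestamp and camera_id fields are illustrative placeholders, not required keys.

from datetime import datetime, timezone

# Hypothetical metadata added alongside each detection before writing the JSON file.
records = [
    {
        **detection_to_dict(det),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "camera_id": "cam-01",  # illustrative camera identifier
    }
    for det in result.results
]
with json_path.open("w", encoding="utf-8") as f:
    json.dump(records, f, indent=2)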

Export detections as CSV

CSV exports integrate well with spreadsheets and BI tools. Flatten bounding boxes into separate columns for easier filtering.

Example
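A minimal sketch using Python's built-in csv module with the same model setup as the JSON example; the column names (x_min, y_min, x_max, y_max) and the output filename are illustrative choices, not a required schema.

from pathlib import Path
import csv
from degirum_tools import ModelSpec, remote_assets

model_spec = ModelSpec(
    model_name="yolov8n_coco--640x640_quant_hailort_multidevice_1",
    zoo_url="degirum/hailo",
    inference_host_address="@local",
    model_properties={"device_type": ["HAILORT/HAILO8L", "HAILORT/HAILO8"]},
)
model = model_spec.load_model()

result = model(remote_assets.three_persons)

output_dir = Path("saved-results")
output_dir.mkdir(parents=True, exist_ok=True)

csv_path = output_dir / "three-persons.csv"
with csv_path.open("w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    # Flatten each bounding box into separate columns so rows filter cleanly in spreadsheets.
    writer.writerow(["label", "score", "x_min", "y_min", "x_max", "y_max"])
    for det in result.results:
        bbox = [float(x) for x in det.get("bbox", [0, 0, 0, 0])]
        writer.writerow([det.get("label"), float(det.get("score", 0)), *bbox])

print(f"Wrote {csv_path}")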

Example output:

Save overlay images

Use OpenCV to write the annotated overlay (result.image_overlay) to a PNG you can share or archive. Swap in result.image instead to capture the pre-annotation frame.

Example
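A minimal sketch assuming the OpenCV image backend, so the annotated frame comes back as a BGR NumPy array that cv2.imwrite can write directly; the output filename is illustrative.

from pathlib import Path
import cv2
from degirum_tools import ModelSpec, remote_assets

model_spec = ModelSpec(
    model_name="yolov8n_coco--640x640_quant_hailort_multidevice_1",
    zoo_url="degirum/hailo",
    inference_host_address="@local",
    model_properties={"device_type": ["HAILORT/HAILO8L", "HAILORT/HAILO8"]},
)
model = model_spec.load_model()

result = model(remote_assets.three_persons)

output_dir = Path("saved-results")
output_dir.mkdir(parents=True, exist_ok=True)

overlay_path = output_dir / "three-persons-overlay.png"
# result.image_overlay is the annotated frame; with the OpenCV backend it is a BGR NumPy array.
cv2.imwrite(str(overlay_path), result.image_overlay)

print(f"Wrote {overlay_path}")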

Example output:

Guard against result.image_model being None before attempting to save it—some models do not expose the model-ready tensor.
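For example, a short sketch continuing the overlay example above, assuming the model-ready frame is exposed as an OpenCV-compatible array when present; the output filename is illustrative.

model_input_path = output_dir / "three-persons-model-input.png"
if result.image_model is not None:
    # Present only when the model exposes the model-ready (preprocessed) frame.
    cv2.imwrite(str(model_input_path), result.image_model)
else:
    print("No model-ready frame available for this model")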

Batch saves with predict_dir

predict_dir yields (image_path, inference_results) tuples. Use the filenames to generate deterministic output names and reuse the JSON helper from above.

Example
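A minimal sketch following the tuple form described above; the "input-images" directory is an illustrative local folder of images, and predict_dir is assumed to be callable on the loaded model.

from pathlib import Path
import json
from degirum_tools import ModelSpec

model_spec = ModelSpec(
    model_name="yolov8n_coco--640x640_quant_hailort_multidevice_1",
    zoo_url="degirum/hailo",
    inference_host_address="@local",
    model_properties={"device_type": ["HAILORT/HAILO8L", "HAILORT/HAILO8"]},
)
model = model_spec.load_model()

output_dir = Path("saved-results")
output_dir.mkdir(parents=True, exist_ok=True)

# Cast NumPy scalars and arrays to plain Python types so json.dump can serialize them.
def detection_to_dict(det):
    return {
        "label": det.get("label"),
        "score": float(det.get("score", 0)),
        "bbox": [float(x) for x in det.get("bbox", [])],
        "category_id": det.get("category_id"),
    }

# "input-images" is an illustrative folder of images; predict_dir is assumed to yield
# (image_path, inference_results) tuples as described above.
for image_path, result in model.predict_dir("input-images"):
    # Derive a deterministic output name from the source filename.
    json_path = output_dir / f"{Path(image_path).stem}.json"
    with json_path.open("w", encoding="utf-8") as f:
        json.dump([detection_to_dict(det) for det in result.results], f, indent=2)
    print(f"Wrote {json_path}")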

Example output:
