Inference results

Understand the structure and purpose of the InferenceResults object returned by model inference. Learn how each field supports visualization, inspection, saving, or real-time streaming.


Every call to model(...) returns an InferenceResults object. The same object type is yielded when you iterate over predict_batch, predict_dir, or predict_stream. Understanding its fields helps you pick the right next step: inspecting detections, visualizing them, exporting structured data, or streaming results elsewhere.
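To make those call shapes concrete, here is a minimal sketch. The `SketchModel` class and its result dictionaries are hypothetical stand-ins, not the real SDK classes; they only illustrate that a single call returns one result object while batch-style calls yield one such object per input frame.

```python
# Hypothetical sketch (not the real SDK classes) of the call shapes described
# above: calling the model on one frame returns a single result object, while
# predict_batch-style calls lazily yield one result per input frame.
class SketchModel:
    def __call__(self, frame):
        # Stand-in for a real InferenceResults object.
        return {"frame": frame, "results": []}

    def predict_batch(self, frames):
        for frame in frames:
            yield self(frame)  # same result shape as a single call

model = SketchModel()
single = model("frame-0")                                  # one result
batch = list(model.predict_batch(["frame-1", "frame-2"]))  # one per frame
print(single["frame"], len(batch))
```

Because batch and stream calls yield the same object type as a single call, any code you write to inspect one result works unchanged inside an iteration loop.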

Here's what you typically get when you run dir(result):

['image', 'image_model', 'image_overlay', 'info', 'results', 'timing']
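As a rough illustration of that shape, the stand-in dataclass below (hypothetical, not the real InferenceResults class) carries the same six public fields, so filtering `dir()` for non-underscore names reproduces the listing above:

```python
from dataclasses import dataclass, field
from typing import Any

@dataclass
class InferenceResultsSketch:
    """Hypothetical stand-in mirroring the six public fields listed above."""
    image: Any = None            # original input frame
    image_model: Any = None      # model-ready tensor, when exposed
    image_overlay: Any = None    # annotated copy of the input
    info: dict = field(default_factory=dict)     # user-attached metadata
    results: list = field(default_factory=list)  # structured predictions
    timing: dict = field(default_factory=dict)   # per-stage latency metrics

r = InferenceResultsSketch(results=[{"label": "cat", "score": 0.91}])
fields = sorted(a for a in dir(r) if not a.startswith("_"))
print(fields)  # → ['image', 'image_model', 'image_overlay', 'info', 'results', 'timing']
```

The same underscore filter is a handy way to list the public surface of any unfamiliar result object interactively.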

Field-by-field guide

  • results: structured predictions (labels, scores, boxes, etc.) → Inspecting results

  • image, image_model, image_overlay: the input frame, the model-ready tensor (when exposed), and the annotated overlay → Visualizing results

  • info: optional metadata you attach to frames → Inspecting results

  • timing: per-stage latency metrics → Inspecting results

Common next steps

  • Save to disk: structured exports and overlays → Saving results

  • Stream in real time: iterate and publish results continuously → Streaming results
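The streaming pattern boils down to a loop that publishes each result as it arrives. The generator below is a hypothetical stand-in for a streaming predict call (the function name and result dictionaries are illustrative, not the SDK's API); the loop body is where you would hand each result to a queue, socket, or file.

```python
import json

def sketch_predict_stream(frames):
    """Hypothetical stand-in for a streaming predict call: yields one
    result per frame as soon as it is available."""
    for i, frame in enumerate(frames):
        yield {"frame_index": i, "results": [{"label": "person", "score": 0.88}]}

published = []
for result in sketch_predict_stream(["f0", "f1", "f2"]):
    # In a real pipeline you would publish each message to a queue or
    # socket here; this sketch just serializes to JSON and collects it.
    published.append(json.dumps(result))
print(len(published))  # one message per input frame
```

Because the generator yields lazily, publishing keeps pace with inference instead of waiting for the whole batch to finish.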

Pick the path that fits your workflow—or follow each guide to get the full picture of what InferenceResults can do.
