Inspecting results
Understand the structure of PySDK inference results so you can inspect labels, scores, and metadata before visualizing, saving, or streaming them.
Estimated read time: 3 minutes
Every inference call returns an InferenceResults object. This page shows how to explore result.results, check metadata, and prepare the data for downstream logic such as filtering or aggregation. Each example below loads the model inline so you can copy and run sections independently.
Check the overall structure
InferenceResults exposes high-level fields as attributes. Use dir(result) to list them, or access result.results, result.info, and result.timing directly.
Example
from degirum_tools import ModelSpec, remote_assets
model_spec = ModelSpec(
    model_name="yolov8n_coco--640x640_quant_axelera_metis_1",
    zoo_url="degirum/axelera",
    inference_host_address="@local",
    model_properties={"device_type": ["AXELERA/METIS"]},
)
model = model_spec.load_model()
result = model(remote_assets.three_persons)
print([attr for attr in dir(result) if not attr.startswith("_")])
print(type(result.results), len(result.results))
Example output:
['image', 'image_model', 'image_overlay', 'info', 'results', 'timing']
<class 'list'> 3
Detection models return a list of dictionaries with keys such as bbox, label, and score. Classification models return ranked predictions; segmentation models can include masks. The exact schema depends on the postprocessor configured in your ModelSpec.
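To see exactly which keys your postprocessor produces, you can dump a single entry as JSON. This is a minimal sketch, assuming the result object from the example above; default=str is a guard for any values that are not natively JSON-serializable.
import json

# Pretty-print the first entry to reveal the keys the postprocessor emits.
print(json.dumps(result.results[0], indent=2, default=str))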
Inspect detection outputs
Loop through result.results to extract bounding boxes and confidence scores. Convert floating-point values with float() if you plan to serialize or log them.
Example
from degirum_tools import ModelSpec, remote_assets
model_spec = ModelSpec(
    model_name="yolov8n_coco--640x640_quant_axelera_metis_1",
    zoo_url="degirum/axelera",
    inference_host_address="@local",
    model_properties={"device_type": ["AXELERA/METIS"]},
)
model = model_spec.load_model()
result = model(remote_assets.three_persons)
for detection in result.results:
    label = detection.get("label", "unknown")
    score = float(detection.get("score", 0))
    bbox = detection.get("bbox")  # [x_min, y_min, x_max, y_max]
    print(f"{label}: {score:.3f} box={bbox}")

people = [det for det in result.results if det.get("label") == "person"]
print(f"Detected {len(people)} person(s)")
Example output:
person: 0.901 box=[0.13, 0.23, 0.35, 0.88]
person: 0.871 box=[0.52, 0.18, 0.75, 0.87]
person: 0.678 box=[0.74, 0.28, 0.96, 0.91]
Detected 3 person(s)
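The same dictionaries work with standard-library tools for downstream filtering and aggregation. This is a minimal sketch, assuming the result object from the example above; the 0.5 confidence cutoff is illustrative, not a PySDK default.
from collections import Counter

CONF_THRESHOLD = 0.5  # illustrative cutoff; tune for your use case

# Keep only confident detections, then count how many of each label remain.
confident = [det for det in result.results if float(det.get("score", 0)) >= CONF_THRESHOLD]
label_counts = Counter(det.get("label", "unknown") for det in confident)
print(label_counts)  # e.g., Counter({'person': 3})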
Read metadata and timing
When you pass metadata to predict_batch or predict_stream, it is surfaced via result.info. Timing data is recorded when you enable profiling in the model properties.
Example
from degirum_tools import ModelSpec, remote_assets
model_spec = ModelSpec(
    model_name="yolov8n_coco--640x640_quant_axelera_metis_1",
    zoo_url="degirum/axelera",
    inference_host_address="@local",
    model_properties={
        "device_type": ["AXELERA/METIS"],
        "postprocess": {"timing": {"enable": True}},
    },
)
model = model_spec.load_model()
batch_results = list(
    model.predict_batch(
        [remote_assets.three_persons],
        # info_list attaches per-frame metadata, surfaced later via result.info
        info_list=[{"camera_id": "demo-cam-01", "frame": 7}],
    )
)
result = batch_results[0]
print("Frame metadata:", result.info)
print("Timing (ms):", result.timing)Frame metadata: {'camera_id': 'demo-cam-01', 'frame': 7}
Timing (ms): {'preprocess': 2.87, 'inference': 34.68, 'postprocess': 4.09}Last updated
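A common downstream step is to combine result.info with the detections into a single serializable record, for example one JSON line per frame. This is a minimal sketch, assuming the batch result from the example above; the record field names are illustrative, not part of PySDK.
import json

# Build one log record per frame: metadata from result.info plus serializable detections.
record = {
    "frame_info": result.info,
    "detections": [
        {
            "label": det.get("label", "unknown"),
            "score": float(det.get("score", 0)),  # float() avoids non-serializable numeric types
            "bbox": [float(v) for v in (det.get("bbox") or [])],
        }
        for det in result.results
    ],
}
print(json.dumps(record))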