# First inference

*Estimated read time: 1 minute*

## Run this first

Don’t worry about the details yet. Just run this to see something work—we’ll explain what it does after.

{% code overflow="wrap" %}

```python
# Quick start: one image → inference → overlay
from degirum_tools import ModelSpec, remote_assets, Display

model_spec = ModelSpec(
    model_name="yolov8n_coco--640x640_quant_axelera_metis_1",
    zoo_url="degirum/axelera",
    inference_host_address="@local",
    model_properties={"device_type": ["AXELERA/METIS"]},
)
model = model_spec.load_model()

img = remote_assets.three_persons
res = model(img)

with Display("AI Camera") as output_display:
    output_display.show_image(res.image_overlay)
```

{% endcode %}

You should see an image with object detections overlaid:

<figure><img src="https://387437463-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2Fw4TFcrlOvSs7ZfsEpUnx%2Fuploads%2Fgit-blob-acb8dc8aa7cd0633eaedfa41128500f6c5625562%2Faxelera-cookbook--three_persons--three-people-at-a-crosswalk-labeled-person.jpg?alt=media" alt="Three people at a crosswalk labeled person."><figcaption><p>Three people at a crosswalk labeled person.</p></figcaption></figure>

## What just happened

* You specified a model by name and source zoo
* You loaded it into a local inference runtime
* You ran inference on a test image
* You displayed the result using `image_overlay`
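Beyond the visual overlay, the result object also exposes the raw detections as a list of dicts via `res.results`. The sketch below shows one way to work with that list; the key names (`label`, `score`, `bbox`) are typical of DeGirum detection results, but verify them against your model's actual output.

{% code overflow="wrap" %}

```python
# Sketch: filtering raw detection results.
# Assumed per-detection shape (typical for DeGirum detection models):
#   {"label": str, "score": float, "bbox": [x1, y1, x2, y2]}

def confident_labels(results, min_score=0.5):
    """Return labels of detections whose confidence meets the threshold."""
    return [d["label"] for d in results if d["score"] >= min_score]

# With a real result you would call: confident_labels(res.results)
# Demonstrated here with mock data:
mock_results = [
    {"label": "person", "score": 0.91, "bbox": [10, 20, 110, 220]},
    {"label": "person", "score": 0.34, "bbox": [300, 40, 380, 200]},
]
print(confident_labels(mock_results))  # ['person']
```

{% endcode %}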
