# Specifying a model

*Estimated read time: 3 minutes*

At the core of PySDK are models. Specifying a model means declaring its artifacts (compiled binaries, config files, labels, and any pre/post-processing resources), where those artifacts are stored, and where inference will run, along with optional runtime properties and metadata.

We package all of this into a reusable object: `ModelSpec`.

## Defining a ModelSpec

Here's an example of how to define and load a model:

{% code overflow="wrap" %}

```python
from degirum_tools import ModelSpec

# Example ModelSpec
model_spec = ModelSpec(
    model_name="yolov8n_coco--640x640_quant_axelera_metis_1",
    zoo_url="degirum/axelera",
    inference_host_address="@local",
    model_properties={"device_type": ["AXELERA/METIS"]},
    # metadata={"accuracy_top1": 0.55, "fps": 120}  # optional
)
# Loading a model
model = model_spec.load_model()
```

{% endcode %}

#### What each field means

* `model_name`: unique model identifier, pointing to the model’s JSON file and its artifacts
* `zoo_url`: where the model’s artifacts are stored (e.g., Model Zoo or local path)
* `inference_host_address`: where inference runs (cloud, local, or AI Server)
* `model_properties`: optional runtime parameters such as device type, thresholds, batch size
* `metadata`: optional dictionary with descriptive info like accuracy, model size, throughput/FPS, or notes

{% stepper %}
{% step %}
**What are the artifacts?** → `model_name`

Each model is defined by a JSON file named after it (`<model_name>.json`). This file:

* defines preprocessing and postprocessing parameters
* references compiled binaries and label files
* may point to optional postprocessing scripts (e.g., `postprocess.py`) or postprocessing settings
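The naming convention is simple: the JSON file is the model name plus a `.json` extension. A plain-Python sketch, using the model name from the example above:

```python
# A model's JSON file is named after the model itself
model_name = "yolov8n_coco--640x640_quant_axelera_metis_1"
json_file = f"{model_name}.json"
print(json_file)  # yolov8n_coco--640x640_quant_axelera_metis_1.json
```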
  {% endstep %}

{% step %}
**Where are the artifacts stored?** → `zoo_url`

Artifacts can be stored in:

* AI Hub: e.g., `degirum/axelera`
* Local folders: e.g., `file:///path/to/degirum-model-zoo/`
* AI server: e.g., `aiserver://`
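The three forms differ in their URL scheme, which a quick standard-library check makes visible (the values below are the illustrative ones from the list above):

```python
from urllib.parse import urlparse

# Distinguish the three storage forms by URL scheme; a bare "org/zoo"
# string with no scheme is the AI Hub shorthand
for url in ["degirum/axelera", "file:///path/to/degirum-model-zoo/", "aiserver://"]:
    scheme = urlparse(url).scheme or "(no scheme: AI Hub org/zoo shorthand)"
    print(f"{url} -> {scheme}")
```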
  {% endstep %}

{% step %}
**Where will the model run?** → `inference_host_address`

The inference engine can run in different environments:

* `@cloud`: inference runs on AI Hub
* `@local`: inference runs on your machine
* `localhost`: local AI Server
* `<server-ip>:<port>`: remote AI Server
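The mapping from address form to execution environment can be sketched with a small helper. This is illustrative only, not a PySDK API; PySDK interprets these strings internally, and the IP and port below are made up:

```python
def describe_host(address: str) -> str:
    """Map an inference_host_address value to where inference runs (illustrative)."""
    if address == "@cloud":
        return "cloud (AI Hub)"
    if address == "@local":
        return "this machine"
    if address == "localhost":
        return "local AI Server"
    return "remote AI Server"

print(describe_host("@local"))             # this machine
print(describe_host("192.168.1.10:8778"))  # remote AI Server
```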
  {% endstep %}

{% step %}
**What are the model's properties?** → `model_properties` (optional)

Optional runtime parameters applied when the model is loaded, such as device type, confidence thresholds, or batch size.
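For illustration, here is a properties dictionary combining the `device_type` from the example above with a threshold override. The threshold key is an assumption shown only to illustrate the shape of the dictionary; consult the PySDK documentation for the properties your model and runtime actually support:

```python
# device_type mirrors the ModelSpec example above; the threshold key
# is hypothetical and included only to show a multi-entry dict
model_properties = {
    "device_type": ["AXELERA/METIS"],
    "output_confidence_threshold": 0.4,
}
```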
{% endstep %}

{% step %}
**What metadata is associated with the model?** → `metadata` (optional)

Optional metadata dictionary for free-form model info, including:

* accuracy
* model size
* FPS
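Because `metadata` is free-form, you can use it however you like, for example to choose among candidate models. A sketch with made-up model names and figures:

```python
# Made-up names and numbers, shown only to illustrate ranking candidate
# models by a free-form metadata field
candidates = {
    "model_small": {"accuracy_top1": 0.55, "fps": 120},
    "model_large": {"accuracy_top1": 0.61, "fps": 80},
}
fastest = max(candidates, key=lambda name: candidates[name]["fps"])
print(fastest)  # model_small
```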
  {% endstep %}
  {% endstepper %}

See the [PySDK Core Concepts guide](https://docs.degirum.com/pysdk/user-guide-pysdk/core-concepts) for details on:

* the model JSON structure
* postprocessing files
* supported inference and storage combinations
