# Class filtering

*Estimated read time: 4 minutes*

Models are often trained on many labels (e.g., COCO’s 80), but your application may only need a subset.

Use `model.label_dictionary` to see exactly which labels a model predicts (for detection, classification, or segmentation). Then, optionally restrict outputs to the classes you care about using `output_class_set`—reducing clutter and simplifying downstream logic.

## Inspect labels without filtering

<figure><img src="https://1657109811-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FaQsuJ8iXkszyOgIpNy8w%2Fuploads%2Fgit-blob-b69b8699f9b373378c1eaf2994e3e4e57d0fe1fa%2Fhailo-cookbook--bikes--two-people-on-a-park-bench-with-bicycles-labeled-person-bicycle-and-bench.jpg?alt=media" alt="Two people sitting on a bench next to two bicycles. The people, bench, and bicycles are labeled."><figcaption><p>Two people sitting on a bench next to two bicycles. The people, bench, and bicycles are labeled.</p></figcaption></figure>

Use this baseline run to confirm label names before filtering. Knowing the exact strings ensures you pass the right values to `output_class_set` later.

### Example

{% code overflow="wrap" %}

```python
from degirum_tools import ModelSpec, Display, remote_assets

# Describe and load the model (no filtering)
model_spec = ModelSpec(
    model_name="yolov8n_coco--640x640_quant_hailort_multidevice_1",
    zoo_url="degirum/hailo",
    inference_host_address="@local",
    model_properties={
        "device_type": ["HAILORT/HAILO8", "HAILORT/HAILO8L"],
    },
)
model = model_spec.load_model()

# Discover available labels
labels = model.label_dictionary
print(f"Model predicts these {len(labels)} labels:", labels)

# Run on an image
image_source = remote_assets.bikes
result = model(image_source)

# Visualize (shows all detected classes)
with Display("All classes (press 'q' to exit)") as output_display:
    output_display.show_image(result.image_overlay)
```

{% endcode %}

Expect console output similar to the snippet below.

### Output

{% code overflow="wrap" %}

```text
Model predicts these 80 labels: {0: 'person', 1: 'bicycle', 2: 'car', 3: 'motorcycle', 4: 'airplane', 5: 'bus', 6: 'train', 7: 'truck', 8: 'boat', 9: 'traffic light', 10: 'fire hydrant', 11: 'stop sign', 12: 'parking meter', 13: 'bench', 14: 'bird', 15: 'cat', 16: 'dog', 17: 'horse', 18: 'sheep', 19: 'cow', 20: 'elephant', 21: 'bear', 22: 'zebra', 23: 'giraffe', 24: 'backpack', 25: 'umbrella', 26: 'handbag', 27: 'tie', 28: 'suitcase', 29: 'frisbee', 30: 'skis', 31: 'snowboard', 32: 'sports ball', 33: 'kite', 34: 'baseball bat', 35: 'baseball glove', 36: 'skateboard', 37: 'surfboard', 38: 'tennis racket', 39: 'bottle', 40: 'wine glass', 41: 'cup', 42: 'fork', 43: 'knife', 44: 'spoon', 45: 'bowl', 46: 'banana', 47: 'apple', 48: 'sandwich', 49: 'orange', 50: 'broccoli', 51: 'carrot', 52: 'hot dog', 53: 'pizza', 54: 'donut', 55: 'cake', 56: 'chair', 57: 'couch', 58: 'potted plant', 59: 'bed', 60: 'dining table', 61: 'toilet', 62: 'tv', 63: 'laptop', 64: 'mouse', 65: 'remote', 66: 'keyboard', 67: 'cell phone', 68: 'microwave', 69: 'oven', 70: 'toaster', 71: 'sink', 72: 'refrigerator', 73: 'book', 74: 'clock', 75: 'vase', 76: 'scissors', 77: 'teddy bear', 78: 'hair drier', 79: 'toothbrush'}
```

{% endcode %}
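Before wiring up a filter, it helps to check that every name you plan to pass actually appears in the model's label dictionary; this catches typos like "bike" vs "bicycle" early. A minimal sketch in plain Python, using an abbreviated stand-in for `model.label_dictionary` (the helper name is illustrative, not part of the SDK):

```python
# Abbreviated stand-in for model.label_dictionary (ID -> label name)
label_dictionary = {0: "person", 1: "bicycle", 13: "bench", 20: "elephant"}

def validate_classes(classes_to_keep, label_dictionary):
    """Return the set of requested names the model cannot predict."""
    known = set(label_dictionary.values())
    return set(classes_to_keep) - known

# "bike" is not a COCO label; "bicycle" is
unknown = validate_classes({"bicycle", "bike"}, label_dictionary)
if unknown:
    print(f"Unknown labels, check spelling: {sorted(unknown)}")
```

Running this check against the full dictionary printed above lets you fail fast instead of silently getting zero detections for a misspelled class.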

## Filter to a subset of classes

<figure><img src="https://1657109811-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FaQsuJ8iXkszyOgIpNy8w%2Fuploads%2Fgit-blob-c25a9648ebc11744d3f12a06f2fe1ed10aba28d4%2Fhailo-cookbook--bikes--two-people-on-a-park-bench-with-only-the-bicycles-labeled-bicycle.jpg?alt=media" alt="Two people sitting on a bench next to two bicycles. The bicycles are labeled."><figcaption><p>Two people sitting on a bench next to two bicycles. The bicycles are labeled.</p></figcaption></figure>

Apply class filtering to focus overlays and downstream logic on just the labels you need. Adjust the set before loading the model so every result reports only the chosen classes.

### Example

{% code overflow="wrap" %}

```python
from degirum_tools import ModelSpec, Display, remote_assets

# Choose the classes you want to keep in the outputs
classes_to_keep = {"bicycle"}  # e.g., {"person", "car"}

# Describe and load the model with class filtering
filtered_spec = ModelSpec(
    model_name="yolov8n_coco--640x640_quant_hailort_multidevice_1",
    zoo_url="degirum/hailo",
    inference_host_address="@local",
    model_properties={
        "device_type": ["HAILORT/HAILO8", "HAILORT/HAILO8L"],
        "output_class_set": classes_to_keep,
    },
)
filtered_model = filtered_spec.load_model()

# Run on an image
image_source = remote_assets.bikes
filtered_result = filtered_model(image_source)

# Visualize (overlay now includes only the filtered classes)
with Display("Filtered classes (press 'q' to exit)") as output_display:
    output_display.show_image(filtered_result.image_overlay)
```

{% endcode %}
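With filtering in place, downstream code can assume every detection belongs to a class you asked for, which makes simple aggregations trivial. For example, tallying detections per label; the dicts below mimic the shape of entries in `filtered_result.results`, but treat the field names as an assumption to verify against your SDK version:

```python
from collections import Counter

# Stand-in for filtered_result.results (field names are an assumption)
detections = [
    {"label": "bicycle", "score": 0.91},
    {"label": "bicycle", "score": 0.88},
]

# Count detections per class label
counts = Counter(d["label"] for d in detections)
print(counts)  # Counter({'bicycle': 2})
```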

{% hint style="info" %}

* `model.label_dictionary` lists the label names the model can predict—use it to confirm exact strings (e.g., "bicycle" vs "bike").
* `output_class_set` accepts a set or list of label strings. Omit it (or set to `None`) to return all classes.
* Works the same for detection and segmentation models: non-selected classes are removed from boxes, masks, and overlays.

{% endhint %}
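Conceptually, the filter is just a set-membership test over each detection's label. The sketch below illustrates that behavior in plain Python (an illustration only, not the SDK's internals):

```python
def apply_class_filter(detections, output_class_set=None):
    """Keep only detections whose label is in output_class_set.

    Passing None returns all detections unchanged, mirroring the
    "no filter" default behavior.
    """
    if output_class_set is None:
        return list(detections)
    keep = set(output_class_set)
    return [d for d in detections if d["label"] in keep]

detections = [
    {"label": "person"},
    {"label": "bicycle"},
    {"label": "bench"},
]
print(apply_class_filter(detections, {"bicycle"}))  # only the bicycle
print(apply_class_filter(detections, None))         # all three
```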
