# Command Line Interface

During PySDK installation, the `degirum` executable [console script](https://setuptools.pypa.io/en/latest/userguide/entry_point.html#console-scripts) is added to the system path. This script provides a command-line interface (CLI) for PySDK management tasks and extends functionality through [entry points](https://setuptools.pypa.io/en/latest/userguide/entry_point.html#entry-points).

The PySDK CLI supports the following commands:

| Command                             | Description                                   |
| ----------------------------------- | --------------------------------------------- |
| [download-zoo](#download-model-zoo) | Download AI models from the cloud model zoo   |
| [install-runtime](#install-runtime) | Install runtime libraries for AI accelerators |
| [server](#server-control)           | Control operation of the AI server            |
| [sys-info](#system-info)            | Get system information dump                   |
| [token](#manage-ai-hub-tokens)      | Manage AI Hub tokens                          |
| [trace](#manage-tracing)            | Manage AI server tracing                      |
| version                             | Print PySDK version                           |

Invoke the console script with one of the commands above, followed by its parameters.

{% code overflow="wrap" %}

```
degirum <command> <arguments>
```

{% endcode %}

## Download Model Zoo

Command: **download-zoo**

Use this command to download ML models from a cloud model zoo specified by URL. The command has the following parameters:

| Parameter        | Description                                                            | Possible Values                                                                                                         | Default                     |
| ---------------- | ---------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------- | --------------------------- |
| `--path`         | Local filesystem path to store models downloaded from a model zoo repo | Valid local directory path                                                                                              | Current directory           |
| `--url`          | Cloud model zoo URL                                                    | `"https://hub.degirum.com/[<zoo URL>]"`                                                                                 | `"https://hub.degirum.com"` |
| `--token`        | Cloud API access token                                                 | Valid token obtained at hub.degirum.com                                                                                 | Empty                       |
| `--model_family` | Model family name filter: model name substring or regular expression   | Any                                                                                                                     | Empty                       |
| `--device`       | Target inference device filter                                         | `ORCA, CPU, GPU, EDGETPU, MYRIAD, DLA, DLA_FALLBACK, NPU, RK3588, RK3566, RK3568, NXP_VX, NXP_ETHOSU, ARMNN, VITIS_NPU` | Empty                       |
| `--runtime`      | Runtime agent type filter                                              | `N2X, TFLITE, TENSORRT, OPENVINO, ONNX, RKNN`                                                                           | Empty                       |
| `--precision`    | Model calculation precision filter                                     | `QUANT, FLOAT`                                                                                                          | None                        |
| `--pruned`       | Model density filter                                                   | `PRUNED, DENSE`                                                                                                         | None                        |

The URL parameter uses the form `"https://hub.degirum.com/<zoo URL>"`, where `<zoo URL>` is `<workspace>/<zoo>`. To find this suffix, go to the AI Hub, select the desired zoo, and click the copy button next to its name.

Filter parameters work the same way as in [degirum.zoo\_manager.ZooManager.list\_models](https://docs.degirum.com/pysdk/api-ref/zoo-manager#degirum.zoo_manager.zoomanager.list_models) and let you download only models that satisfy the filter conditions.

Once models are downloaded into the directory specified by the `--path` parameter, you can use that directory as the model zoo served by the AI server (see the [Server Control](#server-control) section).

**Example:**

Download models for the ORCA device type from the DeGirum Public cloud model zoo into the `./my-zoo` directory:

{% code overflow="wrap" %}

```
degirum download-zoo --path ./my-zoo --token <your cloud API access token> --device ORCA
```

{% endcode %}

Here `<your cloud API access token>` is your cloud API access token, which you can generate on the AI Hub.
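Multiple filters can be combined in a single invocation. As a sketch, the following would download only quantized OpenVINO models for the CPU device (replace the token placeholder as above):

{% code overflow="wrap" %}

```
degirum download-zoo --path ./my-zoo --token <your cloud API access token> --device CPU --runtime OPENVINO --precision QUANT
```

{% endcode %}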

## Install Runtime

Command: **install-runtime**

Use this command to install third-party AI accelerator runtime libraries on Debian-based systems. The command has the following parameters:

| Parameter         | Description                                               | Possible Values                        | Default          |
| ----------------- | --------------------------------------------------------- | -------------------------------------- | ---------------- |
| `--list`          | List available runtimes and their versions                | N/A                                    | Disabled         |
| `plugin_name`     | Runtime name to install                                   | One of the runtimes listed by `--list` | Required         |
| `plugin_versions` | Runtime version(s) to install; omit to install the latest | Specific version number(s) or `ALL`    | Latest available |

You can provide multiple `plugin_versions` to install several versions at once.
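To install every available version of a runtime at once, pass `ALL` as the version (assuming the runtime name appears in the `--list` output):

{% code overflow="wrap" %}

```
degirum install-runtime openvino ALL
```

{% endcode %}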

**Examples:**

List available runtimes and versions:

{% code overflow="wrap" %}

```
degirum install-runtime --list
```

{% endcode %}

Install the latest ONNX runtime:

{% code overflow="wrap" %}

```
degirum install-runtime onnx
```

{% endcode %}

Install a specific version of OpenVINO runtime:

{% code overflow="wrap" %}

```
degirum install-runtime openvino 2025.3.0
```

{% endcode %}

## Server Control

Command: **server**

Use this command to start the AI server, shut it down, or ask it to rescan its local model zoo.

> You can control only an AI server running on the same host as this command. Remote control is disabled for security reasons.

This command has the following subcommands, which are passed just after the command:

| Sub-command  | Description                               |
| ------------ | ----------------------------------------- |
| `start`      | Start AI server                           |
| `rescan-zoo` | Request AI server to rescan its model zoo |
| `shutdown`   | Request AI server to shut down            |
| `cache-dump` | Dump AI server inference agent cache info |

The command has the following parameters:

| Parameter    | Applicable To | Description                                                                            | Possible Values        | Default           |
| ------------ | ------------- | -------------------------------------------------------------------------------------- | ---------------------- | ----------------- |
| `--zoo`      | `start`       | Local model zoo directory to serve models from                                         | Any valid path         | Current directory |
| `--quiet`    | `start`       | Do not display any output                                                              | N/A                    | Disabled          |
| `--port`     | `start`       | TCP port to bind AI server to                                                          | 1...65535              | 8778              |
| `--protocol` | `start`       | AI server protocol to use                                                              | `asio`, `http`, `both` | `asio`            |

Starting with PySDK 0.10.0, the AI server supports two protocols: `asio` and `http`. `asio` is DeGirum's custom socket-based protocol used in earlier versions. The new `http` protocol relies on REST HTTP and WebSockets, so you can use the AI server from any language that supports these standards. Browser-based JavaScript, for example, requires the `http` protocol because it lacks native socket support.

The `asio` protocol is selected by default. Use `--protocol http` to enable the `http` protocol or `--protocol both` to enable both. When both are enabled, the AI server listens on two consecutive ports: the first for `asio` and the second for `http`.

**Examples:**

Start AI server to serve models from `./my-zoo` directory, bind it to default port, and use `asio` protocol:

{% code overflow="wrap" %}

```
degirum server start --zoo ./my-zoo
```

{% endcode %}

Start AI server to serve models from `./my-zoo` directory, use `asio` protocol on port 12345, and use `http` protocol on port 12346:

{% code overflow="wrap" %}

```
degirum server start --zoo ./my-zoo --port 12345 --protocol both
```

{% endcode %}

## System Info

Command: **sys-info**

This command displays system information for the local host or a remote AI server.

The command has the following parameters:

| Parameter | Description                                                         | Possible Values                      | Default |
| --------- | ------------------------------------------------------------------- | ------------------------------------ | ------- |
| `--host`  | Remote AI server hostname or IP address; omit to query local system | Valid hostname, IP address, or empty | Empty   |

**Example:**

Query system info from remote AI server at IP address `192.168.0.101`:

{% code overflow="wrap" %}

```
degirum sys-info --host 192.168.0.101
```

{% endcode %}

## Manage AI Hub Tokens

Command: **token**

Use this command to manage AI Hub access tokens. Once a token is installed with this command, it is automatically provided to any PySDK call that accepts a token, so you do not need to pass `token` explicitly in your code.

{% hint style="info" %}
The DEGIRUM\_CLOUD\_TOKEN environment variable is not set by this command.
{% endhint %}

> Tokens are stored in a JSON file in your user data directory (`%APPDATA%\DeGirum` on Windows or `~/.local/share/DeGirum` on Linux and macOS). When a token is installed, PySDK uses it automatically.
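With a token installed, PySDK calls that accept a `token` argument can omit it. A minimal sketch, assuming access to the DeGirum Public cloud zoo:

{% code overflow="wrap" %}

```python
import degirum as dg

# The token installed via `degirum token install` is picked up automatically,
# so no token argument is passed here
zoo = dg.connect(dg.CLOUD, "https://hub.degirum.com/degirum/public")
print(zoo.list_models())
```

{% endcode %}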

This command has the following subcommands, which are passed just after the command:

| Sub-command | Description                                                               |
| ----------- | ------------------------------------------------------------------------- |
| `status`    | Show information about the installed token                                |
| `install`   | Save a provided token to local storage                                    |
| `clear`     | Remove the installed token from storage; does not forcibly expire the token |

The command has the following parameters:

| Parameter     | Applicable To   | Description                      | Possible Values                         | Default                   |
| ------------- | --------------- | -------------------------------- | --------------------------------------- | ------------------------- |
| `--token`     | `install`       | Token string to install          | Valid token obtained at hub.degirum.com | Empty                     |
| `--cloud_url` | All subcommands | Cloud server URL to operate with | `https://hub.degirum.com` or custom     | `https://hub.degirum.com` |
| `--local`     | All subcommands | Do not contact the cloud         | N/A                                     | Disabled                  |

**Examples:**

Install an existing token:

{% code overflow="wrap" %}

```
degirum token install --token <your_token>
```

{% endcode %}

Example output:

{% code overflow="wrap" %}

```
Token is successfully installed on your system
```

{% endcode %}

Check the status of the installed token:

{% code overflow="wrap" %}

```
degirum token status
```

{% endcode %}

Example output:

{% code overflow="wrap" %}

```
token: dg_***************************************
$schema: https://hub.degirum.com/schemas/GetTokenInfoOutputBody.json
created_at: 'YYYY-MM-DDTHH:MM:SSZ'
description: Demo token
value: dg_***************************************
expiration: '0001-01-01T00:00:00Z'
user: <user>
space: <workspace>
```

{% endcode %}

Clear the currently installed token:

{% code overflow="wrap" %}

```
degirum token clear
```

{% endcode %}

Example output:

{% code overflow="wrap" %}

```
Token is successfully cleared from your system
```

{% endcode %}

## Manage Tracing

Command: **trace**

Use this command to manage the AI server tracing feature.

> The tracing feature is primarily for debugging and profiling. It is mainly intended for DeGirum customer support and isn't typically used directly by end users.

This command has the following subcommands, which are passed just after the command:

| Sub-command | Description                             |
| ----------- | --------------------------------------- |
| `list`      | List all available trace groups         |
| `configure` | Configure trace levels for trace groups |
| `read`      | Read trace data to file                 |

The command has the following parameters:

| Parameter    | Applicable To   | Description                                                 | Possible Values                                                     | Default     |
| ------------ | --------------- | ----------------------------------------------------------- | ------------------------------------------------------------------- | ----------- |
| `--host`     | All subcommands | Remote AI server hostname or IP address                     | Valid hostname or IP address                                        | `localhost` |
| `--file`     | `read`          | Filename to save trace data into; omit to print to console  | Valid local filename                                                | Empty       |
| `--filesize` | `read`          | Maximum trace data size to read                             | Any positive integer                                                | `10000000`  |
| `--basic`    | `configure`     | Set `Basic` trace level for a given list of trace groups    | One or multiple trace group names as returned by `list` sub-command | Empty       |
| `--detailed` | `configure`     | Set `Detailed` trace level for a given list of trace groups | One or multiple trace group names as returned by `list` sub-command | Empty       |
| `--full`     | `configure`     | Set `Full` trace level for a given list of trace groups     | One or multiple trace group names as returned by `list` sub-command | Empty       |

**Examples:**

Query the AI server at `192.168.0.101` for the list of available trace groups and print it to the console:

{% code overflow="wrap" %}

```
degirum trace list --host 192.168.0.101
```

{% endcode %}

Configure tracing for the AI server on `localhost` by setting trace levels for specific groups:

{% code overflow="wrap" %}

```
degirum trace configure --basic CoreTaskServer --detailed OrcaDMA OrcaRPC --full CoreRuntime
```

{% endcode %}

Read trace data from the AI server on `localhost` and save it to `./my-trace-1.txt`:

{% code overflow="wrap" %}

```
degirum trace read --file ./my-trace-1.txt
```

{% endcode %}


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://docs.degirum.com/pysdk/user-guide-pysdk/command-line-interface.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present in the current page, you need clarification or additional context, or you want to retrieve related documentation sections.
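Such a query can be issued with any HTTP client. For example, a sketch using Python's standard library (the question string is illustrative):

{% code overflow="wrap" %}

```python
import urllib.parse
import urllib.request

BASE = "https://docs.degirum.com/pysdk/user-guide-pysdk/command-line-interface.md"
question = "How do I start the AI server on a custom port?"

# URL-encode the question and append it as the `ask` query parameter
url = BASE + "?" + urllib.parse.urlencode({"ask": question})

# Uncomment to perform the actual request (requires network access):
# answer = urllib.request.urlopen(url).read().decode("utf-8")
print(url)
```

{% endcode %}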
