Command Line Interface
Learn how to use the PySDK command line interface to manage AI models, control your AI server, and streamline model downloads.
During PySDK installation, the degirum executable console script is added to the system path. This script provides a command-line interface (CLI) for PySDK management tasks and extends functionality through entry points.
The PySDK CLI supports the following commands:
download-zoo: Download AI models from the cloud model zoo
install-runtime: Install runtime libraries for AI accelerators
server: Control operation of the AI server
sys-info: Get system information dump
token: Manage AI Hub tokens
trace: Manage AI server tracing
version: Print PySDK version
Invoke the console script with one of the commands above, followed by its parameters.
degirum <command> <arguments>

Download Model Zoo
Command: download-zoo
Use this command to download ML models from a cloud model zoo of your choice, specified by URL. The command has the following parameters:
--path: Local filesystem path to store models downloaded from a model zoo repo. Possible values: any valid local directory path. Default: current directory.
--url: Cloud model zoo URL. Possible values: "https://hub.degirum.com/[<zoo URL>]". Default: "https://hub.degirum.com".
--token: Cloud API access token. Possible values: a valid token obtained at hub.degirum.com. Default: empty.
--model_family: Model family name filter: a model name substring or regular expression. Possible values: any. Default: empty.
--device: Target inference device filter. Possible values: ORCA, CPU, GPU, EDGETPU, MYRIAD, DLA, DLA_FALLBACK, NPU, RK3588, RK3566, RK3568, NXP_VX, NXP_ETHOSU, ARMNN, VITIS_NPU. Default: empty.
--runtime: Runtime agent type filter. Possible values: N2X, TFLITE, TENSORRT, OPENVINO, ONNX, RKNN. Default: empty.
--precision: Model calculation precision filter. Possible values: QUANT, FLOAT. Default: none.
--pruned: Model density filter. Possible values: PRUNED, DENSE. Default: none.
The URL parameter uses the form "https://hub.degirum.com/<zoo URL>", where <zoo URL> is <workspace>/<zoo>. To find this suffix, go to the AI Hub, select the desired zoo, and click the copy button next to its name.
Filter parameters work the same way as in degirum.zoo_manager.ZooManager.list_models and let you download only models that satisfy the filter conditions.
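For illustration, here is a minimal Python sketch of the same filtering done through PySDK. The zoo URL degirum/public and the filter values are assumptions for this example; the exact accepted values are documented for degirum.zoo_manager.ZooManager.list_models:

import degirum as dg

# Connect to a cloud model zoo; the token argument may be omitted if a token
# was installed with the degirum token install command described below.
zoo = dg.connect(dg.CLOUD, "https://hub.degirum.com/degirum/public", token="<your token>")

# Apply the same filters as the --device and --precision CLI parameters
for name in zoo.list_models(device="ORCA", precision="QUANT"):
    print(name)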
Once models are downloaded into the directory specified by the --path parameter, you may use this directory as the model zoo to be served by the AI server (see the Server Control section).
Example:
Download models for the ORCA device type from the DeGirum Public cloud model zoo into the ./my-zoo directory:

degirum download-zoo --path ./my-zoo --token <your cloud API access token> --device ORCA

Here <your cloud API access token> is your cloud API access token, which you can generate on the AI Hub.
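A minimal Python sketch of using the downloaded directory directly, assuming the command above has populated ./my-zoo and that PySDK accepts a local directory path as the zoo URL for local inference:

import degirum as dg

# Use the downloaded directory as a local model zoo and run inference on this host
zoo = dg.connect(dg.LOCAL, "./my-zoo")
print(zoo.list_models())  # should list the models downloaded above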
Install Runtime
Command: install-runtime
Use this command to install third-party AI accelerator runtime libraries on Debian-based systems. The command has the following parameters:
--list: List available runtimes and their versions. Possible values: N/A. Default: disabled.
plugin_name: Runtime name to install. Possible values: one of the runtimes listed by --list. Required.
plugin_versions: Runtime version(s) to install; omit to install the latest. Possible values: specific version number(s) or ALL. Default: latest available.
You can provide multiple plugin_versions to install several versions at once.
Examples:
List available runtimes and versions:
degirum install-runtime --list

Install the latest ONNX runtime:

degirum install-runtime onnx

Install a specific version of the OpenVINO runtime:

degirum install-runtime openvino 2025.3.0

Server Control
Command: server
Use this command to start the AI server, shut it down, or ask it to rescan its local model zoo.
You can control only an AI server running on the same host as this command. Remote control is disabled for security reasons.
This command has the following subcommands, which are passed just after the command:
start: Start AI server
rescan-zoo: Request AI server to rescan its model zoo
shutdown: Request AI server to shut down
cache-dump: Dump AI server inference agent cache info
The command has the following parameters:
--zoo (start subcommand only): Local model zoo directory to serve models from. Possible values: any valid path. Default: current directory.
--quiet (start subcommand only): Do not display any output. Possible values: N/A. Default: disabled.
--port (start subcommand only): TCP port to bind AI server to. Possible values: 1...65535. Default: 8778.
--protocol (start subcommand only): AI server protocol to use. Possible values: asio, http, both. Default: asio.
Starting with PySDK 0.10.0, the AI server supports two protocols: asio and http. asio is DeGirum's custom socket-based protocol used in earlier versions. The new http protocol relies on REST HTTP and WebSockets, so you can use the AI server from any language that supports these standards. Browser-based JavaScript, for example, requires the http protocol because it lacks native socket support.
The asio protocol is selected by default. Use --protocol http to enable the http protocol or --protocol both to enable both. When both are enabled, the AI server listens on two consecutive ports: the first for asio and the second for http.
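A minimal Python sketch of connecting to such a server, assuming it was started on this machine with degirum server start --port 12345 --protocol both; it also assumes PySDK's client speaks the asio protocol (so the first port is the one to use) and that omitting the zoo URL selects the server's local model zoo:

import degirum as dg

# Connect to the asio port; the http port (12346) would serve REST/WebSocket clients
zoo = dg.connect("localhost:12345")
print(zoo.list_models())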
Examples:
Start AI server to serve models from the ./my-zoo directory, bind it to the default port, and use the asio protocol:

degirum server start --zoo ./my-zoo

Start AI server to serve models from the ./my-zoo directory, using the asio protocol on port 12345 and the http protocol on port 12346:

degirum server start --zoo ./my-zoo --port 12345 --protocol both

System Info
Command: sys-info
This command displays system information for the local host or a remote AI server.
The command has the following parameters:
--host: Remote AI server hostname or IP address; omit to query the local system. Possible values: a valid hostname, IP address, or empty. Default: empty.
Example:
Query system info from remote AI server at IP address 192.168.0.101:
degirum sys-info --host 192.168.0.101

Manage AI Hub Tokens
Command: token
Use this command to manage AI Hub access tokens. Once a token is installed with this command, PySDK automatically supplies it to any function call that accepts a token, so you do not need to pass a token explicitly in your PySDK code.
Tokens are stored in a JSON file in your user data directory (%APPDATA%\DeGirum on Windows, ~/.local/share/DeGirum on Linux and macOS).
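For example, here is a minimal Python sketch of a token-less cloud connection, assuming a token was installed with degirum token install; the zoo URL degirum/public is illustrative:

import degirum as dg

# No token argument: PySDK picks up the installed token automatically
zoo = dg.connect(dg.CLOUD, "https://hub.degirum.com/degirum/public")
print(zoo.list_models())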
This command has the following subcommands, which are passed just after the command:
status: Show information about the installed token
install: Save a provided token to local storage
clear: Remove the installed token from storage; this does not forcibly expire the token itself
The command has the following parameters:
--token (install subcommand): Token string to install. Possible values: a valid token obtained at hub.degirum.com. Default: empty.
--cloud_url (all subcommands): Cloud server URL to operate with. Possible values: https://hub.degirum.com or a custom URL. Default: https://hub.degirum.com.
--local (all subcommands): Do not contact the cloud. Possible values: N/A. Default: disabled.
Examples:
Install an existing token:
degirum token install --token <your_token>

Example output:

Token is successfully installed on your system

Check the status of the installed token:

degirum token status

Example output:

token: dg_***************************************
$schema: https://hub.degirum.com/schemas/GetTokenInfoOutputBody.json
created_at: 'YYYY-MM-DDTHH:MM:SSZ'
description: Demo token
value: dg_***************************************
expiration: '0001-01-01T00:00:00Z'
user: <user>
space: <workspace>

Clear the currently installed token:

degirum token clear

Example output:

Token is successfully cleared from your system

Manage Tracing
Command: trace
Use this command to manage the AI server tracing feature.
The tracing feature is intended for debugging and profiling. It is aimed primarily at DeGirum customer support and is not typically used directly by end users.
This command has the following subcommands, which are passed just after the command:
list: List all available trace groups
configure: Configure trace levels for trace groups
read: Read trace data to a file
The command has the following parameters:
--host (all subcommands): Remote AI server hostname or IP address. Possible values: a valid hostname or IP address. Default: localhost.
--file (read subcommand): Filename to save trace data into; omit to print to console. Possible values: a valid local filename. Default: empty.
--filesize (read subcommand): Maximum trace data size to read. Possible values: any integer number. Default: 10000000.
--basic (configure subcommand): Set Basic trace level for a given list of trace groups. Possible values: one or more trace group names as returned by the list subcommand. Default: empty.
--detailed (configure subcommand): Set Detailed trace level for a given list of trace groups. Possible values: one or more trace group names as returned by the list subcommand. Default: empty.
--full (configure subcommand): Set Full trace level for a given list of trace groups. Possible values: one or more trace group names as returned by the list subcommand. Default: empty.
Examples:
Query the AI server at 192.168.0.101 for the list of available trace groups and print it to the console:

degirum trace list --host 192.168.0.101

Configure tracing for the AI server on localhost by setting trace levels for specific groups:

degirum trace configure --basic CoreTaskServer --detailed OrcaDMA OrcaRPC --full CoreRuntime

Read trace data from the AI server on localhost and save it to ./my-trace-1.txt:

degirum trace read --file ./my-trace-1.txt