AI Server Configuration
DeGirum AI Server
The DeGirum AI server software stack lets you run AI model inferences initiated by multiple remote clients within your local network. It can be installed on hosts equipped with AI accelerator cards.
The following table lists operating systems, CPU architectures, and AI hardware accelerators supported by the DeGirum AI server software stack:
| Operating System | CPU Architecture | Supported AI Hardware |
|---|---|---|
| Ubuntu Linux 20.04, 22.04 | x86-64 | DeGirum Orca, Google EdgeTPU, Intel® CPUs/GPUs/NPUs, Rockchip (RK3588, RK3568, RK3566), NVIDIA GPUs & SoCs |
| Ubuntu Linux 20.04, 22.04 | ARM AArch64 | DeGirum Orca |
| Raspberry Pi OS (64 bit) | ARM AArch64 | DeGirum Orca |
| Windows 10 | x86-64 | DeGirum Orca (Planned) |
| macOS 12 | x86-64 | DeGirum Orca (Planned) |
| macOS 12 | ARM AArch64 | DeGirum Orca (Planned) |
Running DeGirum AI Server
You have the following three options for running the DeGirum AI server:
- From the terminal directly on a Linux host: See Starting AI Server from Terminal.
- As a Linux service: See Starting AI Server as Linux Service.
- As a pre-built Docker container: See Starting AI Server as Docker Container.
Note
Before starting the AI server, ensure the device driver for the AI accelerator is installed on the system. For Orca driver installation, see the Orca Driver page.
Starting AI Server from Terminal
To run the PySDK AI server from the terminal, perform the following steps:
- Create or select a user name: Choose a user with administrative rights on the host. This guide uses the username `ai-user`, but you can substitute it with any username of your choice.
- Set up a Python virtual environment: For convenience and future maintenance, install PySDK in a Python virtual environment, such as Miniconda. Ensure Python 3.8 and PySDK are installed in the virtual environment.
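As a concrete sketch of this step using the standard `venv` module (the environment path is arbitrary, and `degirum` is the PySDK package name on PyPI):

```shell
# Create and activate a Python virtual environment for ai-user
python3 -m venv /home/ai-user/degirum-env
source /home/ai-user/degirum-env/bin/activate

# Install DeGirum PySDK into the environment
pip install degirum
```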
- Create a directory for the local model zoo:
- Download models to the local model zoo: If you want to host models locally (as opposed to using the public model zoo), download the models from the DeGirum AI Hub Model Zoo to the directory created earlier:
  - `"token string"`: Your cloud API access token from the DeGirum AI Hub Portal.
  - Optional `"cloud zoo URL"`: The URL for the model zoo in the format `"https://hub.degirum.com/<organization>/<zoo>"`. If omitted, the public model zoo is used.
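The two steps above might look like the following, assuming PySDK's `degirum download-zoo` CLI command (verify the exact options with `degirum --help` for your PySDK version); the zoo path, token, and zoo URL are placeholders:

```shell
# Create the local model zoo directory
mkdir -p /home/ai-user/zoo

# Download models from the AI Hub model zoo into it
# (--token and --url values are placeholders to replace)
degirum download-zoo --path /home/ai-user/zoo \
    --token "token string" \
    --url "https://hub.degirum.com/<organization>/<zoo>"
```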
- Start the AI server: Launch the server. The server runs until you press ENTER in the terminal. By default, it listens on TCP port 8778. To specify a different port, use the `--port` argument.
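For example (the `degirum server` subcommand and its flags are assumptions based on the PySDK CLI; check `degirum --help` in your installation):

```shell
# Serve models from the local zoo on the default TCP port (8778)
degirum server --zoo /home/ai-user/zoo

# Or listen on a different TCP port
degirum server --zoo /home/ai-user/zoo --port 8779
```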
Starting AI Server as Linux Service
To automate the server launch so it starts on system boot, configure it as a Linux service:
- Complete the terminal setup steps: Follow all steps in Starting AI Server from Terminal except for launching the server.
- Create a systemd service configuration file: Create a file named `degirum.service` in the `/etc/systemd/system` directory. Use the following template:
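A minimal sketch of such a template, assuming the virtual environment and zoo paths from the earlier steps (the `User`, `ExecStart` path, and `degirum server` invocation are assumptions to adjust for your setup):

```ini
[Unit]
Description=DeGirum AI Server
After=network.target

[Service]
User=ai-user
# Path assumes PySDK was installed into /home/ai-user/degirum-env
ExecStart=/home/ai-user/degirum-env/bin/degirum server --zoo /home/ai-user/zoo
Restart=always

[Install]
WantedBy=multi-user.target
```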
- Start the service:
- Check the service status: If the status is "Active," the service is running.
- Enable the service on startup:
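The start, status-check, and enable steps above map to standard `systemctl` commands; the service name matches the `degirum.service` file created earlier:

```shell
sudo systemctl start degirum     # start the service now
sudo systemctl status degirum    # verify the status shows "Active"
sudo systemctl enable degirum    # launch automatically on system boot
```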
Starting AI Server as Docker Container
To run the AI server as a Docker container, follow these steps:
- Ensure Docker is installed: Refer to the official Docker documentation for installation instructions.
- Prepare the local model zoo: If hosting models locally, create and populate the model zoo directory as described in Starting AI Server from Terminal.
- Run the Docker container:
  - For hosting models locally:

    ```
    docker run --name aiserver -d -p 8778:8778 -v /home/ai-user/zoo:/zoo --privileged degirum/aiserver:latest
    ```

    Replace `/home/ai-user/zoo` with your local model zoo path.
  - For AI Hub-only hosting:
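A sketch of the corresponding command, derived from the local-hosting variant by dropping the volume mount (assumption: without a mounted `/zoo` directory, the server serves models from the AI Hub only):

```shell
docker run --name aiserver -d -p 8778:8778 --privileged degirum/aiserver:latest
```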