Inference with local models
Learn how to run inference using locally stored models on a DeGirum AI Server, whether the server runs on the same machine as the client or remotely over the network.
Estimated read time: 3 minutes
This setup runs inference on local hardware using DeGirum AI Server. The model is stored in a local folder on the machine where the server runs.
Set the server address with inference_host_address="localhost" when the client runs on the same machine as the server, or inference_host_address="<host_ip>:<port>" when it connects over the network.
Client and AI Server on the same host
In this case, both the AI Server and your client application run on the same machine.
First, download the model to a local folder using the degirum download-zoo command:
ZOO="$HOME/degirum_model_zoo"
mkdir -p "$ZOO"
degirum download-zoo \
--path "$ZOO" \
--url https://hub.degirum.com/degirum/hailo \
--model_family yolov8n_coco--640x640_quant_hailort_multidevice_1

Then launch the AI Server with the following command:
degirum server --zoo "$ZOO"

You should see output like:
DeGirum asio server is started at TCP port 8778
Local model zoo is served from '/home/degirum/degirum_model_zoo' directory.
Press Enter to stop the server

The server runs until you press Enter. By default, it listens on TCP port 8778. To change the port, use the --port argument:
degirum server --port <your_port> --zoo "$ZOO"
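Before configuring the client, you can verify that the server is up and serving the zoo. A minimal sketch using PySDK, assuming the defaults above and that omitting zoo_url in dg.connect selects the server's local zoo:

import degirum as dg

# Connect to the AI Server running on this machine (default port 8778)
zoo = dg.connect("localhost")
# List the models served from the local zoo; the downloaded model should appear
print(zoo.list_models())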
Example ModelSpec

This example configures ModelSpec to use the AI Server and load the model from the local zoo:
# Example ModelSpec
model_spec = ModelSpec(
    model_name="yolov8n_coco--640x640_quant_hailort_multidevice_1",
    zoo_url="aiserver://",
    inference_host_address="localhost",
    model_properties={"device_type": ["HAILORT/HAILO8L", "HAILORT/HAILO8"]}
)

zoo_url="aiserver://": load the model from the zoo path given when launching the AI Server.
inference_host_address="localhost": run inference using the Hailo device managed by the local server.
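To run inference with this spec, load the model and call it on an input. A minimal sketch, assuming ModelSpec provides a load_model() helper and that cat.jpg is a test image on the client:

# Load the model through the local AI Server and run one inference
model = model_spec.load_model()  # assumption: ModelSpec exposes load_model()
result = model("cat.jpg")        # hypothetical test image
print(result)                    # detection results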
Client and AI Server on different hosts
To run inference from a separate client machine, first use degirum download-zoo to download the model to the host that will run the AI Server:
ZOO="$HOME/degirum_model_zoo"
mkdir -p "$ZOO"
degirum download-zoo \
--path "$ZOO" \
--url https://hub.degirum.com/degirum/hailo \
--model_family yolov8n_coco--640x640_quant_hailort_multidevice_1

Start the AI Server on the remote host:
degirum server --zoo "$ZOO"

You should see output like:
DeGirum asio server is started at TCP port 8778
Local model zoo is served from '/home/degirum/degirum_model_zoo' directory.
Press Enter to stop the server

As before, the server runs until you press Enter. By default, it listens on TCP port 8778. To change the port, use the --port argument:
degirum server --port <your_port> --zoo "$ZOO"

Example ModelSpec
This time, the client points to a remote host:
# Example ModelSpec
model_spec = ModelSpec(
    model_name="yolov8n_coco--640x640_quant_hailort_multidevice_1",
    zoo_url="aiserver://",
    inference_host_address="<host_ip>:<port>",
    model_properties={"device_type": ["HAILORT/HAILO8L", "HAILORT/HAILO8"]}
)

zoo_url="aiserver://": still points to the AI Server zoo.
inference_host_address="<host_ip>:<port>": runs inference using the Hailo device on the remote AI Server.
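Usage mirrors the local case; only the address changes. A minimal sketch, assuming a hypothetical server at 192.168.1.50 on the default port and the same load_model() helper:

model_spec = ModelSpec(
    model_name="yolov8n_coco--640x640_quant_hailort_multidevice_1",
    zoo_url="aiserver://",
    inference_host_address="192.168.1.50:8778",  # hypothetical host IP, default port
    model_properties={"device_type": ["HAILORT/HAILO8L", "HAILORT/HAILO8"]},
)
model = model_spec.load_model()  # assumption: ModelSpec exposes load_model()
print(model("cat.jpg"))          # the frame is read on the client and sent to the server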