Inference with local models
Learn how to run inference using locally stored models on a DeGirum AI Server, whether the server runs on the same machine as the client or remotely over the network.
Client and AI Server on the same host
Download a model from the DeGirum cloud zoo into a local directory, then start the AI server with that directory as its model zoo:

```shell
ZOO="$HOME/degirum_model_zoo"
mkdir -p "$ZOO"
degirum download-zoo \
    --path "$ZOO" \
    --url https://hub.degirum.com/degirum/hailo \
    --model_family yolov8n_coco--640x640_quant_hailort_multidevice_1

degirum server --zoo "$ZOO"
```

On startup the server prints:

```
DeGirum asio server is started at TCP port 8778
Local model zoo is served from '/home/degirum/degirum_model_zoo' directory.
Press Enter to stop the server
```

Example ModelSpec
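A minimal sketch of the same-host case, using the PySDK client flow (`dg.connect` followed by `load_model`). The image path `cat.jpg` and the helper `run_local` are illustrative assumptions, not part of the original page:

```python
# Hedged sketch: run inference against an AI server on this same machine,
# using the model downloaded into the local zoo above.
HOST = "localhost"  # the AI server listens on TCP port 8778 by default
MODEL_NAME = "yolov8n_coco--640x640_quant_hailort_multidevice_1"

def run_local(image_path: str):
    """Connect to the same-host AI server and run one inference."""
    import degirum as dg  # PySDK client
    zoo = dg.connect(HOST)             # zoo is the directory served via --zoo
    model = zoo.load_model(MODEL_NAME)
    return model(image_path)           # inference result (detections)

if __name__ == "__main__":
    print(run_local("cat.jpg"))  # assumes cat.jpg exists in the working dir
```

Because the server serves the zoo directory itself, the client only needs the host address and the model name.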
Client and AI Server on different hosts
Example ModelSpec
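For the remote case, only the host address changes: the client connects to the server machine by hostname or IP (TCP port 8778, per the startup message above). The address `192.168.1.50` below is a placeholder assumption:

```python
# Hedged sketch: same client flow as the local case, but pointing at an
# AI server elsewhere on the network.
HOST = "192.168.1.50"  # placeholder LAN address of the AI server host
MODEL_NAME = "yolov8n_coco--640x640_quant_hailort_multidevice_1"

def run_remote(image_path: str):
    """Connect to a remote AI server and run one inference."""
    import degirum as dg  # PySDK client
    zoo = dg.connect(HOST)             # remote server, default port 8778
    model = zoo.load_model(MODEL_NAME)
    return model(image_path)

if __name__ == "__main__":
    print(run_remote("cat.jpg"))  # assumes cat.jpg exists in the working dir
```

The model itself stays on the server machine; the client sends the input and receives results over the network.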