Inference with cloud models
Run inference on a local AI server while fetching models from DeGirum’s public cloud zoo—ideal for hybrid setups where compute is local, but model access is remote.
Client and AI Server on the same host
Start the AI server on the host:

```
degirum server
```

The server prints:

```
DeGirum asio server is started at TCP port 8778
Local model zoo is served from '.' directory.
Press Enter to stop the server
```

To listen on a different port, pass it explicitly:

```
degirum server --port <your_port>
```

Example ModelSpec
```python
# Example ModelSpec
model_spec = ModelSpec(
    model_name="yolov8n_coco--640x640_quant_hailort_multidevice_1",
    zoo_url="degirum/hailo",
    inference_host_address="localhost:8778",
    model_properties={"device_type": ["HAILORT/HAILO8L", "HAILORT/HAILO8"]},
)
```

Client and AI Server on different hosts
When the client and AI server run on different hosts, the ModelSpec is identical except that `inference_host_address` points at the server's hostname or IP address instead of `localhost`.
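A minimal sketch of the remote-host case, assuming the server is reachable at a placeholder `<server_hostname>` on the default port 8778 (the hostname is illustrative; substitute your server's actual address):

```python
# Example ModelSpec (remote AI server)
model_spec = ModelSpec(
    model_name="yolov8n_coco--640x640_quant_hailort_multidevice_1",
    zoo_url="degirum/hailo",
    # Remote server address instead of localhost; <server_hostname> is a placeholder
    inference_host_address="<server_hostname>:8778",
    model_properties={"device_type": ["HAILORT/HAILO8L", "HAILORT/HAILO8"]},
)
```

Everything else (the cloud zoo URL, model name, and device selection) stays the same, since models are still fetched from DeGirum's public cloud zoo while inference runs on the remote AI server.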