AI Server

Learn how to use the DeGirum AI Server to efficiently serve local or cloud models on your own hardware.

You should use the DeGirum AI Server when you want to:

  • Use models from the DeGirum AI Hub while running inference on local Axelera accelerators.

  • Serve models from on-device storage for fully offline or LAN-hosted operation.
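
Both scenarios use the same connection pattern from a client. The following is a minimal sketch using the DeGirum PySDK; the `degirum` package, the `dg.connect()`, `load_model()`, and `list_models()` calls, and all addresses, tokens, and model names are assumptions for illustration, not values taken from this page.

```python
import degirum as dg

# Hypothetical values -- replace with your own AI Server address, zoo, and token
AI_SERVER_HOST = "192.168.1.50"     # machine running the DeGirum AI Server
CLOUD_ZOO_URL = "degirum/public"    # an AI Hub (cloud) model zoo
TOKEN = "<your AI Hub token>"

# Cloud-model case: the AI Server pulls the model from the AI Hub zoo
# and runs inference on its local accelerators.
cloud_zoo = dg.connect(AI_SERVER_HOST, CLOUD_ZOO_URL, TOKEN)
model = cloud_zoo.load_model("<model name from the zoo>")
result = model("image.jpg")         # run inference on an image file
print(result)

# Local-model case: omit the zoo URL so the AI Server serves models
# from its own on-device storage (fully offline / LAN-hosted).
local_zoo = dg.connect(AI_SERVER_HOST)
print(local_zoo.list_models())      # list the models the server hosts locally
```

The only difference between the two modes is where the model zoo lives: point the connection at an AI Hub zoo for cloud models, or at the AI Server alone for models stored on the device.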

Continue to Inference with cloud models or Inference with local models to learn more about using the DeGirum AI Server.
