# AI Server

*Estimated read time: 4 minutes*

You should use the DeGirum AI server when you want to:

* Use models from the DeGirum AI Hub while running inference on local Axelera accelerators.
* Serve models from on-device storage for fully offline or LAN-hosted operation.
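The two scenarios above correspond to two ways of pointing a PySDK client at an AI server. The sketch below assumes the DeGirum PySDK's `degirum.connect` API; the host address, zoo path, model name, and token are placeholders, not values from this page:

```python
# Minimal sketch of the two scenarios above, assuming the DeGirum PySDK
# ("pip install degirum"). Host address, zoo path, model name, and token
# are placeholders -- substitute values for your own setup.

def load_hub_model(host="localhost", token="<AI Hub token>"):
    """Scenario 1: the AI server runs inference on its local accelerators
    while the model is fetched from a DeGirum AI Hub model zoo."""
    import degirum as dg  # imported here so the sketch loads without the SDK installed
    zoo = dg.connect(host, "<organization>/<zoo name>", token)
    return zoo.load_model("<model name>")

def load_local_model(host="localhost"):
    """Scenario 2: fully offline -- the AI server serves models from its
    own on-device zoo, so no zoo URL or token is required."""
    import degirum as dg
    zoo = dg.connect(host)
    return zoo.load_model("<model name>")
```

In either case the returned model object is then invoked on input frames, and inference executes on the AI server's accelerators rather than on the client.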

Continue to [Inference with cloud models](https://docs.degirum.com/axelera/advanced-guides/ai-server/ai-server-inference-with-cloud-models) or [Inference with local models](https://docs.degirum.com/axelera/advanced-guides/ai-server/ai-server-inference-with-local-models) to learn more about using a DeGirum AI server.
