# Guides

- [Architecture and Connection Modes](/degirumjs/guides/connection-modes.md): DeGirumJS offers flexible connection modes to suit various AI inference needs, whether you're running models locally on an AI Server, entirely in the cloud, or in a hybrid setup.
- [Batch Processing and Callbacks](/degirumjs/guides/batch-inference.md): Covers `model.predict_batch()`, asynchronous callbacks, and how to manage the inference queue.
- [Device Management for Inference](/degirumjs/guides/device-management.md): Configure and switch between device types when running inference with DeGirumJS.
- [Model Parameters](/degirumjs/guides/model-parameters.md): Overview of model parameters available when loading or configuring models.
- [Performance and Timing Statistics](/degirumjs/guides/timing.md): Interpret performance and latency metrics collected during inference.
- [Preprocessing and Visual Overlays](/degirumjs/guides/pre-post-processing.md): Customize preprocessing and drawing parameters for DeGirumJS models.
- [Result Object Structure](/degirumjs/guides/result-object-structure.md): Understand the structure of prediction results returned by DeGirumJS.
- [WebCodecs Example](/degirumjs/guides/web-codecs-example.md): Examples of using `predict_batch()` with the WebCodecs API.
- [Working with Input and Output Data](/degirumjs/guides/input-output-data.md): Covers the input data formats DeGirumJS accepts for inference, with a detailed breakdown of the output result object structure for each model type.
