Model Compiler
Port and optimize your custom AI models for various hardware platforms using DeGirum’s Model Compiler.
AI Hub Model Compiler
The AI Hub features a largely automated model compiler for exporting AI models to many device types.
Upload a PyTorch checkpoint and let the compiler handle the conversion. Developers can also adjust parameters to optimize performance, select target runtimes and devices, and upload a custom calibration dataset for quantization to suit specific deployment requirements.
Once compiled, a model can be loaded with PySDK or run directly in the browser.
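Loading a compiled model with PySDK looks roughly like the sketch below. The model name, zoo URL, and token are placeholders, not real values; the import guard is only there so the sketch stays self-contained when PySDK is not installed.

```python
# Hedged sketch: loading a compiled model from an AI Hub model zoo with PySDK.
# The model name, zoo URL, and token used here are placeholder assumptions.
try:
    import degirum as dg  # DeGirum PySDK: pip install degirum
except ImportError:
    dg = None  # PySDK not installed; the function below shows intended usage


def load_compiled_model(model_name: str, zoo_url: str, token: str):
    """Connect to a cloud model zoo and load a compiled model by name."""
    if dg is None:
        raise RuntimeError("The degirum package is required to load models")
    # Connect to cloud inference with the zoo that the compiler published to
    zoo = dg.connect(dg.CLOUD, zoo_url, token)
    return zoo.load_model(model_name)


if __name__ == "__main__":
    # Placeholder values -- substitute your own zoo path and access token
    model = load_compiled_model(
        "my_model--640x640_quant_n2x_orca1_1",
        "https://hub.degirum.com/my_org/my_zoo",
        "<your token>",
    )
```

The same model can also be tried out directly in the browser from its model zoo page.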
Using the Model Compiler
Select YOLO Version
Choose a YOLO version from the dropdown menu; your model will be compiled for the version you select.
Upload a Checkpoint File
Click Upload File and select a PyTorch checkpoint in .pt format. The model compiler will use this checkpoint to compile the new model.
Fill Out Details
After uploading your PyTorch checkpoint, enter your model’s details (e.g., name prefix, version, image width, and image height) to identify it in the model zoo.
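These details, together with the runtime and device chosen in later steps, determine how the compiled model is identified in the zoo. The sketch below assumes the naming pattern seen in public DeGirum model zoos; the compiler derives the final name automatically, so this is illustrative only.

```python
# Hedged sketch of the zoo naming pattern (an assumption based on public
# DeGirum model zoos); the compiler composes the real name for you.
def compose_model_name(prefix: str, width: int, height: int,
                       runtime: str, device: str, version: int) -> str:
    """Compose a zoo model name from the details entered in this step."""
    return f"{prefix}--{width}x{height}_quant_{runtime}_{device}_{version}"


print(compose_model_name("yolov8n_coco", 640, 640, "n2x", "orca1", 1))
# -> "yolov8n_coco--640x640_quant_n2x_orca1_1"
```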
Select a Runtime and Device
After filling out model details, select a target runtime and device from the dropdown menus. Your model will be compiled for the selected target.
Select a Model Zoo
After selecting a target runtime and device, choose the model zoo your model will be published to after compilation.
Select Advanced Options (Optional)
Each runtime and device has its own set of advanced options. These include, among others, choosing a calibration dataset for quantization, setting the NMS threshold, selecting an optimization level, and defining the maximum number of classes.
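Conceptually, the advanced options form a small configuration like the sketch below. The option names and values here are illustrative assumptions, not the compiler's actual field names; the UI exposes them as form fields per runtime and device.

```python
# Hypothetical advanced-options configuration; names and values are
# illustrative assumptions, not the compiler's actual field names.
advanced_options = {
    "calibration_dataset": "my_dataset.zip",  # custom dataset used for quantization
    "nms_threshold": 0.6,                     # non-maximum suppression threshold (0..1)
    "optimization_level": 2,                  # higher levels trade compile time for speed
    "max_classes": 80,                        # maximum number of detection classes
}
```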
Start Model Compilation
After completing the previous steps, click Compile. When compilation finishes, your model is published to the selected model zoo.