Model Compiler
Port and optimize your custom AI models for various hardware platforms using DeGirum’s Model Compiler.
The AI Hub features a largely automated model compiler for exporting AI models.
Upload your PyTorch checkpoints and let the compiler do the conversion. Developers may adjust parameters to optimize performance, select target runtimes and devices, and upload a custom dataset for quantization to suit specific deployment requirements.
When a model is compiled, it can be loaded with PySDK or run directly in the browser.
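As a sketch of the PySDK path, a compiled model published to a model zoo can be loaded by name and run on an image. This assumes the `degirum` package is installed and you have an AI Hub token; the zoo URL, token, model name, and image path below are placeholders, not real values.

```python
# Sketch: loading a compiled model from an AI Hub model zoo with PySDK.
# All argument values in the example call are placeholders.

def run_compiled_model(zoo_url: str, token: str, model_name: str, image_path: str):
    """Connect to a cloud model zoo, load a compiled model, and run one inference."""
    import degirum as dg  # DeGirum PySDK: pip install degirum

    # Connect to the AI Hub cloud and open the zoo the compiler published to
    zoo = dg.connect(dg.CLOUD, zoo_url, token)
    model = zoo.load_model(model_name)

    # Run inference on a single image; the result object holds the predictions
    return model(image_path)

# Example call (placeholder values):
# result = run_compiled_model(
#     "https://hub.degirum.com/<organization>/<zoo>",
#     "<your AI Hub token>",
#     "<compiled model name>",
#     "cat.jpg",
# )
```

The function defers the `degirum` import and the network connection until it is called, so the credentials are only needed at inference time.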
Using the Model Compiler
Upload a Checkpoint File
Click Upload File. You'll be asked to upload a PyTorch checkpoint in .pt format. The model compiler will use this checkpoint to compile a new model.
Fill Out Details
After uploading your PyTorch checkpoint, enter your model’s details (e.g., name prefix, version, image width, and image height) to identify it in the model zoo.
Select a Runtime and Device
After filling out the model details, use the dropdown menus to select the target runtime, device, and compilation type (e.g., quantized). Your model will be compiled for that target with the selected types.
Select a Model Zoo
After choosing your targets, select the model zoo your model will be published to after compilation.
Select Advanced Options (Optional)
Each runtime and device has its own set of advanced options. These include, among others, choosing a calibration dataset for quantization, setting the NMS threshold, selecting an optimization level, and defining the maximum number of classes. The optimization levels are lazy, normal, and hard: lazy compiles fastest but yields lower performance than the normal and hard levels.
Start Model Compilation
After completing the previous steps, click Compile. The compilation is added as a task in the task list; when it finishes, your model is published to the selected model zoo.