Browser Inference

Leverage DeGirum’s Browser Inference to run models directly in your web browser – upload inputs, view real-time results, and explore model details with ease.

As soon as you log in to AI Hub, you can run inference on a variety of hardware configurations and models, enabling rapid experimentation with no setup required.

The Browser Inference GUI

When you view a specific model in AI Hub, a GUI appears for running inference directly in your browser. In this Browser Inference GUI, you can upload an input file, run inference on it, view the source code, examine the model JSON, check the labels, read the Model Readme, and more.
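If you want to reproduce the same inference outside the browser, DeGirum PySDK offers the programmatic route. A minimal, hedged sketch is shown below; the cloud zoo URL, token placeholder, and model name are illustrative only, so copy the exact values from the source-code view for your model.

import degirum as dg

# Connect to the DeGirum cloud inference service and a cloud model zoo.
# The zoo URL and token are placeholders; use the values shown on AI Hub.
zoo = dg.connect(
    dg.CLOUD,                                  # run inference on AI Hub cloud hardware
    "https://cs.degirum.com/degirum/public",   # illustrative cloud zoo URL
    token="<your AI Hub token>",
)

# Load a model by its zoo name (illustrative name; pick one from the AI Hub model list).
model = zoo.load_model("yolov8n_relu6_coco--640x640_quant_rknn_rk3588_1")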

Using Browser Inference

1. Upload an Input Image

Click the Input File button on the left side to upload your image. After you select an image, it appears on the left side of the screen.

2. Run Inference

After uploading your image, click Run Inference on the right side to process the input.

3. View the Results

On the right side, the inference results are overlaid onto the original image. In this example, inference completed in 20.57 milliseconds, and two birds were detected with confidence scores of 0.89 and 0.78.
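For reference, the programmatic counterpart of these three steps, continuing the hedged PySDK sketch above, is a single predict call followed by reading the result object. The file name below is a placeholder, and the attribute names reflect PySDK's inference-result interface as commonly documented.

# Steps 1–2 equivalent: run inference on an image file. PySDK model objects
# also accept NumPy arrays and PIL images as input. "birds.jpg" is a
# placeholder file name.
result = model("birds.jpg")

# Step 3 equivalent: inspect the detections and the annotated image.
for det in result.results:              # list of detection dictionaries
    print(det["label"], det["score"])   # class name and confidence score

overlay = result.image_overlay          # input image with boxes and labels drawn
# The overlay's exact type (PIL image or NumPy array) depends on the model's
# configured image backend.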

This Browser Inference example runs on a Rockchip RK3588 using the RKNN runtime to detect objects.
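Model names in the AI Hub zoos encode the target runtime and device, so one simple way to look for other RK3588 builds from PySDK, continuing the sketch above, is a substring match over the zoo listing. The fragment below assumes the usual naming convention in which runtime and device appear in the model name.

# Keep only model names built for the RKNN runtime on RK3588 devices.
# The substring match relies on DeGirum's model-naming convention.
rk3588_models = [name for name in zoo.list_models() if "rknn_rk3588" in name]
print(rk3588_models)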
