Browser Inference

Leverage DeGirum’s Browser Inference to run models directly in your web browser – upload inputs, view real-time results, and explore model details with ease.

As soon as you log in to AI Hub, you can run inference on a variety of models and hardware configurations, enabling rapid experimentation without any setup.

The Browser Inference GUI

When you view a specific model in AI Hub, a GUI appears for running inference directly in your browser. In the Browser Inference GUI, you can upload an input file, run inference on it, view the source code, examine the model JSON, check the labels, read the Model Readme, and more.
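The source code view provides code for running the same model programmatically. As a rough, hypothetical illustration of what such a PySDK call can look like (the model name, zoo path, and token placeholder below are assumptions, not values copied from the GUI):

```python
# Minimal sketch using the DeGirum PySDK (pip install degirum).
# Model name, zoo URL, and token are placeholders; copy the real values
# from the source code view on the model's page.
import degirum as dg

model = dg.load_model(
    model_name="yolov8n_coco--640x640_quant_n2x_orca1_1",  # hypothetical model name
    inference_host_address=dg.CLOUD,                        # run in the AI Hub cloud
    zoo_url="degirum/public",                               # assumed public model zoo
    token="<your AI Hub token>",
)

result = model("car.jpg")       # run inference on a local image
print(result.results)           # list of detections: label, score, bounding box
overlay = result.image_overlay  # annotated image with boxes drawn
```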

Using Browser Inference

1. Upload an Input Image

Click the Input File button on the left side to upload your image. After you select an image, it appears on the left side of the screen.

2. Run Inference

After uploading your image, click Run Inference on the right side to process the input.

3. View the Results

On the right side, the detected car is highlighted with a yellow bounding box and a confidence score of 0.92. In this example, the inference completes in 14.51 milliseconds.

This example runs on a DeGirum Orca1 accelerator with the N2X runtime, and the model identifies the detected object as a car.
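For context, the bounding box, label, and confidence shown in the overlay come from the model's structured results. A single detection entry could look roughly like the following (the field names follow the PySDK detection format as an assumption, and the coordinates are made up for illustration):

```python
# Hypothetical shape of one entry in result.results for this example;
# field names are assumed, coordinates are illustrative only.
detection = {
    "bbox": [102.4, 57.1, 498.6, 312.9],  # [x1, y1, x2, y2] in pixels (made up)
    "category_id": 2,                     # assumed COCO class index for "car"
    "label": "car",
    "score": 0.92,                        # confidence shown in the GUI
}
```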
