Browser Inference
Leverage DeGirum’s Browser Inference to run models directly in your web browser – upload inputs, view real-time results, and explore model details effortlessly.
As soon as you log into AI Hub, you can perform inferences on various hardware configurations and models, enabling rapid experimentation without any setup required.
When you open a specific model's page in the AI Hub, you are presented with a GUI for running inference directly in your browser.
Upload an Input Image
Click the Input File button on the left side to upload your image. Once the upload completes, the left panel displays your input image.
Running Inference
After uploading your image, click Run Inference on the right side to run the model on your input.
Results Display
On the right side, the detected car is highlighted with a yellow bounding box along with its confidence score (0.92), and the inference duration is reported as 14.51 milliseconds. In this example, the model runs on a DeGirum Orca1 accelerator with the N2X runtime and correctly identifies the object as a car.
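If you later want to reproduce a browser run programmatically, the same cloud-hosted models are also accessible through the DeGirum PySDK. The sketch below is a minimal example, not part of the browser workflow itself; the zoo URL, model name, token, and image path are placeholders you would replace with values from your own AI Hub account and the model's page.

```python
# Minimal sketch: reproducing a browser inference with the DeGirum PySDK.
# The zoo URL, model name, token, and image path below are placeholders --
# copy the exact model name from its AI Hub page and use your own token.
import degirum as dg

# Connect to the DeGirum cloud platform and a cloud model zoo
zoo = dg.connect(
    dg.CLOUD,
    "https://hub.degirum.com/degirum/public",  # placeholder zoo URL
    token="<your AI Hub token>",
)

# Load an object-detection model compiled for Orca1 / N2X (placeholder name)
model = zoo.load_model("yolov8n_relu6_coco--640x640_quant_n2x_orca1_1")

# Run inference on the same image you uploaded in the browser
result = model("car.jpg")

# Each detection carries a label, confidence score, and bounding box
for det in result.results:
    print(det["label"], det["score"], det["bbox"])
```

The printed detections correspond to what the browser GUI draws as overlays, so you can use a sketch like this to compare browser results against a scripted run.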