## AIServerModel
A comprehensive class for handling AI model inference using an AIServer
over WebSocket. Designed to provide a streamlined interface for sending data to the server for inference, receiving
processed results, and displaying or further processing these results as needed.
Features:
- WebSocket Communication: Handles the full lifecycle of a WebSocket connection for real-time data streaming.
- Preprocessing & Postprocessing: Integrates with PreProcess and PostProcess classes to prepare data for the model and visualize results.
- Queue Management: Uses AsyncQueue instances to manage inbound and outbound data flow.
- Concurrency Control: Ensures thread-safe operations through mutex usage.
- Dynamic Configuration: Allows runtime modification of model and overlay parameters.
- Callback Integration: Supports custom callback functions for handling results outside the class.
Kind: global class
- AIServerModel
    - new AIServerModel(options, measureTime, [additionalParams])
    - .predict(imageFile, [info], [bypassPreprocessing]) ⇒ Promise.<Object>
    - .predict_batch(data_source, [bypassPreprocessing])
    - .modelInfo() ⇒ Object
    - .labelDictionary() ⇒ Object
    - .displayResultToCanvas(combinedResult, outputCanvasName, [justResults])
    - .processImageFile(combinedResult) ⇒ Promise.<Blob>
    - .cleanup()
    - .resetTimeStats()
    - .getTimeStats()
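The sketch below shows how these members typically fit together. It is minimal and illustrative only: `zoo` is assumed to be an existing AIServerZoo instance, and `someImage` and `outputCanvas` are placeholder page objects.

```js
// Minimal end-to-end sketch. `zoo` is assumed to be an existing AIServerZoo
// instance; `someImage` and `outputCanvas` are illustrative page objects.
async function runOnce(zoo, someImage, outputCanvas) {
  // Obtain a model from the zoo (do not call the AIServerModel constructor directly).
  // `await` is harmless here even if loadModel returns the model synchronously.
  const model = await zoo.loadModel('some_model_name', {});

  // Send one frame for inference and wait for the combined result.
  const result = await model.predict(someImage);

  // Draw the result overlay onto a canvas for quick inspection.
  await model.displayResultToCanvas(result, outputCanvas);

  // Release the WebSocket connection and internal queues when done.
  model.cleanup();
}
```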
### new AIServerModel(options, measureTime, [additionalParams])
Do not call the constructor directly. Use the `loadModel` method of an AIServerZoo instance to create an AIServerModel.
Param | Type | Default | Description |
---|---|---|---|
options | Object | | Options for initializing the model. |
options.modelName | string | | The name of the model to load. |
options.serverUrl | string | | The URL of the server. |
options.modelParams | Object | | The default model parameters. |
[options.max_q_len] | number | 10 | Maximum queue length. |
[options.callback] | function | | Callback function for handling results. |
[options.labels] | Object | | Label dictionary for the model. |
options.systemDeviceTypes | Array.<string> | | Array of 'RUNTIME/DEVICE' strings supported by the AIServer. |
measureTime | boolean | false | Whether to measure inference time and collect other statistics. |
[additionalParams] | Object | | Additional parameters for the model. |
Example (Usage:)
- Create an instance with the required model details and server URL.
let model = zoo.loadModel('some_model_name', {} );
- Use the `predict` method for inference with individual data items or `predict_batch` for multiple items.
let result = await model.predict(someImage);
for await (let result of model.predict_batch(someDataGeneratorFn)) { ... }
- Access processed results directly or set up a callback function for custom result handling.
- You can display results to a canvas to view drawn overlays.
await model.displayResultToCanvas(result, canvas);
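If result handling should happen outside the await-based flow, a callback can be supplied instead. The sketch below is illustrative: it assumes that the options object passed to `loadModel` is forwarded to the model (so the `callback` option listed above can be set there), and the shape of the result object is not specified here.

```js
// Sketch: callback-based result handling (the option plumbing is an assumption).
let model = zoo.loadModel('some_model_name', {
  callback: (combinedResult) => {
    // Invoked from the WebSocket onmessage path as each result arrives.
    console.log('received result', combinedResult);
  },
});

// Tag each frame via the optional `info` argument so results can be matched
// back to their inputs inside the callback.
model.predict(someImage, 'frame-001');
```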
### aiServerModel.predict(imageFile, [info], [bypassPreprocessing]) ⇒ Promise.<Object>
Predicts the result for a given image.
Kind: instance method of AIServerModel
Returns: Promise.<Object> - The prediction result.
Param | Type | Default | Description |
---|---|---|---|
imageFile | Blob \| File \| string \| HTMLImageElement \| HTMLVideoElement \| HTMLCanvasElement \| ArrayBuffer \| TypedArray \| ImageBitmap | | |
[info] | string | "performance.now()" | Unique frame information provided by the user (such as a frame number). Used for matching results back to input images within the callback. |
[bypassPreprocessing] | boolean | false | Whether to bypass preprocessing. Used to send Blob data directly to the socket without any preprocessing. |
Example
- If a callback is provided: the WebSocket onmessage handler invokes the callback directly when the result arrives.
- If a callback is not provided: the function waits for the resultQ to get a result, then returns it.
let result = await model.predict(someImage);
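The optional arguments can be combined; a small hedged sketch (the frame tags and `preparedBlob` are illustrative, not part of this API):

```js
// Tag the frame so its result can be matched back to this input later.
let tagged = await model.predict(someImage, 'frame-42');

// Send an already-encoded Blob straight to the socket, skipping preprocessing.
let raw = await model.predict(preparedBlob, 'frame-43', true);
```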
### aiServerModel.predict_batch(data_source, [bypassPreprocessing])
Predicts results for a batch of data. Will yield results if a callback is not provided.
Kind: instance method of AIServerModel
Param | Type | Default | Description |
---|---|---|---|
data_source | AsyncIterable | | An async iterable data source. |
[bypassPreprocessing] | boolean | false | Whether to bypass preprocessing. |
Example
The function asynchronously processes results. If a callback is not provided, it will yield results.
for await (let result of model.predict_batch(data_source)) { console.log(result); }
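Any async iterable can serve as the data source, for example the async generator sketched below (illustrative; since the parameter table above specifies an AsyncIterable, the generator is invoked before being passed in):

```js
// Illustrative async generator yielding frames one at a time.
async function* frames(images) {
  for (const img of images) {
    yield img; // each item goes through preprocessing unless bypassed
  }
}

for await (const result of model.predict_batch(frames([img1, img2, img3]))) {
  console.log(result);
}
```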
### aiServerModel.modelInfo() ⇒ Object
Returns a read-only copy of the model parameters.
Kind: instance method of AIServerModel
Returns: Object - The model parameters.
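A short usage sketch (which keys the returned object contains is implementation-defined):

```js
const params = model.modelInfo();   // read-only copy of the model parameters
console.log(Object.keys(params));   // inspect the available parameter names
```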
### aiServerModel.labelDictionary() ⇒ Object
Returns the label dictionary for this AIServerModel instance.
Kind: instance method of AIServerModel
Returns: Object - The label dictionary.
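A sketch of a label lookup; the assumed shape (numeric class IDs mapped to label strings) is illustrative, not guaranteed by this API:

```js
const labels = model.labelDictionary();
// Illustrative lookup, assuming a shape like { 0: 'person', 1: 'car', ... }
console.log(labels[0]);
```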
### aiServerModel.displayResultToCanvas(combinedResult, outputCanvasName, [justResults])
Overlay the result onto the image frame and display it on the canvas.
Kind: instance method of AIServerModel
Param | Type | Default | Description |
---|---|---|---|
combinedResult | Object | | The result object combined with the original image frame. This is received directly from predict or predict_batch. |
outputCanvasName | string \| HTMLCanvasElement | | The canvas to draw the image onto. Either the canvas element or the ID of the canvas element. |
[justResults] | boolean | false | Whether to show only the result overlay without the image frame. |
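A minimal sketch; `result` is assumed to come from `predict`, and the canvas can be passed either as an element or as its ID:

```js
const result = await model.predict(someImage);

// Pass the canvas element directly...
await model.displayResultToCanvas(result, document.getElementById('outputCanvas'));

// ...or pass its ID, drawing only the overlay without the image frame.
await model.displayResultToCanvas(result, 'outputCanvas', true);
```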
### aiServerModel.processImageFile(combinedResult) ⇒ Promise.<Blob>
Processes the original image and draws the results on it, returning a PNG image with the overlaid results.
Kind: instance method of AIServerModel
Returns: Promise.<Blob> - The processed image file as a Blob of a PNG image.
Param | Type | Description |
---|---|---|
combinedResult | Object | The result object combined with the original image frame. |
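A sketch of displaying the returned PNG Blob with standard browser APIs (the `preview` image element is an assumption):

```js
const pngBlob = await model.processImageFile(result);

// Show the annotated PNG in an <img> element via an object URL.
const url = URL.createObjectURL(pngBlob);
document.getElementById('preview').src = url;
// Call URL.revokeObjectURL(url) once the image is no longer needed.
```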
### aiServerModel.cleanup()
Cleans up resources and closes the WebSocket connection.
It follows a destructor-like pattern that is called manually by the user.
Makes sure to close the WebSocket connection, stop all inferences, remove the listeners, clear async queues, and nullify all references.
Call this whenever switching models or when the model instance is no longer needed.
Kind: instance method of AIServerModel
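A sketch of switching models, releasing the old instance first (model names are illustrative):

```js
// Release sockets, queues, and listeners held by the current model...
model.cleanup();
// ...then load the replacement.
model = zoo.loadModel('another_model_name', {});
```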
### aiServerModel.resetTimeStats()
Resets the stats dictionary to an empty dictionary.
Kind: instance method of AIServerModel
### aiServerModel.getTimeStats()
Returns the stats dictionary to the caller.
Kind: instance method of AIServerModel
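A sketch of collecting timing statistics; it assumes the model was created with `measureTime` enabled, and the keys of the returned stats dictionary are implementation-defined:

```js
// Run a few frames, then inspect the accumulated statistics.
for (const img of [img1, img2, img3]) {
  await model.predict(img);
}
console.log(model.getTimeStats()); // contents depend on the implementation

// Clear the counters before starting a new measurement run.
model.resetTimeStats();
```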