Orca
DeGirum® Orca is a flexible, efficient, and affordable AI accelerator IC. Orca gives application developers the ability to create rich, sophisticated, and highly functional products at a power and price point suitable for the edge.
High Performance
Orca's efficient architecture translates into real-world application performance. A single Orca can power applications that run on multiple input streams and require multiple ML models, as sketched below. See Orca Performance Benchmarks.
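As a rough illustration of the multi-stream usage pattern, the sketch below assumes the DeGirum PySDK interface (degirum.connect, zoo.load_model, model.predict_batch); the model name and frame paths are placeholders, not actual zoo entries.

```python
# Sketch: feeding frames from multiple input streams through one Orca.
# Assumes the DeGirum PySDK; model name and frame paths are placeholders.
import itertools
import degirum as dg

zoo = dg.connect(dg.LOCAL)                       # local host with an Orca
model = zoo.load_model("some_detection_model")   # placeholder model name

# Two input streams, represented here as lists of frame file paths.
stream_a = ["cam_a_000.jpg", "cam_a_001.jpg"]
stream_b = ["cam_b_000.jpg", "cam_b_001.jpg"]

# Interleave the streams; predict_batch pipelines preprocessing,
# inference, and postprocessing to keep the accelerator busy.
frames = itertools.chain.from_iterable(zip(stream_a, stream_b))
for result in model.predict_batch(frames):
    print(result.results)
```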
Support for Pruned Models
Orca's ability to process pruned models effectively multiplies its compute and memory-bandwidth resources, allowing larger, more accurate models to run in real time and bringing cloud-like quality to applications at the edge.
Dedicated DRAM
Orca's dedicated DRAM lets applications switch between ML models without time-consuming weight transfers from the host, reducing the model-switching penalty and increasing performance. This is particularly valuable for applications that require frequent model changes, such as image or speech recognition, where different models may be needed to handle varying data sets or specific tasks.
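A minimal sketch of this model-switching pattern, assuming the DeGirum PySDK interface (degirum.connect, zoo.load_model); the model names and frame paths below are placeholders rather than actual zoo entries.

```python
# Sketch: alternating between two models resident on a single Orca.
# Assumes the DeGirum PySDK; model names and inputs are placeholders.
import degirum as dg

zoo = dg.connect(dg.LOCAL)  # local inference host with an Orca installed

# Load two different models. With Orca's dedicated DRAM, both sets of
# weights can stay resident on the accelerator, so alternating between
# the models does not require re-transferring weights from the host.
detector = zoo.load_model("some_detection_model")         # placeholder
classifier = zoo.load_model("some_classification_model")  # placeholder

for frame in ["frame_000.jpg", "frame_001.jpg"]:          # placeholders
    detections = detector(frame)    # run the detection model
    labels = classifier(frame)      # switch to the classification model
    print(detections.results, labels.results)
```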
Flexible Architecture
Orca's flexible architecture supports both int8 and float32 precision formats, letting customers choose the format that best fits their use case and optimize performance, accuracy, and power consumption for their specific requirements.
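A hedged sketch of choosing between precision variants, again assuming the PySDK interface; the model names are illustrative only, not actual zoo entries.

```python
# Sketch: selecting an int8 (quantized) or float32 variant of a network.
# Assumes the DeGirum PySDK; model names are illustrative placeholders.
import degirum as dg

zoo = dg.connect(dg.LOCAL)  # local inference host with an Orca

# int8 variant: lower power and latency at a small accuracy cost.
model_int8 = zoo.load_model("example_net--224x224_quant_orca")

# float32 variant of the same network: highest accuracy, higher cost.
model_fp32 = zoo.load_model("example_net--224x224_float_orca")

result = model_int8("test_image.jpg")   # placeholder input image
print(result.results)
```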