Overview

This page provides an overview of the DeGirum Orca AI accelerator, describing its performance characteristics, support for pruned models, dedicated DRAM feature, and its flexible architecture.

DeGirum® Orca is a flexible, efficient, and affordable AI accelerator IC. Orca gives application developers the ability to create rich, sophisticated, and highly functional products at a power and price point suitable for the edge.

High Performance

Orca's efficient architecture translates into real-world application performance: a single Orca can serve applications that process multiple input streams and require multiple ML models. See Benchmarks.
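
To make this concrete, the sketch below shows two models sharing a single Orca through DeGirum PySDK. It is a minimal illustration, not code from this page: the zoo URL, token, model names, and image files are placeholders to substitute with your own.

```python
# Minimal sketch: two models, one Orca, via DeGirum PySDK (all names are placeholders).
import degirum as dg

# Connect to a model zoo and run inference on the locally installed Orca accelerator.
zoo = dg.connect(dg.LOCAL, "https://hub.degirum.com/degirum/public", token="<your token>")

detector = zoo.load_model("yolo_v5s_coco--512x512_quant_n2x_orca1_1")            # placeholder name
classifier = zoo.load_model("mobilenet_v2_imagenet--224x224_quant_n2x_orca1_1")  # placeholder name

# Two independent input streams handled by the same accelerator.
stream_a = ["street_cam_001.jpg", "street_cam_002.jpg"]   # placeholder frames
stream_b = ["conveyor_001.jpg", "conveyor_002.jpg"]       # placeholder frames
for frame_a, frame_b in zip(stream_a, stream_b):
    detections = detector(frame_a)   # object detection on stream A
    labels = classifier(frame_b)     # classification on stream B
    print(detections.results, labels.results)
```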

Support for Pruned Models

Orca's ability to process pruned models effectively multiplies its compute and memory-bandwidth resources, allowing larger, more accurate models to run in real time and bringing cloud-like quality to applications at the edge.

Dedicated DRAM

Support for dedicated DRAM lets applications switch between ML models without time-consuming weight transfers from the host, reducing the model-switching penalty and increasing performance. This is particularly valuable for applications that change models frequently, such as image or speech recognition pipelines where different models handle different data sets or tasks.
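
As an illustration of model switching, the sketch below keeps two PySDK models loaded and calls whichever one a frame needs; because the weights stay resident in Orca's dedicated DRAM, alternating between them does not require re-transferring weights from the host. Model names and inputs are placeholders, not entries from this page.

```python
# Minimal sketch: per-frame switching between two preloaded models on one Orca.
# Zoo URL, token, model names, and image files are placeholders.
import degirum as dg

zoo = dg.connect(dg.LOCAL, "https://hub.degirum.com/degirum/public", token="<your token>")
plate_detector = zoo.load_model("license_plate_det--640x640_quant_n2x_orca1_1")  # placeholder
plate_reader = zoo.load_model("license_plate_ocr--128x128_quant_n2x_orca1_1")    # placeholder

for frame in ["car_001.jpg", "car_002.jpg", "car_003.jpg"]:   # placeholder frames
    plates = plate_detector(frame)
    if plates.results:                 # switch to the second model only when a plate is found;
        text = plate_reader(frame)     # no host-side weight reload is needed on Orca
        print(text.results)
```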

Flexible Architecture

Orca's flexible architecture supports both int8 and float32 precision formats. Customers can choose the format that best fits their use case, balancing performance, accuracy, and power consumption against their specific requirements.
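
For example, a precision variant can be picked from a zoo listing as sketched below. The "quant" and "float" substrings are an assumed naming convention used only for illustration, and the zoo URL and token are placeholders.

```python
# Minimal sketch: selecting an int8 or float32 model variant from a zoo listing.
# The "quant"/"float" name tags are an assumed convention, not guaranteed by any zoo.
import degirum as dg

zoo = dg.connect(dg.LOCAL, "https://hub.degirum.com/degirum/public", token="<your token>")

names = zoo.list_models()                              # all models visible in this zoo
int8_variants = [n for n in names if "quant" in n]     # assumed int8 tag
float_variants = [n for n in names if "float" in n]    # assumed float32 tag

# Choose the variant that fits your accuracy / performance / power trade-off.
model = zoo.load_model(int8_variants[0] if int8_variants else float_variants[0])
```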

Downloads

  • Orca Performance Benchmarks (PDF, 1 MB)
  • Orca AI Accelerator Flyer (Orca AI Hardware Accelerator ASIC Flyer.pdf)