
ONNX

This page provides step-by-step instructions for installing the ONNX runtime in PySDK.




ONNX Runtime in PySDK

PySDK supports ONNX Runtime version 1.19.0 on Linux, Windows, and macOS.

To install the ONNX runtime, download the 1.19.0 archive for your system from the ONNX Runtime releases page. Do not use the "-training-" archives. Then extract it into the appropriate directory:

  • On Windows, extract the archive to C:\Program Files, C:\Program Files (x86), or C:\ProgramData.

  • On Linux, extract the archive to /usr/local/.

  • On macOS, extract the archive to /usr/local/.

Do not rename the onnxruntime-<os>-<architecture>-1.19.0 directory after you extract it.
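As a sketch, the download-and-extract step can be scripted as follows. The helper names are illustrative, and the URL follows the ONNX Runtime GitHub release naming convention; verify the exact asset name for your OS and architecture on the releases page before relying on it.

```python
import tarfile
import urllib.request


def onnxruntime_asset(os_name: str, arch: str, ver: str = "1.19.0") -> str:
    """Build the release archive name, e.g. onnxruntime-linux-x64-1.19.0."""
    return f"onnxruntime-{os_name}-{arch}-{ver}"


def install_onnxruntime(os_name: str, arch: str, dest: str) -> None:
    """Download a release archive and extract it under dest.

    Linux/macOS assets ship as .tgz; Windows releases use .zip instead,
    so this sketch covers only the .tgz case.
    """
    pkg = onnxruntime_asset(os_name, arch)
    url = ("https://github.com/microsoft/onnxruntime/releases/"
           f"download/v1.19.0/{pkg}.tgz")
    urllib.request.urlretrieve(url, f"{pkg}.tgz")
    with tarfile.open(f"{pkg}.tgz") as tar:
        # Extract in place; do NOT rename the resulting pkg directory.
        tar.extractall(dest)


# Example (needs write permission to the target directory):
# install_onnxruntime("linux", "x64", "/usr/local/")
```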

AMD Ryzen™ AI in PySDK

PySDK supports AMD Ryzen™ AI version 1.2.

To install AMD Ryzen™ AI, follow the installation instructions in the AMD Ryzen™ AI documentation.

The installation includes two components: (1) the NPU driver and (2) the RyzenAI Software MSI installer.

  • Launch the NPU driver installer npu_sw_installer.exe from a terminal running with administrator privileges.

  • Append C:\path\to\miniconda3\Scripts to the Path system environment variable before launching the MSI installer.

  • Restart the terminal (if using VS Code, restart all open windows) after installation.

Run conda activate <ryzen-ai-env-name> to enter the environment created by the installation wizard.

You may optionally set the Ryzen™ AI environment variables; PySDK automatically initializes the defaults for your processor type. For more information, see the notes below.

At this point, you should be ready to run inference on models prepared for the Ryzen NPU from the DeGirum AI Hub Model Zoo.

About environment variables

Ryzen™ AI requires certain environment variables to be set depending on the processor configuration:

  • Phoenix (PHX): AMD Ryzen™ 7940HS, 7840HS, 7640HS, 7840U, 7640U.

  • Hawk (HPT): AMD Ryzen™ 8640U, 8640HS, 8645H, 8840U, 8840HS, 8845H, 8945H.

  • Strix (STX): AMD Ryzen™ AI 9 HX 370, Ryzen™ AI 9 365.
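To make the mapping concrete, here is a small illustrative helper (not part of PySDK) that classifies a CPU model string into one of the configurations above:

```python
# Model lists taken from the configuration table above.
PHX_MODELS = {"7940HS", "7840HS", "7640HS", "7840U", "7640U"}
HPT_MODELS = {"8640U", "8640HS", "8645H", "8840U", "8840HS", "8845H", "8945H"}
STX_MODELS = {"AI 9 HX 370", "AI 9 365"}


def classify_npu_config(cpu_name: str) -> str:
    """Return "PHX", "HPT", or "STX" for a known Ryzen AI CPU name."""
    norm = cpu_name.replace(" ", "")  # tolerate spacing differences
    if any(m.replace(" ", "") in norm for m in STX_MODELS):
        return "STX"
    if any(m in norm for m in PHX_MODELS):
        return "PHX"
    if any(m in norm for m in HPT_MODELS):
        return "HPT"
    raise ValueError(f"not a known Ryzen AI processor: {cpu_name}")
```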

About your processor

To find your processor configuration, go to Settings -> System -> About.

To set the environment variables manually, run one of the following in Windows CMD, PowerShell, or a Python script:

  • If your processor is Phoenix or Hawk (PHX/HPT):

    • CMD:

    set XLNX_VART_FIRMWARE=%RYZEN_AI_INSTALLATION_PATH%voe-4.0-win_amd64/xclbins/phoenix/1x4.xclbin
    set XLNX_TARGET_NAME=AMD_AIE2_Nx4_Overlay
    • PowerShell:

    $env:XLNX_VART_FIRMWARE="$env:RYZEN_AI_INSTALLATION_PATH"+"voe-4.0-win_amd64/xclbins/phoenix/1x4.xclbin"
    $env:XLNX_TARGET_NAME="AMD_AIE2_Nx4_Overlay"
    • Python:

    import os
    os.environ['XLNX_VART_FIRMWARE'] = os.environ['RYZEN_AI_INSTALLATION_PATH'] + 'voe-4.0-win_amd64/xclbins/phoenix/1x4.xclbin'
    os.environ['XLNX_TARGET_NAME'] = "AMD_AIE2_Nx4_Overlay"
  • If your processor is Strix (STX):

    • CMD:

    set XLNX_VART_FIRMWARE=%RYZEN_AI_INSTALLATION_PATH%voe-4.0-win_amd64/xclbins/strix/AMD_AIE2P_Nx4_Overlay.xclbin
    set XLNX_TARGET_NAME=AMD_AIE2P_Nx4_Overlay
    • PowerShell:

    $env:XLNX_VART_FIRMWARE="$env:RYZEN_AI_INSTALLATION_PATH"+"voe-4.0-win_amd64/xclbins/strix/AMD_AIE2P_Nx4_Overlay.xclbin"
    $env:XLNX_TARGET_NAME="AMD_AIE2P_Nx4_Overlay"
    • Python:

    import os
    os.environ['XLNX_VART_FIRMWARE'] = os.environ['RYZEN_AI_INSTALLATION_PATH'] + 'voe-4.0-win_amd64/xclbins/strix/AMD_AIE2P_Nx4_Overlay.xclbin'
    os.environ['XLNX_TARGET_NAME'] = "AMD_AIE2P_Nx4_Overlay"
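The per-configuration values above can be folded into one small Python helper; set_ryzen_env and the FIRMWARE table are an illustrative sketch, not a PySDK API.

```python
import os

# Firmware/overlay pairs from the snippets above; PHX and HPT share values.
FIRMWARE = {
    "PHX": ("voe-4.0-win_amd64/xclbins/phoenix/1x4.xclbin",
            "AMD_AIE2_Nx4_Overlay"),
    "HPT": ("voe-4.0-win_amd64/xclbins/phoenix/1x4.xclbin",
            "AMD_AIE2_Nx4_Overlay"),
    "STX": ("voe-4.0-win_amd64/xclbins/strix/AMD_AIE2P_Nx4_Overlay.xclbin",
            "AMD_AIE2P_Nx4_Overlay"),
}


def set_ryzen_env(config: str, install_path: str) -> None:
    """Set both Ryzen AI variables for a PHX/HPT/STX configuration."""
    firmware, target = FIRMWARE[config]
    os.environ["XLNX_VART_FIRMWARE"] = install_path + firmware
    os.environ["XLNX_TARGET_NAME"] = target


# e.g. set_ryzen_env("STX", os.environ["RYZEN_AI_INSTALLATION_PATH"])
```

Call it before importing degirum so PySDK picks the values up at import time.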

Changing environment with Python

When setting environment variables with Python, place those lines before import degirum so that PySDK sees the changes.

These variables must be set in every new terminal session before running inference. To set them permanently, open "Edit the system environment variables" from Windows search and add (or edit) the two variables through the GUI.

Incorrect environment variables may lead to a crash. PySDK detects when the value of XLNX_VART_FIRMWARE does not match the CPU type and automatically resets it to the value appropriate for your device to prevent a system crash. Even so, be careful when manually redefining these variables.

About NPU configurations

Models are recompiled when run with a different NPU configuration. PySDK detects when inference on a model is launched in a configuration different from the one it was compiled for (if a cache exists) and recompiles the model at runtime to prevent a crash, so expect some initial delay when switching NPU configurations.

About supported model format

DeGirum AI Hub Model Zoo supplies INT8 symmetrically quantized models required by the Ryzen™ AI NPU. Models come with precompiled caches to bypass the sometimes lengthy compilation process and enable immediate inference. If recompilation is needed or if cached models cause errors, remove the <model_name>_cache subdirectory from the downloaded model directory. Inference will then proceed with just-in-time compilation automatically.
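Clearing the cache can be done by hand or with a short script like the following sketch; the model directory name here is a hypothetical example.

```python
import shutil
from pathlib import Path

# Hypothetical model directory downloaded from the AI Hub Model Zoo.
model_dir = Path("models/my_ryzen_model")

# Remove every <model_name>_cache subdirectory to force just-in-time
# compilation on the next inference.
for cache in model_dir.glob("*_cache"):
    shutil.rmtree(cache)
```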

The instructions above correspond to the "Standard Configuration" of the NPU. Ryzen™ AI also supports an additional "Benchmark Configuration"; its setup is similar (it requires different environment variable values) and is described in detail on the Ryzen™ AI Runtime Setup page.