
Neural Network Runtime

Introduction

Neural Network Runtime (NNRt) functions as a bridge between the upper-layer AI inference framework and the underlying acceleration chips, enabling cross-chip inference computing of AI models.

As shown in Figure 1, NNRt provides Native APIs for AI inference frameworks to access. Currently, NNRt interconnects with MindSpore Lite, the system's built-in inference framework. NNRt also provides HDI APIs through which device-side AI acceleration chips (such as NPUs and DSPs) can join the OpenHarmony hardware ecosystem. Through the AI inference framework and NNRt, AI applications can use the underlying chips directly to accelerate inference.
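As a rough illustration of the Native API flow described above, the sketch below shows the typical model-build, compile, and execute stages. The OH_NNModel, OH_NNCompilation, OH_NNExecutor, and OH_NNDevice names come from the NNRt Native API; the tensor and operator setup is elided, so this is an outline rather than a complete, runnable program.

```c
// Illustrative sketch only: error handling and the actual
// tensor/operator setup are omitted.
#include "neural_network_runtime/neural_network_runtime.h"

void run_inference_sketch(void)
{
    // 1. Build the model: add tensors and operations, then finish.
    OH_NNModel *model = OH_NNModel_Construct();
    /* ... add tensors and operations here ... */
    OH_NNModel_Finish(model);

    // 2. Compile the model for a specific acceleration device.
    const size_t *devices = NULL;
    uint32_t deviceCount = 0;
    OH_NNDevice_GetAllDevicesID(&devices, &deviceCount);

    OH_NNCompilation *compilation = OH_NNCompilation_Construct(model);
    OH_NNCompilation_SetDevice(compilation, devices[0]);
    OH_NNCompilation_Build(compilation);

    // 3. Execute: bind inputs/outputs and run on the device.
    OH_NNExecutor *executor = OH_NNExecutor_Construct(compilation);
    /* ... set inputs/outputs and run here ... */

    // 4. Release resources in reverse order of creation.
    OH_NNExecutor_Destroy(&executor);
    OH_NNCompilation_Destroy(&compilation);
    OH_NNModel_Destroy(&model);
}
```

For the exact signatures and the full operator setup, see the Neural Network Runtime App Development Guide referenced below.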

NNRt and MindSpore Lite share MindIR, a unified intermediate representation, which removes unnecessary model conversions and makes model transfer more efficient.

Generally, the AI application, AI inference engine, and NNRt run in the same process, while the chip driver runs in a separate process; models and computing data are therefore transferred between the two processes through IPC. NNRt implements the HDI client on top of the HDI APIs; accordingly, chip vendors need to implement and expose the corresponding HDI services through the HDI APIs.

Figure 1 NNRt architecture

Directory Structure

/foundation/ai/neural_network_runtime
├── common                         # Common functions
├── figures                        # Images referenced by README
├── example                        # Development samples
│   ├── deep_learning_framework    # Application/Inference framework development samples
│   └── drivers                    # Device driver development samples
├── frameworks
│   └── native                     # Framework code
│       └── ops                    # Operator header files and implementation
├── interfaces                     # APIs
│   ├── innerkits                  # Internal APIs
│   └── kits                       # External APIs
└── test                           # Test cases
    ├── system_test                # System test cases
    └── unittest                   # Unit test cases

Compilation and Building

In the root directory of the OpenHarmony source code, call the following command to compile NNRt separately:

./build.sh --product-name rk3568 --ccache --build-target neural_network_runtime --jobs 4

Note:
  • --product-name: product name, for example, Hi3516DV300 or rk3568.
  • --ccache: uses the compilation cache to speed up repeated builds.
  • --build-target: name of the component to build.
  • --jobs: number of parallel compilation jobs, which accelerates compilation.

Description

API Description

How to Use

  • For details about AI inference engine/application development, see Neural Network Runtime App Development Guide.
  • For details about how to develop AI acceleration chip drivers and devices, see Neural Network Runtime Device Development Guide.

Repositories Involved