Neural Network Runtime

Introduction

Neural Network Runtime (NNRt) serves as a bridge between the upper-layer AI inference framework and the underlying acceleration chips, enabling cross-chip inference of AI models.

As shown in Figure 1, NNRt exposes native APIs for AI inference frameworks to call; currently it interconnects with the system's built-in MindSpore Lite inference framework. NNRt also exposes HDI APIs that allow device-side AI acceleration chips (such as NPUs and DSPs) to join the OpenHarmony hardware ecosystem. Through the AI inference framework and NNRt, AI applications can use the underlying chips directly to accelerate inference and computing.
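As an illustration of the native API surface, the minimal sketch below enumerates the acceleration devices that chip vendors have registered with NNRt through their HDI services. It assumes the OH_NNDevice_* query functions of the NNRt C API and a header path of neural_network_runtime/neural_network_runtime.h; verify both against the API reference for your SDK version.

```c
/* Minimal sketch, not a verified sample: header path and exact signatures
 * should be checked against the NNRt API reference. */
#include <stdio.h>
#include "neural_network_runtime/neural_network_runtime.h"

int main(void)
{
    const size_t *allDevicesID = NULL;
    uint32_t deviceCount = 0;

    /* Query the IDs of all acceleration devices registered through HDI services. */
    OH_NN_ReturnCode ret = OH_NNDevice_GetAllDevicesID(&allDevicesID, &deviceCount);
    if (ret != OH_NN_SUCCESS) {
        printf("OH_NNDevice_GetAllDevicesID failed: %d\n", ret);
        return -1;
    }

    /* Print the name of each registered device. */
    for (uint32_t i = 0; i < deviceCount; ++i) {
        const char *name = NULL;
        if (OH_NNDevice_GetName(allDevicesID[i], &name) == OH_NN_SUCCESS) {
            printf("device %zu: %s\n", allDevicesID[i], name);
        }
    }
    return 0;
}
```

A chip that a vendor exposes through an HDI service appears in this list without any change to application code, which is how new hardware joins the ecosystem behind the same native APIs.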

NNRt and MindSpore Lite use MindIR as their unified intermediate representation, which removes unnecessary model conversions along the way and makes model transfer more efficient.

Generally, the AI application, AI inference engine, and NNRt run in the same process, while the chip driver runs in a separate process. Models and computing data are therefore transferred between the two processes through IPC. NNRt implements the HDI client on top of the HDI APIs; accordingly, chip vendors need to implement and expose the HDI services defined by those APIs.

Figure 1 NNRt architecture

Directory Structure

/foundation/ai/neural_network_runtime
├── common                         # Common functions
├── figures                        # Images referenced by README
├── example                        # Development samples
│   ├── deep_learning_framework    # Application/Inference framework development samples
│   └── drivers                    # Device driver development samples
├── frameworks
│   └── native                     # Framework code
│       └── ops                    # Operator header files and implementation
├── interfaces                     # APIs
│   ├── innerkits                  # Internal APIs
│   └── kits                       # External APIs
└── test                           # Test cases
    ├── system_test                # System test cases
    └── unittest                   # Unit test cases

Compilation and Building

In the root directory of the OpenHarmony source code, run the following command to build NNRt separately:

./build.sh --product-name rk3568 --ccache --build-target neural_network_runtime --jobs 4

Note:
  • --product-name: product name, for example, Hi3516DV300 or rk3568.
  • --ccache: uses the compilation cache to speed up the build.
  • --build-target: name of the component to build.
  • --jobs: number of parallel build jobs, which accelerates compilation.

Description

API Description

How to Use

  • For details about AI inference engine/application development, see the Neural Network Runtime App Development Guide. A minimal sketch of the API workflow follows this list.
  • For details about how to develop AI acceleration chip drivers and devices, see the Neural Network Runtime Device Development Guide.
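
For orientation before reading the guides, the sketch below outlines the model → compilation → executor lifecycle of the NNRt C API. It is a skeleton under stated assumptions (function names taken from the NNRt C API, same header path as in the earlier sketch); graph construction and input/output binding are deliberately omitted, so consult the app development guide for the complete flow.

```c
/* Skeleton only: because tensor/operation construction and data binding are
 * elided, OH_NNModel_Finish() and OH_NNCompilation_Build() will report errors
 * for the empty model. Verify header path and signatures against the API
 * reference. */
#include <stdio.h>
#include "neural_network_runtime/neural_network_runtime.h"

int main(void)
{
    /* 1. Construct a model and describe the compute graph. In real code, add
     *    tensors and operations here and specify the model inputs/outputs. */
    OH_NNModel *model = OH_NNModel_Construct();
    if (model == NULL) {
        return -1;
    }
    OH_NN_ReturnCode ret = OH_NNModel_Finish(model);
    if (ret != OH_NN_SUCCESS) {
        printf("model finish failed: %d\n", ret);
    }

    /* 2. Compile the finished model for a target device. The device ID can be
     *    obtained with OH_NNDevice_GetAllDevicesID(), as in the earlier sketch:
     *    OH_NNCompilation_SetDevice(compilation, deviceID); */
    OH_NNCompilation *compilation = OH_NNCompilation_Construct(model);
    if (compilation != NULL) {
        ret = OH_NNCompilation_Build(compilation);
        if (ret != OH_NN_SUCCESS) {
            printf("compilation build failed: %d\n", ret);
        }
    }

    /* 3. Create an executor from the compilation; in real code, bind input and
     *    output buffers and run inference. */
    OH_NNExecutor *executor = NULL;
    if (compilation != NULL) {
        executor = OH_NNExecutor_Construct(compilation);
    }

    /* 4. Release resources in reverse order of creation. */
    if (executor != NULL) {
        OH_NNExecutor_Destroy(&executor);
    }
    if (compilation != NULL) {
        OH_NNCompilation_Destroy(&compilation);
    }
    OH_NNModel_Destroy(&model);
    return 0;
}
```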

Repositories Involved