Update the README documentation to support OH 4.1.

Signed-off-by: weiwei <weiwei17@huawei.com>
weiwei 2024-02-02 17:51:24 +08:00
parent 433ccc5c65
commit d28da2179e
6 changed files with 443 additions and 236 deletions


@@ -49,7 +49,7 @@ In the root directory of the OpenHarmony source code, call the following command
### API Description
- [Native API reference](https://gitee.com/openharmony-sig/interface_native_header/tree/master/en/native_sdk/ai)
- [Native API reference](https://gitee.com/openharmony/docs/blob/master/en/application-dev/reference/native-apis/_neural_network_runtime.md)
- [HDI API reference](https://gitee.com/openharmony/drivers_interface/tree/master/nnrt)
### How to Use
@@ -59,5 +59,5 @@ In the root directory of the OpenHarmony source code, call the following command
## Repositories Involved
- [**neural_network_runtime**](https://gitee.com/openharmony-sig/neural_network_runtime)
- [**neural_network_runtime**](https://gitee.com/openharmony/neural_network_runtime)
- [third_party_mindspore](https://gitee.com/openharmony/third_party_mindspore)


@@ -11,7 +11,7 @@ Neural Network Runtime and MindSpore Lite use MindIR as the unified intermediate representation of models
Typically, the AI application, the AI inference engine, and Neural Network Runtime run in the same process, while the chip driver runs in a separate process, so models and computation data must be passed between the two through inter-process communication (IPC). Neural Network Runtime implements the HDI client based on the HDI interfaces; chip vendors, in turn, need to implement and expose the HDI services according to the same HDI interfaces.
**Figure 1** Neural Network Runtime architecture
!["Neural Network Runtime architecture"](./figures/neural_network_runtime_intro.png)
!["Neural Network Runtime architecture"](./figures/zh-cn_neural_network_runtime_intro.jpg)
## Directory Structure
@@ -49,15 +49,15 @@ Neural Network Runtime and MindSpore Lite use MindIR as the unified intermediate representation of models
### API Description
- For the Native API reference, see [Native APIs](https://gitee.com/openharmony/ai_neural_network_runtime/tree/master/interfaces/kits/c).
- For the HDI API reference, see [HDI APIs](https://gitee.com/openharmony/drivers_interface/tree/master/nnrt).
- For the Native API reference, see [Native APIs](https://gitee.com/openharmony/docs/blob/master/zh-cn/application-dev/reference/native-apis/_neural_network_runtime.md).
- For the HDI API reference, see [HDI APIs](https://gitee.com/openharmony/docs/blob/master/zh-cn/device-dev/reference/hdi-apis/_n_n_rt.md).
### How to Use
- For AI inference engine/application development, see the [Neural Network Runtime Application Development Guide](./neural-network-runtime-guidelines.md).
- For AI accelerator driver/device development, see the [Neural Network Runtime Device Development Guide](./example/drivers/README_zh.md).
- For AI inference engine/application development, see the [Development Guide for Connecting Neural Network Runtime to AI Inference Frameworks](./neural-network-runtime-guidelines.md).
- For AI accelerator driver/device development, see the [Neural Network Runtime Device Access Guide](./example/drivers/README_zh.md).
## Repositories Involved
- [**neural_network_runtime**](https://gitee.com/openharmony-sig/neural_network_runtime)
- [**neural_network_runtime**](https://gitee.com/openharmony/neural_network_runtime)
- [third_party_mindspore](https://gitee.com/openharmony/third_party_mindspore)


@@ -1,4 +1,4 @@
# NNRt Device Development Guide
# Neural Network Runtime Device Access Guide
## Overview

Binary figure file changed (Before: 74 KiB); not shown.

Binary figure file changed (After: 1.0 MiB); not shown.


@@ -15,34 +15,33 @@ Neural Network Runtime serves as a bridge between the AI inference engine and acceleration chips, providing AI inference
The environment requirements for the Neural Network Runtime component are as follows:
- System version: OpenHarmony master branch.
- Development environment: Ubuntu 18.04 or later.
- Target device: a standard device defined by OpenHarmony, with a built-in hardware accelerator driver that has been connected to Neural Network Runtime through the HDI interfaces.
- Target device: a standard device defined by the system, with a built-in AI hardware driver that has been connected to Neural Network Runtime.
Because Neural Network Runtime is exposed through the OpenHarmony Native API, Neural Network Runtime applications must be built with the OpenHarmony native development kit. Download the ohos-sdk package of the required system version from the community [daily builds](http://ci.openharmony.cn/dailys/dailybuilds), and extract the native development kit for your platform from the package. Taking Linux as an example, the native development kit package is named `native-linux-{version}.zip`.
Because Neural Network Runtime is exposed through the OpenHarmony Native API, Neural Network Runtime applications must be built with the OpenHarmony native development kit. Download the ohos-sdk package of the required system version from the community daily builds, and extract the native development kit for your platform from the package. Taking Linux as an example, the native development kit package is named `native-linux-{version}.zip`.
### Setting Up the Environment
1. Open a terminal on the Ubuntu build server.
2. Copy the downloaded native development kit package to the home directory of the current user.
3. Run the following command to decompress the native development kit package.
```shell
unzip native-linux-{version}.zip
```
```shell
unzip native-linux-{version}.zip
```
The decompressed contents are as follows (the directory contents may change as the version evolves; refer to the latest Native API release):
```text
native/
├── build                        // cross-compilation toolchain
├── build-tools                  // build tools
├── docs
├── llvm
├── nativeapi_syscap_config.json
├── ndk_system_capability.json
├── NOTICE.txt
├── oh-uni-package.json
└── sysroot                      // Native API header files and libraries
```
The decompressed contents are as follows (the directory contents may change as the version evolves; refer to the latest Native API release):
```text
native/
├── build                        // cross-compilation toolchain
├── build-tools                  // build tools
├── docs
├── llvm
├── nativeapi_syscap_config.json
├── ndk_system_capability.json
├── NOTICE.txt
├── oh-uni-package.json
└── sysroot                      // Native API header files and libraries
```
## API Description
This section lists the common APIs used in the Neural Network Runtime development process; see the following tables for details.
@@ -54,44 +53,97 @@ native/
| typedef struct OH_NNModel OH_NNModel | Model handle of Neural Network Runtime, used to construct a model. |
| typedef struct OH_NNCompilation OH_NNCompilation | Compilation handle of Neural Network Runtime, used to compile an AI model. |
| typedef struct OH_NNExecutor OH_NNExecutor | Executor handle of Neural Network Runtime, used to run inference on a specified device. |
| typedef struct NN_QuantParam NN_QuantParam | Quantization parameter handle of Neural Network Runtime, used to specify the quantization parameters of a tensor during model construction. |
| typedef struct NN_TensorDesc NN_TensorDesc | Tensor description handle of Neural Network Runtime, used to describe tensor attributes such as data layout, data type, and shape. |
| typedef struct NN_Tensor NN_Tensor | Tensor handle of Neural Network Runtime, used to set the input and output tensors of an executor. |
### APIs Related to Model Construction
### Model Construction APIs
| API | Description |
| ------- | --- |
| OH_NNModel_Construct() | Creates a model instance of the OH_NNModel type. |
| OH_NN_ReturnCode OH_NNModel_AddTensor(OH_NNModel *model, const OH_NN_Tensor *tensor) | Adds a tensor to the model instance. |
| OH_NN_ReturnCode OH_NNModel_AddTensorToModel(OH_NNModel *model, const NN_TensorDesc *tensorDesc) | Adds a tensor to the model instance. |
| OH_NN_ReturnCode OH_NNModel_SetTensorData(OH_NNModel *model, uint32_t index, const void *dataBuffer, size_t length) | Sets the tensor data. |
| OH_NN_ReturnCode OH_NNModel_AddOperation(OH_NNModel *model, OH_NN_OperationType op, const OH_NN_UInt32Array *paramIndices, const OH_NN_UInt32Array *inputIndices, const OH_NN_UInt32Array *outputIndices) | Adds an operator to the model instance. |
| OH_NN_ReturnCode OH_NNModel_SpecifyInputsAndOutputs(OH_NNModel *model, const OH_NN_UInt32Array *inputIndices, const OH_NN_UInt32Array *outputIndices) | Specifies the inputs and outputs of the model. |
| OH_NN_ReturnCode OH_NNModel_SpecifyInputsAndOutputs(OH_NNModel *model, const OH_NN_UInt32Array *inputIndices, const OH_NN_UInt32Array *outputIndices) | Specifies the indices of the model's input and output tensors. |
| OH_NN_ReturnCode OH_NNModel_Finish(OH_NNModel *model) | Completes model construction. |
| void OH_NNModel_Destroy(OH_NNModel **model) | Releases the model instance. |
| void OH_NNModel_Destroy(OH_NNModel **model) | Destroys the model instance. |
### APIs Related to Model Compilation
### Model Compilation APIs
| API | Description |
| ------- | --- |
| OH_NNCompilation *OH_NNCompilation_Construct(const OH_NNModel *model) | Creates a compilation instance of the OH_NNCompilation type. |
| OH_NN_ReturnCode OH_NNCompilation_SetDevice(OH_NNCompilation *compilation, size_t deviceID) | Specifies the hardware used for model compilation and computation. |
| OH_NN_ReturnCode OH_NNCompilation_SetCache(OH_NNCompilation *compilation, const char *cachePath, uint32_t version) | Sets the cache path and cache version of the compiled model. |
| OH_NN_ReturnCode OH_NNCompilation_Build(OH_NNCompilation *compilation) | Performs model compilation. |
| void OH_NNCompilation_Destroy(OH_NNCompilation **compilation) | Releases the OH_NNCompilation object. |
| OH_NNCompilation *OH_NNCompilation_Construct(const OH_NNModel *model) | Creates a compilation instance of the OH_NNCompilation type from a model instance. |
| OH_NNCompilation *OH_NNCompilation_ConstructWithOfflineModelFile(const char *modelPath) | Creates a compilation instance of the OH_NNCompilation type from an offline model file path. |
| OH_NNCompilation *OH_NNCompilation_ConstructWithOfflineModelBuffer(const void *modelBuffer, size_t modelSize) | Creates a compilation instance of the OH_NNCompilation type from an offline model buffer. |
| OH_NNCompilation *OH_NNCompilation_ConstructForCache() | Creates an empty compilation instance to be restored from a model cache later. |
| OH_NN_ReturnCode OH_NNCompilation_ExportCacheToBuffer(OH_NNCompilation *compilation, const void *buffer, size_t length, size_t *modelSize) | Writes the model cache to the specified buffer. |
| OH_NN_ReturnCode OH_NNCompilation_ImportCacheFromBuffer(OH_NNCompilation *compilation, const void *buffer, size_t modelSize) | Reads the model cache from the specified buffer. |
| OH_NN_ReturnCode OH_NNCompilation_AddExtensionConfig(OH_NNCompilation *compilation, const char *configName, const void *configValue, const size_t configValueSize) | Adds an extended configuration for custom hardware attributes; the attribute names and values must be obtained from the hardware vendor's documentation. |
| OH_NN_ReturnCode OH_NNCompilation_SetDevice(OH_NNCompilation *compilation, size_t deviceID) | Specifies the hardware used for model compilation and computation; the device ID can be obtained through the device management APIs. |
| OH_NN_ReturnCode OH_NNCompilation_SetCache(OH_NNCompilation *compilation, const char *cachePath, uint32_t version) | Sets the cache directory and version of the compiled model. |
| OH_NN_ReturnCode OH_NNCompilation_SetPerformanceMode(OH_NNCompilation *compilation, OH_NN_PerformanceMode performanceMode) | Sets the performance mode for model computation. |
| OH_NN_ReturnCode OH_NNCompilation_SetPriority(OH_NNCompilation *compilation, OH_NN_Priority priority) | Sets the priority for model computation. |
| OH_NN_ReturnCode OH_NNCompilation_EnableFloat16(OH_NNCompilation *compilation, bool enableFloat16) | Specifies whether to compute with float16 precision. |
| OH_NN_ReturnCode OH_NNCompilation_Build(OH_NNCompilation *compilation) | Performs model compilation. |
| void OH_NNCompilation_Destroy(OH_NNCompilation **compilation) | Destroys the compilation instance. |
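Taken together, the cache APIs above let a model that has been compiled once be reused without recompilation. The following is a minimal sketch of that flow; the helper name `RebuildFromCache`, the 4 MiB buffer bound, and the error handling are illustrative assumptions, and `compilation` is assumed to be an instance that has already been built.
```cpp
#include <vector>
#include "neural_network_runtime/neural_network_runtime.h"

// Sketch: export the cache of an already-built compilation into a buffer, then restore a new
// compilation instance from that buffer. "deviceID" is a device ID returned by
// OH_NNDevice_GetAllDevicesID; the 4 MiB upper bound on the cache size is an assumption.
OH_NN_ReturnCode RebuildFromCache(OH_NNCompilation* compilation, size_t deviceID,
                                  OH_NNCompilation** restored)
{
    std::vector<char> cacheBuffer(4 * 1024 * 1024);
    size_t modelSize = 0;
    OH_NN_ReturnCode ret = OH_NNCompilation_ExportCacheToBuffer(
        compilation, cacheBuffer.data(), cacheBuffer.size(), &modelSize);
    if (ret != OH_NN_SUCCESS) {
        return ret;
    }

    // Create an empty compilation instance and restore it from the cached bytes.
    OH_NNCompilation* newCompilation = OH_NNCompilation_ConstructForCache();
    if (newCompilation == nullptr) {
        return OH_NN_FAILED;
    }
    ret = OH_NNCompilation_SetDevice(newCompilation, deviceID);
    if (ret == OH_NN_SUCCESS) {
        ret = OH_NNCompilation_ImportCacheFromBuffer(newCompilation, cacheBuffer.data(), modelSize);
    }
    if (ret == OH_NN_SUCCESS) {
        ret = OH_NNCompilation_Build(newCompilation);
    }
    if (ret != OH_NN_SUCCESS) {
        OH_NNCompilation_Destroy(&newCompilation);
        return ret;
    }
    *restored = newCompilation;
    return OH_NN_SUCCESS;
}
```
An on-disk cache set up with OH_NNCompilation_SetCache (used in the development steps below) serves the same purpose; the buffer form is useful when the application manages cache storage itself.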
### APIs Related to Inference Execution
### Tensor Description APIs
| API | Description |
| ------- | --- |
| NN_TensorDesc *OH_NNTensorDesc_Create() | Creates a tensor description instance, which is used to create a tensor later. |
| OH_NN_ReturnCode OH_NNTensorDesc_SetName(NN_TensorDesc *tensorDesc, const char *name) | Sets the name of the tensor description. |
| OH_NN_ReturnCode OH_NNTensorDesc_GetName(const NN_TensorDesc *tensorDesc, const char **name) | Obtains the name of the tensor description. |
| OH_NN_ReturnCode OH_NNTensorDesc_SetDataType(NN_TensorDesc *tensorDesc, OH_NN_DataType dataType) | Sets the data type of the tensor description. |
| OH_NN_ReturnCode OH_NNTensorDesc_GetDataType(const NN_TensorDesc *tensorDesc, OH_NN_DataType *dataType) | Obtains the data type of the tensor description. |
| OH_NN_ReturnCode OH_NNTensorDesc_SetShape(NN_TensorDesc *tensorDesc, const int32_t *shape, size_t shapeLength) | Sets the shape of the tensor description. |
| OH_NN_ReturnCode OH_NNTensorDesc_GetShape(const NN_TensorDesc *tensorDesc, int32_t **shape, size_t *shapeLength) | Obtains the shape of the tensor description. |
| OH_NN_ReturnCode OH_NNTensorDesc_SetFormat(NN_TensorDesc *tensorDesc, OH_NN_Format format) | Sets the data layout of the tensor description. |
| OH_NN_ReturnCode OH_NNTensorDesc_GetFormat(const NN_TensorDesc *tensorDesc, OH_NN_Format *format) | Obtains the data layout of the tensor description. |
| OH_NN_ReturnCode OH_NNTensorDesc_GetElementCount(const NN_TensorDesc *tensorDesc, size_t *elementCount) | Obtains the number of elements in the tensor description. |
| OH_NN_ReturnCode OH_NNTensorDesc_GetByteSize(const NN_TensorDesc *tensorDesc, size_t *byteSize) | Obtains the number of bytes occupied by the tensor data, calculated from the shape and data type of the tensor description. |
| OH_NN_ReturnCode OH_NNTensorDesc_Destroy(NN_TensorDesc **tensorDesc) | Destroys the tensor description instance. |
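As a quick illustration of how these APIs fit together, the sketch below describes a float32 tensor of shape [1, 2, 2, 3] (the same shape used in the development steps) and queries the number of bytes it occupies; the function name and the printed message are illustrative.
```cpp
#include <cstdio>
#include "neural_network_runtime/neural_network_runtime.h"

// Sketch: build a tensor description for a float32 [1, 2, 2, 3] tensor and query its byte size.
void DescribeTensorExample()
{
    NN_TensorDesc* desc = OH_NNTensorDesc_Create();
    if (desc == nullptr) {
        return;
    }

    int32_t shape[4] = {1, 2, 2, 3};
    OH_NNTensorDesc_SetShape(desc, shape, 4);
    OH_NNTensorDesc_SetDataType(desc, OH_NN_FLOAT32);
    OH_NNTensorDesc_SetFormat(desc, OH_NN_FORMAT_NONE);

    size_t byteSize = 0;
    if (OH_NNTensorDesc_GetByteSize(desc, &byteSize) == OH_NN_SUCCESS) {
        printf("The tensor occupies %zu bytes.\n", byteSize);  // 12 float32 elements -> 48 bytes
    }

    OH_NNTensorDesc_Destroy(&desc);
}
```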
### Tensor APIs
| API | Description |
| ------- | --- |
| NN_Tensor* OH_NNTensor_Create(size_t deviceID, NN_TensorDesc *tensorDesc) | Creates a tensor instance from a tensor description; device shared memory is allocated. |
| NN_Tensor* OH_NNTensor_CreateWithSize(size_t deviceID, NN_TensorDesc *tensorDesc, size_t size) | Creates a tensor instance from a tensor description with the specified memory size; device shared memory is allocated. |
| NN_Tensor* OH_NNTensor_CreateWithFd(size_t deviceID, NN_TensorDesc *tensorDesc, int fd, size_t size, size_t offset) | Creates a tensor instance from a tensor description and the file descriptor of a specified shared memory region, so that the device shared memory of another tensor can be reused. |
| NN_TensorDesc* OH_NNTensor_GetTensorDesc(const NN_Tensor *tensor) | Obtains the pointer to the tensor description instance inside a tensor, which can be used to read tensor attributes such as data type and shape. |
| void* OH_NNTensor_GetDataBuffer(const NN_Tensor *tensor) | Obtains the memory address of the tensor data, which can be used to read or write the tensor data. |
| OH_NN_ReturnCode OH_NNTensor_GetFd(const NN_Tensor *tensor, int *fd) | Obtains the file descriptor of the shared memory where the tensor data resides; the file descriptor (fd) corresponds to one block of device shared memory. |
| OH_NN_ReturnCode OH_NNTensor_GetSize(const NN_Tensor *tensor, size_t *size) | Obtains the size of the shared memory where the tensor data resides. |
| OH_NN_ReturnCode OH_NNTensor_GetOffset(const NN_Tensor *tensor, size_t *offset) | Obtains the offset of the tensor data within its shared memory; the size usable by the tensor data equals the size of the shared memory minus the offset. |
| OH_NN_ReturnCode OH_NNTensor_Destroy(NN_Tensor **tensor) | Destroys the tensor instance. |
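The fd-related APIs above make it possible for two tensors to share one block of device memory. The sketch below shows one way to combine them, assuming `deviceID`, `existingTensor`, and a compatible `desc` come from the surrounding code; the helper name is illustrative.
```cpp
#include "neural_network_runtime/neural_network_runtime.h"

// Sketch: create a second tensor that reuses the device shared memory of an existing tensor.
NN_Tensor* CreateAliasTensor(size_t deviceID, NN_Tensor* existingTensor, NN_TensorDesc* desc)
{
    int fd = -1;
    size_t size = 0;
    size_t offset = 0;
    if (OH_NNTensor_GetFd(existingTensor, &fd) != OH_NN_SUCCESS ||
        OH_NNTensor_GetSize(existingTensor, &size) != OH_NN_SUCCESS ||
        OH_NNTensor_GetOffset(existingTensor, &offset) != OH_NN_SUCCESS) {
        return nullptr;
    }
    // The new tensor refers to the same shared memory block as existingTensor.
    return OH_NNTensor_CreateWithFd(deviceID, desc, fd, size, offset);
}
```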
### Inference Execution APIs
| API | Description |
| ------- | --- |
| OH_NNExecutor *OH_NNExecutor_Construct(OH_NNCompilation *compilation) | Creates an executor instance of the OH_NNExecutor type. |
| OH_NN_ReturnCode OH_NNExecutor_SetInput(OH_NNExecutor *executor, uint32_t inputIndex, const OH_NN_Tensor *tensor, const void *dataBuffer, size_t length) | Sets the data for a single model input. |
| OH_NN_ReturnCode OH_NNExecutor_SetOutput(OH_NNExecutor *executor, uint32_t outputIndex, void *dataBuffer, size_t length) | Sets the buffer for a single model output. |
| OH_NN_ReturnCode OH_NNExecutor_Run(OH_NNExecutor *executor) | Performs inference. |
| void OH_NNExecutor_Destroy(OH_NNExecutor **executor) | Destroys the OH_NNExecutor instance and releases the memory it occupies. |
| OH_NN_ReturnCode OH_NNExecutor_GetOutputShape(OH_NNExecutor *executor, uint32_t outputIndex, int32_t **shape, uint32_t *shapeLength) | Obtains the dimensions of an output tensor; used when the output tensor has a dynamic shape. |
| OH_NN_ReturnCode OH_NNExecutor_GetInputCount(const OH_NNExecutor *executor, size_t *inputCount) | Obtains the number of input tensors. |
| OH_NN_ReturnCode OH_NNExecutor_GetOutputCount(const OH_NNExecutor *executor, size_t *outputCount) | Obtains the number of output tensors. |
| NN_TensorDesc* OH_NNExecutor_CreateInputTensorDesc(const OH_NNExecutor *executor, size_t index) | Creates the description of the input tensor with the specified index, which can be used to read tensor attributes or create a tensor instance. |
| NN_TensorDesc* OH_NNExecutor_CreateOutputTensorDesc(const OH_NNExecutor *executor, size_t index) | Creates the description of the output tensor with the specified index, which can be used to read tensor attributes or create a tensor instance. |
| OH_NN_ReturnCode OH_NNExecutor_GetInputDimRange(const OH_NNExecutor *executor, size_t index, size_t **minInputDims, size_t **maxInputDims, size_t *shapeLength) | Obtains the dimension ranges of all input tensors. When an input tensor has a dynamic shape, different devices may support different dimension ranges. |
| OH_NN_ReturnCode OH_NNExecutor_SetOnRunDone(OH_NNExecutor *executor, NN_OnRunDone onRunDone) | Sets the callback invoked when asynchronous inference finishes; see the API reference for the callback definition. |
| OH_NN_ReturnCode OH_NNExecutor_SetOnServiceDied(OH_NNExecutor *executor, NN_OnServiceDied onServiceDied) | Sets the callback invoked when the device driver service dies unexpectedly during asynchronous inference; see the API reference for the callback definition. |
| OH_NN_ReturnCode OH_NNExecutor_RunSync(OH_NNExecutor *executor, NN_Tensor *inputTensor[], size_t inputCount, NN_Tensor *outputTensor[], size_t outputCount) | Performs synchronous inference. |
| OH_NN_ReturnCode OH_NNExecutor_RunAsync(OH_NNExecutor *executor, NN_Tensor *inputTensor[], size_t inputCount, NN_Tensor *outputTensor[], size_t outputCount, int32_t timeout, void *userData) | Performs asynchronous inference. |
| void OH_NNExecutor_Destroy(OH_NNExecutor **executor) | Destroys the executor instance. |
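For models whose inputs have dynamic shapes, the dimension-range query above tells the application which shapes a device accepts. The following minimal sketch prints the range of input 0; it assumes the arrays returned by the executor are owned by Neural Network Runtime and therefore are not freed here.
```cpp
#include <cstdio>
#include "neural_network_runtime/neural_network_runtime.h"

// Sketch: print the supported dimension range of the first input of an executor.
void PrintInputDimRange(const OH_NNExecutor* executor)
{
    size_t* minDims = nullptr;
    size_t* maxDims = nullptr;
    size_t shapeLength = 0;
    OH_NN_ReturnCode ret = OH_NNExecutor_GetInputDimRange(executor, 0, &minDims, &maxDims, &shapeLength);
    if (ret != OH_NN_SUCCESS) {
        printf("OH_NNExecutor_GetInputDimRange failed.\n");
        return;
    }
    for (size_t i = 0; i < shapeLength; ++i) {
        printf("Dimension %zu: min = %zu, max = %zu\n", i, minDims[i], maxDims[i]);
    }
}
```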
### APIs Related to Device Management
### Device Management APIs
| API | Description |
| ------- | --- |
| OH_NN_ReturnCode OH_NNDevice_GetAllDevicesID(const size_t **allDevicesID, uint32_t *deviceCount) | Obtains the IDs of the hardware connected to Neural Network Runtime. |
| OH_NN_ReturnCode OH_NNDevice_GetAllDevicesID(const size_t **allDevicesID, uint32_t *deviceCount) | Obtains the IDs of all hardware connected to Neural Network Runtime. |
| OH_NN_ReturnCode OH_NNDevice_GetName(size_t deviceID, const char **name) | Obtains the name of the specified hardware. |
| OH_NN_ReturnCode OH_NNDevice_GetType(size_t deviceID, OH_NN_DeviceType *deviceType) | Obtains the type of the specified hardware. |
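Putting the three device management APIs together, device enumeration could look like the sketch below; the `GetAvailableDevices` helper in the development steps uses the same ID query. The assumption that the ID array is owned by Neural Network Runtime (and so is not freed) follows from the `const` return parameter.
```cpp
#include <cstdint>
#include <cstdio>
#include "neural_network_runtime/neural_network_runtime.h"

// Sketch: list every device connected to Neural Network Runtime with its name and type.
void ListDevices()
{
    const size_t* allDevicesID = nullptr;
    uint32_t deviceCount = 0;
    if (OH_NNDevice_GetAllDevicesID(&allDevicesID, &deviceCount) != OH_NN_SUCCESS) {
        printf("OH_NNDevice_GetAllDevicesID failed.\n");
        return;
    }
    for (uint32_t i = 0; i < deviceCount; ++i) {
        const char* name = nullptr;
        OH_NN_DeviceType type;
        if (OH_NNDevice_GetName(allDevicesID[i], &name) != OH_NN_SUCCESS ||
            OH_NNDevice_GetType(allDevicesID[i], &type) != OH_NN_SUCCESS) {
            continue;
        }
        printf("Device %zu: name = %s, type = %d\n", allDevicesID[i], name, static_cast<int>(type));
    }
}
```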
## Development Procedure
@@ -100,7 +152,7 @@ The Neural Network Runtime development process mainly consists of **model construction**, **model compilation**
1. Create the sample application file.
First, create the source file for the Neural Network Runtime sample application. Run the following commands in the project directory to create the `nnrt_example/` directory, and create the `nnrt_example.cpp` source file inside it.
First, create the source file for the Neural Network Runtime sample application. Run the following commands in the project directory to create the `nnrt_example/` directory, and create the `nnrt_example.cpp` source file inside it.
```shell
mkdir ~/nnrt_example && cd ~/nnrt_example
@@ -109,112 +161,245 @@ The Neural Network Runtime development process mainly consists of **model construction**, **model compilation**
2. Import Neural Network Runtime.
Add the following code at the beginning of the `nnrt_example.cpp` file to import the Neural Network Runtime module.
Add the following code at the beginning of the `nnrt_example.cpp` file to import Neural Network Runtime.
```cpp
#include <cstdint>
#include <iostream>
#include <vector>
#include <cstdarg>
#include "hilog/log.h"
#include "neural_network_runtime/neural_network_runtime.h"
// Constant specifying the byte length of the input and output data
const size_t DATA_LENGTH = 4 * 12;
```
3. Construct the model.
Use the Neural Network Runtime APIs to construct a sample model that contains a single `Add` operator.
3. Define helper functions for logging, setting the input data, and printing the output data.
```cpp
OH_NN_ReturnCode BuildModel(OH_NNModel** pModel)
#define LOG_DOMAIN 0xD002101
#define LOG_TAG "NNRt"
#define LOGD(...) OH_LOG_DEBUG(LOG_APP, __VA_ARGS__)
#define LOGI(...) OH_LOG_INFO(LOG_APP, __VA_ARGS__)
#define LOGW(...) OH_LOG_WARN(LOG_APP, __VA_ARGS__)
#define LOGE(...) OH_LOG_ERROR(LOG_APP, __VA_ARGS__)
#define LOGF(...) OH_LOG_FATAL(LOG_APP, __VA_ARGS__)
// Return value checking macros
#define CHECKNEQ(realRet, expectRet, retValue, ...) \
do { \
if ((realRet) != (expectRet)) { \
printf(__VA_ARGS__); \
return (retValue); \
} \
} while (0)
#define CHECKEQ(realRet, expectRet, retValue, ...) \
do { \
if ((realRet) == (expectRet)) { \
printf(__VA_ARGS__); \
return (retValue); \
} \
} while (0)
// Set the input data used for inference
OH_NN_ReturnCode SetInputData(NN_Tensor* inputTensor[], size_t inputSize)
{
// Create a model instance for model construction
OH_NNModel* model = OH_NNModel_Construct();
if (model == nullptr) {
std::cout << "Create model failed." << std::endl;
return OH_NN_MEMORY_ERROR;
OH_NN_DataType dataType(OH_NN_FLOAT32);
OH_NN_ReturnCode ret{OH_NN_FAILED};
size_t elementCount = 0;
for (size_t i = 0; i < inputSize; ++i) {
// Get the data buffer of the tensor
auto data = OH_NNTensor_GetDataBuffer(inputTensor[i]);
CHECKEQ(data, nullptr, OH_NN_FAILED, "Failed to get data buffer.");
// Get the tensor description
auto desc = OH_NNTensor_GetTensorDesc(inputTensor[i]);
CHECKEQ(desc, nullptr, OH_NN_FAILED, "Failed to get desc.");
// Get the data type of the tensor
ret = OH_NNTensorDesc_GetDataType(desc, &dataType);
CHECKNEQ(ret, OH_NN_SUCCESS, OH_NN_FAILED, "Failed to get data type.");
// Get the number of elements in the tensor
ret = OH_NNTensorDesc_GetElementCount(desc, &elementCount);
CHECKNEQ(ret, OH_NN_SUCCESS, OH_NN_FAILED, "Failed to get element count.");
switch(dataType) {
case OH_NN_FLOAT32: {
float* floatValue = reinterpret_cast<float*>(data);
for (size_t j = 0; j < elementCount; ++j) {
floatValue[j] = static_cast<float>(j);
}
break;
}
case OH_NN_INT32: {
int* intValue = reinterpret_cast<int*>(data);
for (size_t j = 0; j < elementCount; ++j) {
intValue[j] = static_cast<int>(j);
}
break;
}
default:
return OH_NN_FAILED;
}
}
return OH_NN_SUCCESS;
}
OH_NN_ReturnCode Print(NN_Tensor* outputTensor[], size_t outputSize)
{
OH_NN_DataType dataType(OH_NN_FLOAT32);
OH_NN_ReturnCode ret{OH_NN_FAILED};
size_t elementCount = 0;
for (size_t i = 0; i < outputSize; ++i) {
auto data = OH_NNTensor_GetDataBuffer(outputTensor[i]);
CHECKEQ(data, nullptr, OH_NN_FAILED, "Failed to get data buffer.");
auto desc = OH_NNTensor_GetTensorDesc(outputTensor[i]);
CHECKEQ(desc, nullptr, OH_NN_FAILED, "Failed to get desc.");
ret = OH_NNTensorDesc_GetDataType(desc, &dataType);
CHECKNEQ(ret, OH_NN_SUCCESS, OH_NN_FAILED, "Failed to get data type.");
ret = OH_NNTensorDesc_GetElementCount(desc, &elementCount);
CHECKNEQ(ret, OH_NN_SUCCESS, OH_NN_FAILED, "Failed to get element count.");
switch(dataType) {
case OH_NN_FLOAT32: {
float* floatValue = reinterpret_cast<float*>(data);
for (size_t j = 0; j < elementCount; ++j) {
std::cout << "Output index: " << j << ", value is: " << floatValue[j] << "." << std::endl;
}
break;
}
case OH_NN_INT32: {
int* intValue = reinterpret_cast<int*>(data);
for (size_t j = 0; j < elementCount; ++j) {
std::cout << "Output index: " << j << ", value is: " << intValue[j] << "." << std::endl;
}
break;
}
default:
return OH_NN_FAILED;
}
}
// Add the first input tensor of the Add operator: data type float32, shape [1, 2, 2, 3]
int32_t inputDims[4] = {1, 2, 2, 3};
OH_NN_Tensor input1 = {OH_NN_FLOAT32, 4, inputDims, nullptr, OH_NN_TENSOR};
OH_NN_ReturnCode ret = OH_NNModel_AddTensor(model, &input1);
if (ret != OH_NN_SUCCESS) {
std::cout << "BuildModel failed, add Tensor of first input failed." << std::endl;
return ret;
}
// Add the second input tensor of the Add operator: data type float32, shape [1, 2, 2, 3]
OH_NN_Tensor input2 = {OH_NN_FLOAT32, 4, inputDims, nullptr, OH_NN_TENSOR};
ret = OH_NNModel_AddTensor(model, &input2);
if (ret != OH_NN_SUCCESS) {
std::cout << "BuildModel failed, add Tensor of second input failed." << std::endl;
return ret;
}
// Add the parameter tensor of the Add operator. It specifies the activation function type, and its data type is int8.
int32_t activationDims = 1;
int8_t activationValue = OH_NN_FUSED_NONE;
OH_NN_Tensor activation = {OH_NN_INT8, 1, &activationDims, nullptr, OH_NN_ADD_ACTIVATIONTYPE};
ret = OH_NNModel_AddTensor(model, &activation);
if (ret != OH_NN_SUCCESS) {
std::cout << "BuildModel failed, add Tensor of activation failed." << std::endl;
return ret;
}
// Set the activation function type to OH_NN_FUSED_NONE, meaning no activation function is added to the operator.
ret = OH_NNModel_SetTensorData(model, 2, &activationValue, sizeof(int8_t));
if (ret != OH_NN_SUCCESS) {
std::cout << "BuildModel failed, set value of activation failed." << std::endl;
return ret;
}
// Set the output of the Add operator: data type float32, shape [1, 2, 2, 3]
OH_NN_Tensor output = {OH_NN_FLOAT32, 4, inputDims, nullptr, OH_NN_TENSOR};
ret = OH_NNModel_AddTensor(model, &output);
if (ret != OH_NN_SUCCESS) {
std::cout << "BuildModel failed, add Tensor of output failed." << std::endl;
return ret;
}
// Specify the input, parameter, and output indices of the Add operator
uint32_t inputIndicesValues[2] = {0, 1};
uint32_t paramIndicesValues = 2;
uint32_t outputIndicesValues = 3;
OH_NN_UInt32Array paramIndices = {&paramIndicesValues, 1};
OH_NN_UInt32Array inputIndices = {inputIndicesValues, 2};
OH_NN_UInt32Array outputIndices = {&outputIndicesValues, 1};
// Add the Add operator to the model instance
ret = OH_NNModel_AddOperation(model, OH_NN_OPS_ADD, &paramIndices, &inputIndices, &outputIndices);
if (ret != OH_NN_SUCCESS) {
std::cout << "BuildModel failed, add operation failed." << std::endl;
return ret;
}
// Set the input and output indices of the model instance
ret = OH_NNModel_SpecifyInputsAndOutputs(model, &inputIndices, &outputIndices);
if (ret != OH_NN_SUCCESS) {
std::cout << "BuildModel failed, specify inputs and outputs failed." << std::endl;
return ret;
}
// Complete the construction of the model instance
ret = OH_NNModel_Finish(model);
if (ret != OH_NN_SUCCESS) {
std::cout << "BuildModel failed, error happened when finishing model construction." << std::endl;
return ret;
}
*pModel = model;
return OH_NN_SUCCESS;
}
```
4. Query the acceleration chips connected to Neural Network Runtime.
4. Construct the model.
Neural Network Runtime can connect to multiple acceleration chips through the HDI interfaces. Before compiling a model, query the acceleration chips that Neural Network Runtime has connected to on the current device. Each acceleration chip has a unique ID; in the compilation phase, the chip on which the model is compiled is specified by its device ID.
Use the model construction APIs of Neural Network Runtime to construct a sample model that contains a single `Add` operator.
```cpp
OH_NN_ReturnCode BuildModel(OH_NNModel** pmodel)
{
// Create the model instance (model) and construct the model
OH_NNModel* model = OH_NNModel_Construct();
CHECKEQ(model, nullptr, -1, "Create model failed.");
OH_NN_ReturnCode returnCode = OH_NN_FAILED;

// Add the first input tensor of the Add operator: data type float32, shape [1, 2, 2, 3]
NN_TensorDesc* tensorDesc = OH_NNTensorDesc_Create();
CHECKEQ(tensorDesc, nullptr, -1, "Create TensorDesc failed.");
int32_t inputDims[4] = {1, 2, 2, 3};
returnCode = OH_NNTensorDesc_SetShape(tensorDesc, inputDims, 4);
CHECKNEQ(returnCode, OH_NN_SUCCESS, -1, "Set TensorDesc shape failed.");
returnCode = OH_NNTensorDesc_SetDataType(tensorDesc, OH_NN_FLOAT32);
CHECKNEQ(returnCode, OH_NN_SUCCESS, -1, "Set TensorDesc data type failed.");
returnCode = OH_NNTensorDesc_SetFormat(tensorDesc, OH_NN_FORMAT_NONE);
CHECKNEQ(returnCode, OH_NN_SUCCESS, -1, "Set TensorDesc format failed.");
returnCode = OH_NNModel_AddTensorToModel(model, tensorDesc);
CHECKNEQ(returnCode, OH_NN_SUCCESS, -1, "Add first TensorDesc to model failed.");
returnCode = OH_NNModel_SetTensorType(model, 0, OH_NN_TENSOR);
CHECKNEQ(returnCode, OH_NN_SUCCESS, -1, "Set model tensor type failed.");
// Add the second input tensor of the Add operator: data type float32, shape [1, 2, 2, 3]
tensorDesc = OH_NNTensorDesc_Create();
CHECKEQ(tensorDesc, nullptr, -1, "Create TensorDesc failed.");
returnCode = OH_NNTensorDesc_SetShape(tensorDesc, inputDims, 4);
CHECKNEQ(returnCode, OH_NN_SUCCESS, -1, "Set TensorDesc shape failed.");
returnCode = OH_NNTensorDesc_SetDataType(tensorDesc, OH_NN_FLOAT32);
CHECKNEQ(returnCode, OH_NN_SUCCESS, -1, "Set TensorDesc data type failed.");
returnCode = OH_NNTensorDesc_SetFormat(tensorDesc, OH_NN_FORMAT_NONE);
CHECKNEQ(returnCode, OH_NN_SUCCESS, -1, "Set TensorDesc format failed.");
returnCode = OH_NNModel_AddTensorToModel(model, tensorDesc);
CHECKNEQ(returnCode, OH_NN_SUCCESS, -1, "Add second TensorDesc to model failed.");
returnCode = OH_NNModel_SetTensorType(model, 1, OH_NN_TENSOR);
CHECKNEQ(returnCode, OH_NN_SUCCESS, -1, "Set model tensor type failed.");
// Add the parameter tensor of the Add operator. It specifies the activation function type, and its data type is int8.
tensorDesc = OH_NNTensorDesc_Create();
CHECKEQ(tensorDesc, nullptr, -1, "Create TensorDesc failed.");
int32_t activationDims = 1;
returnCode = OH_NNTensorDesc_SetShape(tensorDesc, &activationDims, 1);
CHECKNEQ(returnCode, OH_NN_SUCCESS, -1, "Set TensorDesc shape failed.");
returnCode = OH_NNTensorDesc_SetDataType(tensorDesc, OH_NN_INT8);
CHECKNEQ(returnCode, OH_NN_SUCCESS, -1, "Set TensorDesc data type failed.");
returnCode = OH_NNTensorDesc_SetFormat(tensorDesc, OH_NN_FORMAT_NONE);
CHECKNEQ(returnCode, OH_NN_SUCCESS, -1, "Set TensorDesc format failed.");
returnCode = OH_NNModel_AddTensorToModel(model, tensorDesc);
CHECKNEQ(returnCode, OH_NN_SUCCESS, -1, "Add third TensorDesc to model failed.");
returnCode = OH_NNModel_SetTensorType(model, 2, OH_NN_ADD_ACTIVATIONTYPE);
CHECKNEQ(returnCode, OH_NN_SUCCESS, -1, "Set model tensor type failed.");
// Set the activation function type to OH_NN_FUSED_NONE, meaning no activation function is added to the operator.
int8_t activationValue = OH_NN_FUSED_NONE;
returnCode = OH_NNModel_SetTensorData(model, 2, &activationValue, sizeof(int8_t));
CHECKNEQ(returnCode, OH_NN_SUCCESS, -1, "Set model tensor data failed.");
// Set the output tensor of the Add operator: data type float32, shape [1, 2, 2, 3]
tensorDesc = OH_NNTensorDesc_Create();
CHECKEQ(tensorDesc, nullptr, -1, "Create TensorDesc failed.");
returnCode = OH_NNTensorDesc_SetShape(tensorDesc, inputDims, 4);
CHECKNEQ(returnCode, OH_NN_SUCCESS, -1, "Set TensorDesc shape failed.");
returnCode = OH_NNTensorDesc_SetDataType(tensorDesc, OH_NN_FLOAT32);
CHECKNEQ(returnCode, OH_NN_SUCCESS, -1, "Set TensorDesc data type failed.");
returnCode = OH_NNTensorDesc_SetFormat(tensorDesc, OH_NN_FORMAT_NONE);
CHECKNEQ(returnCode, OH_NN_SUCCESS, -1, "Set TensorDesc format failed.");
returnCode = OH_NNModel_AddTensorToModel(model, tensorDesc);
CHECKNEQ(returnCode, OH_NN_SUCCESS, -1, "Add fourth TensorDesc to model failed.");
returnCode = OH_NNModel_SetTensorType(model, 3, OH_NN_TENSOR);
CHECKNEQ(returnCode, OH_NN_SUCCESS, -1, "Set model tensor type failed.");
// Specify the indices of the input tensors, parameter tensor, and output tensor of the Add operator
uint32_t inputIndicesValues[2] = {0, 1};
uint32_t paramIndicesValues = 2;
uint32_t outputIndicesValues = 3;
OH_NN_UInt32Array paramIndices = {&paramIndicesValues, 1};
OH_NN_UInt32Array inputIndices = {inputIndicesValues, 2};
OH_NN_UInt32Array outputIndices = {&outputIndicesValues, 1};
// Add the Add operator to the model instance
returnCode = OH_NNModel_AddOperation(model, OH_NN_OPS_ADD, &paramIndices, &inputIndices, &outputIndices);
CHECKNEQ(returnCode, OH_NN_SUCCESS, -1, "Add operation to model failed.");
// Set the indices of the input and output tensors of the model instance
returnCode = OH_NNModel_SpecifyInputsAndOutputs(model, &inputIndices, &outputIndices);
CHECKNEQ(returnCode, OH_NN_SUCCESS, -1, "Specify model inputs and outputs failed.");
// Complete the construction of the model instance
returnCode = OH_NNModel_Finish(model);
CHECKNEQ(returnCode, OH_NN_SUCCESS, -1, "Build model failed.");
// Return the model instance
*pmodel = model;
return OH_NN_SUCCESS;
}
```
5. Query the AI acceleration chips connected to Neural Network Runtime.
Neural Network Runtime can connect to multiple AI acceleration chips through the HDI interfaces. Before compiling a model, query the AI acceleration chips that Neural Network Runtime has connected to on the current device. Each AI acceleration chip has a unique ID; in the compilation phase, the chip on which the model is compiled is specified by its device ID.
```cpp
void GetAvailableDevices(std::vector<size_t>& availableDevice)
{
@@ -235,116 +420,140 @@ The Neural Network Runtime development process mainly consists of **model construction**, **model compilation**
}
```
5. Compile the model on the specified device.
6. Compile the model on the specified device.
Neural Network Runtime uses an abstract model representation to describe the topology of an AI model. Before execution on an acceleration chip, the compilation module provided by Neural Network Runtime must deliver the abstract model representation down to the chip driver layer and convert it into a format that can be used directly for inference.
Neural Network Runtime uses an abstract model representation to describe the topology of an AI model. Before execution on an AI acceleration chip, a compilation instance must be created through the compilation module provided by Neural Network Runtime. The compilation instance then delivers the abstract model representation down to the chip driver layer and converts it into a format that can be used directly for inference; this process is model compilation.
```cpp
OH_NN_ReturnCode CreateCompilation(OH_NNModel* model, const std::vector<size_t>& availableDevice, OH_NNCompilation** pCompilation)
OH_NN_ReturnCode CreateCompilation(OH_NNModel* model, const std::vector<size_t>& availableDevice,
OH_NNCompilation** pCompilation)
{
// Create a compilation instance to pass the model to the underlying hardware for compilation
// Create the compilation instance (compilation), passing in the constructed model instance or a model instance handed down by MindSpore Lite
OH_NNCompilation* compilation = OH_NNCompilation_Construct(model);
if (compilation == nullptr) {
std::cout << "CreateCompilation failed, error happended when creating compilation." << std::endl;
return OH_NN_MEMORY_ERROR;
}
CHECKEQ(compilation, nullptr, -1, "OH_NNCompilation_Construct failed.");

OH_NN_ReturnCode returnCode = OH_NN_FAILED;

// Set compilation options such as the target hardware, cache path, performance mode, computation priority, and whether to enable float16 low-precision computation
// Compile the model on the first device in the list
OH_NN_ReturnCode ret = OH_NNCompilation_SetDevice(compilation, availableDevice[0]);
if (ret != OH_NN_SUCCESS) {
std::cout << "CreateCompilation failed, error happened when setting device." << std::endl;
return ret;
}
returnCode = OH_NNCompilation_SetDevice(compilation, availableDevice[0]);
CHECKNEQ(returnCode, OH_NN_SUCCESS, -1, "OH_NNCompilation_SetDevice failed.");
// Cache the compiled model in the /data/local/tmp directory, with the cache version set to 1
ret = OH_NNCompilation_SetCache(compilation, "/data/local/tmp", 1);
if (ret != OH_NN_SUCCESS) {
std::cout << "CreateCompilation failed, error happened when setting cache path." << std::endl;
return ret;
}
returnCode = OH_NNCompilation_SetCache(compilation, "/data/local/tmp", 1);
CHECKNEQ(returnCode, OH_NN_SUCCESS, -1, "OH_NNCompilation_SetCache failed.");
// Complete the compilation settings and compile the model
ret = OH_NNCompilation_Build(compilation);
if (ret != OH_NN_SUCCESS) {
std::cout << "CreateCompilation failed, error happened when building compilation." << std::endl;
return ret;
}
// Set the hardware performance mode
returnCode = OH_NNCompilation_SetPerformanceMode(compilation, OH_NN_PERFORMANCE_EXTREME);
CHECKNEQ(returnCode, OH_NN_SUCCESS, -1, "OH_NNCompilation_SetPerformanceMode failed.");
// Set the inference execution priority
returnCode = OH_NNCompilation_SetPriority(compilation, OH_NN_PRIORITY_HIGH);
CHECKNEQ(returnCode, OH_NN_SUCCESS, -1, "OH_NNCompilation_SetPriority failed.");
// Specify whether to enable float16 computation
returnCode = OH_NNCompilation_EnableFloat16(compilation, false);
CHECKNEQ(returnCode, OH_NN_SUCCESS, -1, "OH_NNCompilation_EnableFloat16 failed.");
// Perform model compilation
returnCode = OH_NNCompilation_Build(compilation);
CHECKNEQ(returnCode, OH_NN_SUCCESS, -1, "OH_NNCompilation_Build failed.");
*pCompilation = compilation;
return OH_NN_SUCCESS;
}
```
6. Create an executor.
7. Create an executor.
After model compilation, call the execution module of Neural Network Runtime to create an inference executor. In the execution phase, setting model inputs, obtaining model outputs, and triggering inference are all performed through the executor.
After model compilation, call the execution module of Neural Network Runtime to create an executor from the compilation instance. In the model inference phase, operations such as setting model inputs, triggering inference, and obtaining model outputs are all performed through the executor.
```cpp
OH_NNExecutor* CreateExecutor(OH_NNCompilation* compilation)
{
// Create an executor instance
OH_NNExecutor* executor = OH_NNExecutor_Construct(compilation);
// Create the executor (executor) from the compilation instance (compilation)
OH_NNExecutor *executor = OH_NNExecutor_Construct(compilation);
CHECKEQ(executor, nullptr, -1, "OH_NNExecutor_Construct failed.");
return executor;
}
```
7. Perform inference and print the results.
8. Perform inference and print the inference results.
Use the APIs provided by the execution module to pass the input data required for inference to the executor, trigger the executor to perform one inference, and obtain the inference results of the model.
Use the APIs provided by the execution module to pass the input data required for inference to the executor, trigger the executor to perform one inference, and obtain and print the inference results of the model.
```cpp
OH_NN_ReturnCode Run(OH_NNExecutor* executor)
OH_NN_ReturnCode Run(OH_NNExecutor* executor, const std::vector<size_t>& availableDevice)
{
// Construct example data
float input1[12] = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11};
float input2[12] = {11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22};
int32_t inputDims[4] = {1, 2, 2, 3};
OH_NN_Tensor inputTensor1 = {OH_NN_FLOAT32, 4, inputDims, nullptr, OH_NN_TENSOR};
OH_NN_Tensor inputTensor2 = {OH_NN_FLOAT32, 4, inputDims, nullptr, OH_NN_TENSOR};
// Set the inputs for execution
// Set the first input for execution; the input data is provided by input1
OH_NN_ReturnCode ret = OH_NNExecutor_SetInput(executor, 0, &inputTensor1, input1, DATA_LENGTH);
if (ret != OH_NN_SUCCESS) {
std::cout << "Run failed, error happened when setting first input." << std::endl;
return ret;
OH_NN_ReturnCode returnCode = OH_NN_FAILED;

// Obtain input and output information from the executor
// Get the number of input tensors
size_t inputCount = 0;
returnCode = OH_NNExecutor_GetInputCount(executor, &inputCount);
CHECKNEQ(returnCode, OH_NN_SUCCESS, -1, "OH_NNExecutor_GetInputCount failed.");
std::vector<NN_TensorDesc*> inputTensorDescs;
NN_TensorDesc* tensorDescTmp = nullptr;
for (size_t i = 0; i < inputCount; ++i) {
// Create the description of the input tensor
tensorDescTmp = OH_NNExecutor_CreateInputTensorDesc(executor, i);
CHECKEQ(tensorDescTmp, nullptr, -1, "OH_NNExecutor_CreateInputTensorDesc failed.");
inputTensorDescs.emplace_back(tensorDescTmp);
}
// Get the number of output tensors
size_t outputCount = 0;
returnCode = OH_NNExecutor_GetOutputCount(executor, &outputCount);
CHECKNEQ(returnCode, OH_NN_SUCCESS, -1, "OH_NNExecutor_GetOutputCount failed.");
std::vector<NN_TensorDesc*> outputTensorDescs;
for (size_t i = 0; i < outputCount; ++i) {
// Create the description of the output tensor
tensorDescTmp = OH_NNExecutor_CreateOutputTensorDesc(executor, i);
CHECKEQ(tensorDescTmp, nullptr, -1, "OH_NNExecutor_CreateOutputTensorDesc failed.");
outputTensorDescs.emplace_back(tensorDescTmp);
}
// Set the second input for execution; the input data is provided by input2
ret = OH_NNExecutor_SetInput(executor, 1, &inputTensor2, input2, DATA_LENGTH);
if (ret != OH_NN_SUCCESS) {
std::cout << "Run failed, error happened when setting second input." << std::endl;
return ret;
// Create the input and output tensors
NN_Tensor* inputTensors[inputCount];
NN_Tensor* tensor = nullptr;
for (size_t i = 0; i < inputCount; ++i) {
tensor = nullptr;
tensor = OH_NNTensor_Create(availableDevice[0], inputTensorDescs[i]);
CHECKEQ(tensor, nullptr, -1, "OH_NNTensor_Create failed.");
inputTensors[i] = tensor;
}
NN_Tensor* outputTensors[outputCount];
for (size_t i = 0; i < outputCount; ++i) {
tensor = nullptr;
tensor = OH_NNTensor_Create(availableDevice[0], outputTensorDescs[i]);
CHECKEQ(tensor, nullptr, -1, "OH_NNTensor_Create failed.");
outputTensors[i] = tensor;
}
// Set the output data buffer; after OH_NNExecutor_Run performs the computation, the results are kept in output
float output[12];
ret = OH_NNExecutor_SetOutput(executor, 0, output, DATA_LENGTH);
if (ret != OH_NN_SUCCESS) {
std::cout << "Run failed, error happened when setting output buffer." << std::endl;
return ret;
}
// Set the data of the input tensors
returnCode = SetInputData(inputTensors, inputCount);
CHECKNEQ(returnCode, OH_NN_SUCCESS, -1, "SetInputData failed.");
// Perform the computation
ret = OH_NNExecutor_Run(executor);
if (ret != OH_NN_SUCCESS) {
std::cout << "Run failed, error doing execution." << std::endl;
return ret;
}
// Perform inference
returnCode = OH_NNExecutor_RunSync(executor, inputTensors, inputCount, outputTensors, outputCount);
CHECKNEQ(returnCode, OH_NN_SUCCESS, -1, "OH_NNExecutor_RunSync failed.");
// Print the output results
for (uint32_t i = 0; i < 12; i++) {
std::cout << "Output index: " << i << ", value is: " << output[i] << "." << std::endl;
// Print the data of the output tensors
Print(outputTensors, outputCount);
// Clean up the input and output tensors and the tensor descriptions
for (size_t i = 0; i < inputCount; ++i) {
returnCode = OH_NNTensor_Destroy(&inputTensors[i]);
CHECKNEQ(returnCode, OH_NN_SUCCESS, -1, "OH_NNTensor_Destroy failed.");
returnCode = OH_NNTensorDesc_Destroy(&inputTensorDescs[i]);
CHECKNEQ(returnCode, OH_NN_SUCCESS, -1, "OH_NNTensorDesc_Destroy failed.");
}
for (size_t i = 0; i < outputCount; ++i) {
returnCode = OH_NNTensor_Destroy(&outputTensors[i]);
CHECKNEQ(returnCode, OH_NN_SUCCESS, -1, "OH_NNTensor_Destroy failed.");
returnCode = OH_NNTensorDesc_Destroy(&outputTensorDescs[i]);
CHECKNEQ(returnCode, OH_NN_SUCCESS, -1, "OH_NNTensorDesc_Destroy failed.");
}
return OH_NN_SUCCESS;
}
```
8. Build the end-to-end model construction, compilation, and execution workflow.
9. Build the end-to-end model construction, compilation, and execution workflow.
Steps 3 to 7 implement the model construction, compilation, and execution flow and encapsulate it in four functions to facilitate modular development. The following sample code chains these four functions into a complete Neural Network Runtime development workflow.
Steps 4 to 8 implement the model construction, compilation, and execution flow and encapsulate it in several functions to facilitate modular development. The following sample code chains these functions into a complete Neural Network Runtime usage workflow.
```cpp
int main()
{
@@ -353,7 +562,8 @@ The Neural Network Runtime development process mainly consists of **model construction**, **model compilation**
OH_NNExecutor* executor = nullptr;
std::vector<size_t> availableDevices;
// Model construction phase
// Model construction
OH_NNModel* model = nullptr;
OH_NN_ReturnCode ret = BuildModel(&model);
if (ret != OH_NN_SUCCESS) {
std::cout << "BuildModel failed." << std::endl;
@@ -369,7 +579,7 @@ The Neural Network Runtime development process mainly consists of **model construction**, **model compilation**
return -1;
}
// Model compilation phase
// Model compilation
ret = CreateCompilation(model, availableDevices, &compilation);
if (ret != OH_NN_SUCCESS) {
std::cout << "CreateCompilation failed." << std::endl;
@@ -378,28 +588,29 @@ The Neural Network Runtime development process mainly consists of **model construction**, **model compilation**
return -1;
}
// Destroy the model instance
OH_NNModel_Destroy(&model);
// Create the inference executor for the model
executor = CreateExecutor(compilation);
if (executor == nullptr) {
std::cout << "CreateExecutor failed, no executor is created." << std::endl;
OH_NNModel_Destroy(&model);
OH_NNCompilation_Destroy(&compilation);
return -1;
}
// Use the executor created in the previous step to perform a single inference
ret = Run(executor);
// Destroy the compilation instance
OH_NNCompilation_Destroy(&compilation);
// Use the executor created in the previous step to perform inference
ret = Run(executor, availableDevices);
if (ret != OH_NN_SUCCESS) {
std::cout << "Run failed." << std::endl;
OH_NNModel_Destroy(&model);
OH_NNCompilation_Destroy(&compilation);
OH_NNExecutor_Destroy(&executor);
return -1;
}
// Release the allocated resources
OH_NNModel_Destroy(&model);
OH_NNCompilation_Destroy(&compilation);
// Destroy the executor instance
OH_NNExecutor_Destroy(&executor);
return 0;
@@ -420,7 +631,8 @@ The Neural Network Runtime development process mainly consists of **model construction**, **model compilation**
)
target_link_libraries(nnrt_example
neural_network_runtime.z
neural_network_runtime
neural_network_core
)
```
@@ -447,18 +659,18 @@ The Neural Network Runtime development process mainly consists of **model construction**, **model compilation**
If the sample runs correctly, the following output should be displayed.
```text
Output index: 0, value is: 11.000000.
Output index: 1, value is: 13.000000.
Output index: 2, value is: 15.000000.
Output index: 3, value is: 17.000000.
Output index: 4, value is: 19.000000.
Output index: 5, value is: 21.000000.
Output index: 6, value is: 23.000000.
Output index: 7, value is: 25.000000.
Output index: 8, value is: 27.000000.
Output index: 9, value is: 29.000000.
Output index: 10, value is: 31.000000.
Output index: 11, value is: 33.000000.
Output index: 0, value is: 0.000000.
Output index: 1, value is: 2.000000.
Output index: 2, value is: 4.000000.
Output index: 3, value is: 6.000000.
Output index: 4, value is: 8.000000.
Output index: 5, value is: 10.000000.
Output index: 6, value is: 12.000000.
Output index: 7, value is: 14.000000.
Output index: 8, value is: 16.000000.
Output index: 9, value is: 18.000000.
Output index: 10, value is: 20.000000.
Output index: 11, value is: 22.000000.
```
4. Check the model cache (optional).
@@ -476,15 +688,10 @@ The Neural Network Runtime development process mainly consists of **model construction**, **model compilation**
The printed result is as follows:
```text
# 0.nncache cache_info.nncache
# 0.nncache 1.nncache 2.nncache cache_info.nncache
```
If the cache is no longer needed, delete it manually; for example, run the following command to delete the cache files.
```shell
rm /data/local/tmp/*nncache
```
## Related Samples
For the procedure of connecting a third-party AI inference framework to Neural Network Runtime, refer to the following sample:
- [Development Guide for Connecting TensorFlow Lite to the NNRt Delegate](https://gitee.com/openharmony-sig/neural_network_runtime/tree/master/example/deep_learning_framework)