1. Set up the environment on Ubuntu 20.04

1.1 Install the model conversion environment

1.1.1 Install the Ubuntu dependency packages

sudo apt-get install libxslt1-dev zlib1g zlib1g-dev libglib2.0-0 libsm6 libgl1-mesa-glx libprotobuf-dev gcc g++

1.1.2 Prepare the Python environment

  • Install Miniconda (it is recommended to create the Python environment with conda)
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
bash Miniconda3-latest-Linux-x86_64.sh
source ~/.bashrc
# verify the installation
conda --version
# deactivate the environment
conda deactivate

  • Create the Python environment
conda create -n rknn2 python=3.8   # the rknn_toolkit2 wheel installed below is a cp38 build, so the environment needs Python 3.8
conda activate rknn2
# rknn_toolkit2 has a specific numpy dependency (the version must not be too new), so install numpy==1.16.6 first
pip install numpy==1.16.6
# pyyaml is used during the conversion step
pip install pyyaml
  • Get the latest rknn-toolkit2 tool
# Option 1:
git clone https://github.com/rockchip-linux/rknn-toolkit2.git
cd rknn-toolkit2/rknn-toolkit2/package
pip install rknn_toolkit2-1.6.0+81f21f4d-cp38-cp38-linux_x86_64.whl

# Option 2:
# Go to https://github.com/rockchip-linux/rknn-toolkit2 and browse into the rknn-toolkit2/package folder,
# then download rknn_toolkit2-1.6.0+81f21f4d-cp38-cp38-linux_x86_64.whl directly and install it

Note: use a proxy when installing, otherwise the installation is very likely to fail. The approach circulating online of installing via wget https://bj.bcebos.com/fastdeploy/third_libs/rknn_toolkit2-1.5.1b19+4c81851a-cp leads to errors later when converting the ONNX model to an RKNN model.

  • Install paddle2onnx

    This step prepares for the model conversion below:
    paddle2onnx: Paddle model ------> ONNX model
    RKNN-Toolkit2: ONNX model ------> RKNN model
    (a minimal sketch of the ONNX -> RKNN step follows the install command below)

# Installing paddle2onnx is very simple; in a terminal run:
pip install paddle2onnx
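
The actual ONNX -> RKNN conversion in section 1.1.4 is driven by rknpu2_tools/export.py together with a yaml config. Purely for orientation, here is a minimal sketch of the rknn-toolkit2 API calls that such an export script wraps; it is illustrative only (the model path and mean/std values simply mirror the det config edited in section 1.1.3), not a replacement for export.py:

from rknn.api import RKNN

rknn = RKNN(verbose=True)
# preprocessing baked into the RKNN model, plus the target chip
rknn.config(mean_values=[[123.675, 116.28, 103.53]],
            std_values=[[58.395, 57.12, 57.375]],
            target_platform="rk3588")
rknn.load_onnx(model="./ch_PP-OCRv4_det_infer/ch_PP-OCRv4_det_infer.onnx")
rknn.build(do_quantization=False)   # no quantization, so no calibration dataset is needed
rknn.export_rknn("./ch_PP-OCRv4_det_infer/ch_PP-OCRv4_det_infer_rk3588_unquantized.rknn")
rknn.release()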

1.1.3 Download the FastDeploy repository

Note: the FastDeploy repository here is downloaded on the RK3588 board.

Create a PaddleOCR folder on the board and pull the code with git clone https://github.com/PaddlePaddle/FastDeploy.git.
A small tip: if git clone is far too slow and you have no proxy tool such as clash, you can pull the code from Gitee instead.

# These commands are run on the board
mkdir ~/PaddleOCR && cd ~/PaddleOCR

# via GitHub
git clone https://github.com/PaddlePaddle/FastDeploy.git
# via Gitee
git clone https://gitee.com/paddlepaddle/FastDeploy.git

After the code has been pulled, create a PaddleOCR folder on the PC-side Ubuntu system as well, and copy the rknpu2_tools folder from ~/PaddleOCR/FastDeploy/examples/vision/ocr/PP-OCR/rockchip on the board into ~/PaddleOCR on the PC. Then edit the file paths:

# The following is done on the PC
mkdir ~/PaddleOCR && cd ~/PaddleOCR
cd rknpu2_tools/config
vim ppocrv3_det.yaml
# change model_path to ./ch_PP-OCRv4_det_infer/ch_PP-OCRv4_det_infer.onnx
# change output_folder to ./ch_PP-OCRv4_det_infer
#############################
mean:
  -
    - 123.675
    - 116.28
    - 103.53
std:
  -
    - 58.395
    - 57.12
    - 57.375
model_path: ./ch_PP-OCRv4_det_infer/ch_PP-OCRv4_det_infer.onnx
outputs_nodes:
do_quantization: False
dataset:
output_folder: "./ch_PP-OCRv3_det_infer"
#################################################

vim ppocrv3_rec.yaml
# change model_path to ./ch_PP-OCRv4_rec_infer/ch_PP-OCRv4_rec_infer.onnx
# change output_folder to ./ch_PP-OCRv4_rec_infer
mean:
  -
    - 127.5
    - 127.5
    - 127.5
std:
  -
    - 127.5
    - 127.5
    - 127.5
model_path: ./ch_PP-OCRv4_rec_infer/ch_PP-OCRv4_rec_infer.onnx
outputs_nodes:
do_quantization: False
dataset:
output_folder: "./ch_PP-OCRv3_rec_infer"

1.1.4 Model conversion

Get the test models

# Run this on the PC
cd ~/PaddleOCR
# Download the PP-OCRv4 text detection model and extract it
wget -c https://paddleocr.bj.bcebos.com/PP-OCRv4/chinese/ch_PP-OCRv4_det_infer.tar
tar -xvf ch_PP-OCRv4_det_infer.tar

# Download the PP-OCRv4 text recognition model and extract it
wget -c https://paddleocr.bj.bcebos.com/PP-OCRv4/chinese/ch_PP-OCRv4_rec_infer.tar
tar -xvf ch_PP-OCRv4_rec_infer.tar

# Download the text direction classifier model and extract it
wget -c https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_cls_infer.tar
tar -xvf ch_ppocr_mobile_v2.0_cls_infer.tar

# The steps above leave you with three folders, each containing a Paddle model; next is the Paddle => ONNX conversion
# Convert the models to ONNX format
paddle2onnx --model_dir ch_PP-OCRv4_det_infer \
            --model_filename inference.pdmodel \
            --params_filename inference.pdiparams \
            --save_file ch_PP-OCRv4_det_infer/ch_PP-OCRv4_det_infer.onnx \
            --opset_version 12 \
            --enable_onnx_checker True \
            --enable_dev_version True



paddle2onnx --model_dir ch_ppocr_mobile_v2.0_cls_infer \
            --model_filename inference.pdmodel \
            --params_filename inference.pdiparams \
            --save_file ch_ppocr_mobile_v2.0_cls_infer/ch_ppocr_mobile_v2.0_cls_infer.onnx \
            --opset_version 12 \
            --enable_onnx_checker True \
            --enable_dev_version True


paddle2onnx --model_dir ch_PP-OCRv4_rec_infer \
            --model_filename inference.pdmodel \
            --params_filename inference.pdiparams \
            --save_file ch_PP-OCRv4_rec_infer/ch_PP-OCRv4_rec_infer.onnx \
            --opset_version 12 \
            --enable_onnx_checker True \
            --enable_dev_version True
            
            
# Fix the models' input shapes
python -m paddle2onnx.optimize --input_model ch_PP-OCRv4_det_infer/ch_PP-OCRv4_det_infer.onnx \
                               --output_model ch_PP-OCRv4_det_infer/ch_PP-OCRv4_det_infer.onnx \
                               --input_shape_dict "{'x':[1,3,960,960]}"

python -m paddle2onnx.optimize --input_model ch_ppocr_mobile_v2.0_cls_infer/ch_ppocr_mobile_v2.0_cls_infer.onnx \
                               --output_model ch_ppocr_mobile_v2.0_cls_infer/ch_ppocr_mobile_v2.0_cls_infer.onnx \
                               --input_shape_dict "{'x':[1,3,48,192]}"

python -m paddle2onnx.optimize --input_model ch_PP-OCRv4_rec_infer/ch_PP-OCRv4_rec_infer.onnx \
                               --output_model ch_PP-OCRv4_rec_infer/ch_PP-OCRv4_rec_infer.onnx \
                               --input_shape_dict "{'x':[1,3,48,320]}"

# At this point the Paddle -> ONNX conversion is finished; next is the ONNX -> RKNN conversion.

# Convert the ONNX models to RKNN models.
# Run the following three commands
python rknpu2_tools/export.py --config_path rknpu2_tools/config/ppocrv3_det.yaml --target_platform rk3588
python rknpu2_tools/export.py --config_path rknpu2_tools/config/ppocrv3_rec.yaml --target_platform rk3588
python rknpu2_tools/export.py --config_path rknpu2_tools/config/ppocrv3_cls.yaml --target_platform rk3588
# When all three commands have finished, the terminal output for each should end with
D RKNN: [03:17:31.652] ----------------------------------------
D RKNN: [03:17:31.652] <<<<<<<< end: N4rknn21RKNNMemStatisticsPassE
I rknn buiding done.
W init_runtime: Target is None, use simulator!
Export OK!
# If an error is reported, your rknn_toolkit2 version is out of date; uninstall the old rknn_toolkit2 and reinstall it following section 1.1.2
# At this point an rknn model has been generated in each model's extracted folder:
~/PaddleOCR/ch_ppocr_mobile_v2.0_cls_infer/ch_ppocr_mobile_v20_cls_infer_rk3588_unquantized.rknn
~/PaddleOCR/ch_PP-OCRv4_det_infer/ch_PP-OCRv4_det_infer_rk3588_unquantized.rknn
~/PaddleOCR/ch_PP-OCRv4_rec_infer/ch_PP-OCRv4_rec_infer_rk3588_unquantized.rknn
# Copy these files to the board

2. Build the FastDeploy library on the board

The FastDeploy library is compiled on the RK3588 board.

2.1 Compile the FastDeploy library

Download the FastDeploy repository on the board as described in section 1.1.3.

Note: all of the following steps are performed on the board.

After the code has been pulled locally, run the following:

cd FastDeploy
# If you are using the develop branch, run the following
git checkout develop
# Start the build
mkdir build && cd build
cmake ..  -DENABLE_ORT_BACKEND=OFF \
          -DENABLE_RKNPU2_BACKEND=ON \
          -DENABLE_VISION=ON \
          -DRKNN2_TARGET_SOC=RK3588 \
          -DCMAKE_INSTALL_PREFIX=${PWD}/fastdeploy-0.0.0

# build if soc is RK3588
make -j8
make install

Configure the environment variables

To make this easier, FastDeploy provides a script that configures the environment variables in one go. Before running the program, execute the following commands.

# Temporary configuration
source ~/PaddleOCR/FastDeploy/build/fastdeploy-0.0.0/fastdeploy_init.sh

# Permanent configuration
source ~/PaddleOCR/FastDeploy/build/fastdeploy-0.0.0/fastdeploy_init.sh
sudo cp ~/PaddleOCR/FastDeploy/build/fastdeploy-0.0.0/fastdeploy_libs.conf /etc/ld.so.conf.d/
sudo ldconfig

2.2 Run the deployment example

First, on the board, go into FastDeploy/examples/vision/ocr/PP-OCR/rockchip/cpp under the FastDeploy folder you just built, create a new build folder, then go back to the cpp folder, open CMakeLists.txt, and modify the include part.

PROJECT(infer_demo C CXX)
CMAKE_MINIMUM_REQUIRED (VERSION 3.10)

# Specify the path of the FastDeploy library (here, the fastdeploy-0.0.0 folder built above)
option(FASTDEPLOY_INSTALL_DIR "~/PaddleOCR/FastDeploy/build/fastdeploy-0.0.0")

# This is the part that needs to be changed!!!
# Manually locate the FastDeploy.cmake file under the build folder where you compiled FastDeploy!
include(~/PaddleOCR/FastDeploy/build/fastdeploy-0.0.0/FastDeploy.cmake)

# Add the FastDeploy header directories
include_directories(${FASTDEPLOY_INCS})

add_executable(infer_demo ${PROJECT_SOURCE_DIR}/infer.cc)
# Link the FastDeploy libraries
target_link_libraries(infer_demo ${FASTDEPLOY_LIBS})

cd into the FastDeploy/examples/vision/ocr/PP-OCR/rockchip/cpp/build folder you just created and run the following:

cd ~/PaddleOCR/FastDeploy/examples/vision/ocr/PP-OCR/rockchip/cpp/build
# Build infer_demo against the FastDeploy library compiled above
cmake .. -DFASTDEPLOY_INSTALL_DIR=~/PaddleOCR/FastDeploy/build/fastdeploy-0.0.0
make -j

Note: this build produces an executable named infer_demo, which is our OCR program. Copy the RKNN models converted on the PC into the current folder, then download the test image and the dictionary file.

# Download the test image and the dictionary file
wget https://gitee.com/paddlepaddle/PaddleOCR/raw/release/2.6/doc/imgs/12.jpg
wget https://gitee.com/paddlepaddle/PaddleOCR/raw/release/2.6/ppocr/utils/ppocr_keys_v1.txt

Run the program:

# CPU inference
./infer_demo ./ch_PP-OCRv4_det_infer/ch_PP-OCRv4_det_infer.onnx \
             ./ch_ppocr_mobile_v2.0_cls_infer/ch_ppocr_mobile_v2.0_cls_infer.onnx \
             ./ch_PP-OCRv4_rec_infer/ch_PP-OCRv4_rec_infer.onnx \
             ./ppocr_keys_v1.txt \
             ./12.jpg \
             0
# RKNPU inference
./infer_demo ./ch_PP-OCRv4_det_infer/ch_PP-OCRv4_det_infer_rk3588_unquantized.rknn \
             ./ch_ppocr_mobile_v2.0_cls_infer/ch_ppocr_mobile_v20_cls_infer_rk3588_unquantized.rknn \
             ./ch_PP-OCRv4_rec_infer/ch_PP-OCRv4_rec_infer_rk3588_unquantized.rknn \
             ./ppocr_keys_v1.txt \
             ./12.jpg \
             1

The output looks like the following:

[INFO] fastdeploy/runtime/backends/rknpu2/rknpu2_backend.cc(81)::GetSDKAndDeviceVersion	rknpu2 runtime version: 1.5.1b19 (32afb0e92@2023-07-14T12:46:17)
[INFO] fastdeploy/runtime/backends/rknpu2/rknpu2_backend.cc(82)::GetSDKAndDeviceVersion	rknpu2 driver version: 0.9.6
index=0, name=x, n_dims=4, dims=[1, 960, 960, 3], n_elems=2764800, size=5529600, fmt=NHWC, type=FP16, qnt_type=AFFINE, zp=0, scale=1.000000, pass_through=0
index=0, name=sigmoid_0.tmp_0, n_dims=4, dims=[1, 1, 960, 960], n_elems=921600, size=1843200, fmt=NCHW, type=FP32, qnt_type=AFFINE, zp=0, scale=1.000000, pass_through=0
[INFO] fastdeploy/runtime/runtime.cc(367)::CreateRKNPU2Backend	Runtime initialized with Backend::RKNPU2 in Device::RKNPU.
[INFO] fastdeploy/runtime/backends/rknpu2/rknpu2_backend.cc(81)::GetSDKAndDeviceVersion	rknpu2 runtime version: 1.5.1b19 (32afb0e92@2023-07-14T12:46:17)
[INFO] fastdeploy/runtime/backends/rknpu2/rknpu2_backend.cc(82)::GetSDKAndDeviceVersion	rknpu2 driver version: 0.9.6
index=0, name=x, n_dims=4, dims=[1, 48, 192, 3], n_elems=27648, size=55296, fmt=NHWC, type=FP16, qnt_type=AFFINE, zp=0, scale=1.000000, pass_through=0
index=0, name=softmax_0.tmp_0, n_dims=2, dims=[1, 2, 0, 0], n_elems=2, size=4, fmt=UNDEFINED, type=FP32, qnt_type=AFFINE, zp=0, scale=1.000000, pass_through=0
[INFO] fastdeploy/runtime/runtime.cc(367)::CreateRKNPU2Backend	Runtime initialized with Backend::RKNPU2 in Device::RKNPU.
[INFO] fastdeploy/runtime/backends/rknpu2/rknpu2_backend.cc(81)::GetSDKAndDeviceVersion	rknpu2 runtime version: 1.5.1b19 (32afb0e92@2023-07-14T12:46:17)
[INFO] fastdeploy/runtime/backends/rknpu2/rknpu2_backend.cc(82)::GetSDKAndDeviceVersion	rknpu2 driver version: 0.9.6
index=0, name=x, n_dims=4, dims=[1, 48, 320, 3], n_elems=46080, size=92160, fmt=NHWC, type=FP16, qnt_type=AFFINE, zp=0, scale=1.000000, pass_through=0
index=0, name=softmax_5.tmp_0, n_dims=4, dims=[1, 40, 6625, 1], n_elems=265000, size=530000, fmt=NCHW, type=FP32, qnt_type=AFFINE, zp=0, scale=1.000000, pass_through=0
[INFO] fastdeploy/runtime/runtime.cc(367)::CreateRKNPU2Backend	Runtime initialized with Backend::RKNPU2 in Device::RKNPU.
[WARNING] fastdeploy/runtime/backends/rknpu2/rknpu2_backend.cc(420)::InitRKNNTensorMemory	The input tensor type != model's inputs type.The input_type need FP16,but inputs[0].type is UINT8
[WARNING] fastdeploy/runtime/backends/rknpu2/rknpu2_backend.cc(420)::InitRKNNTensorMemory	The input tensor type != model's inputs type.The input_type need FP16,but inputs[0].type is UINT8
[WARNING] fastdeploy/runtime/backends/rknpu2/rknpu2_backend.cc(420)::InitRKNNTensorMemory	The input tensor type != model's inputs type.The input_type need FP16,but inputs[0].type is UINT8
det boxes: [[276,174],[285,173],[285,178],[276,179]]rec text:  rec score:0.000000 cls label: 1 cls score: 0.758301
det boxes: [[43,408],[483,390],[483,431],[44,449]]rec text: 上海斯格威铂尔曼大酒店 rec score:0.888450 cls label: 0 cls score: 1.000000
det boxes: [[186,456],[399,448],[399,480],[186,488]]rec text: 打浦路15号 rec score:0.988769 cls label: 0 cls score: 1.000000
det boxes: [[18,501],[513,485],[514,537],[18,554]]rec text: 绿洲仕格维花园公寓 rec score:0.992676 cls label: 0 cls score: 1.000000
det boxes: [[78,553],[404,541],[404,573],[78,585]]rec text: 打浦路252935号 rec score:0.983398 cls label: 0 cls score: 1.000000
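
For reference, the recognition model's raw output above (dims=[1, 40, 6625, 1]) is a 6625-way softmax over 40 time steps: the roughly 6623 characters in ppocr_keys_v1.txt plus a space character and the CTC blank. FastDeploy's postprocessor turns this tensor into the "rec text" / "rec score" pairs shown above; the standard approach (and roughly what PP-OCR's CTC label decoder does) is a greedy CTC decode. A rough Python sketch, assuming probs is the output reshaped to (40, 6625) and that index 0 is the CTC blank:

import numpy as np

def ctc_greedy_decode(probs, keys):
    """probs: (T, C) per-step class probabilities; keys: C-1 characters (index 0 = CTC blank)."""
    ids = probs.argmax(axis=1)                # best class at each time step
    chars, scores, prev = [], [], -1
    for t, i in enumerate(ids):
        if i != 0 and i != prev:              # drop blanks and collapse repeated labels
            chars.append(keys[i - 1])
            scores.append(float(probs[t, i]))
        prev = i
    return "".join(chars), float(np.mean(scores)) if scores else 0.0

# keys = characters from ppocr_keys_v1.txt plus the space character
keys = [line.rstrip("\n") for line in open("ppocr_keys_v1.txt", encoding="utf-8")] + [" "]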

[1] Official links:

[FastDeploy RKNPU2 build guide](https://github.com/PaddlePaddle/FastDeploy/blob/develop/docs/cn/build_and_install/rknpu2.md)

[PP-OCRv3 RKNPU2 C++ deployment example](https://github.com/PaddlePaddle/FastDeploy/tree/develop/examples/vision/ocr/PP-OCR/rockchip/cpp)

[2] Fzq1021, "Notes on running Paddle's OCR model on an RK3588 board"
