Convert and Call Your Own Model Through the NPU

This page takes yolov3 as an example to demonstrate how to convert your own model, adapt it to our demo, and run it on a VIM3.

Note

Please read the reference documents carefully before starting the conversion.

Prepare

  1. Train your own yolov3 model. For the training method and process, refer to the official Darknet Yolo Page; here we use the officially trained weights based on the COCO data set (a download sketch follows this list).

  2. Prepare the SDK, app repository, and demo repository

Please refer to the SDK, app, and demo documents respectively for how to obtain the corresponding code:

  1. NPU SDK Usage
  2. Application source code compilation instructions
  3. NPU Prebuilt Demo Usage
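
For reference, the official COCO-trained yolov3 files can be fetched roughly as follows (a sketch; the URLs are those published by the Darknet project at the time of writing):

# network definition from the Darknet repository
$ wget https://raw.githubusercontent.com/pjreddie/darknet/master/cfg/yolov3.cfg
# officially released COCO-trained weights
$ wget https://pjreddie.com/media/files/yolov3.weights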

Conversion

The conversion is performed under the SDK.

$ cd {workspace}/SDK/acuity-toolkit/conversion_scripts

Modify 0_import_model.sh

  1. Modify NAME

NAME=mobilenet_tf --> NAME=yolov3

  2. Comment out the TensorFlow conversion

$convert_tf \
--tf-pb ./model/mobilenet_v1.pb \
--inputs input \
--input-size-list '224,224,3' \
--outputs MobilenetV1/Logits/SpatialSqueeze \
--net-output ${NAME}.json \
--data-output ${NAME}.data

Modify to,

#$convert_tf \
# --tf-pb ./model/mobilenet_v1.pb \
# --inputs input \
# --input-size-list '224,224,3' \
# --outputs MobilenetV1/Logits/SpatialSqueeze \
# --net-output ${NAME}.json \
# --data-output ${NAME}.data
  3. Uncomment the Darknet conversion

#$convert_darknet \
# --net-input xxx.cfg \
# --weight-input xxx.weights \
# --net-output ${NAME}.json \
# --data-output ${NAME}.data

Modify to,

$convert_darknet \
--net-input path/to/yolov3.cfg \
--weight-input path/to/yolov3.weights \
--net-output ${NAME}.json \
--data-output ${NAME}.data

Modify 1_quantize_model.sh

  1. Modify NAME

NAME=mobilenet_tf --> NAME=yolov3

  2. Modify the normalization parameters (see the note after this list)

--channel-mean-value '128 128 128 128' \

Modify to,

--channel-mean-value '0 0 0 256' \
  3. Modify validation_tf.txt

Replace the image inside

$ cat ./data/validation_tf.txt
./space_shuttle_224.jpg, 813

Modify to,

path/to/416x416.jpg

The image resolution here must match the network input size configured in your yolo cfg file (416x416 in this example).
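
If your sample image is not already at the network input size, you can resize it first, for example with ImageMagick (a sketch, assuming the tool is installed and a 416x416 input size):

$ convert path/to/sample.jpg -resize '416x416!' ./data/416x416.jpg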

  4. Modify the quantization type

--quantized-dtype asymmetric_affine-u8 \

Modify to,

--quantized-dtype dynamic_fixed_point-i8 \
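
A note on the --channel-mean-value change above: assuming the usual Acuity convention of three per-channel means followed by one scale, each input pixel is normalized as (pixel - mean) / scale, so the two settings behave roughly like this:

'128 128 128 128'  ->  (x - 128) / 128   # maps 0..255 to about -1..1 (MobileNet-style preprocessing)
'0 0 0 256'        ->  (x - 0) / 256     # maps 0..255 into [0, 1), approximating Darknet's 0-1 input scaling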

Modify 2_export_case_code.sh

  1. Modify NAME

NAME=mobilenet_tf --> NAME=yolov3

  2. Modify the normalization parameters (same values as in 1_quantize_model.sh)

--channel-mean-value '128 128 128 128' \

Modify to,

--channel-mean-value '0 0 0 256' \

  3. Modify the RGB channel order

The default channel order is RGB,
--reorder-channel '0 1 2' \

Modify it to BGR,

--reorder-channel '2 1 0' \
  4. Specify the board model

VIM3

--optimize VIPNANOQI_PID0X88  \

VIM3L

--optimize VIPNANOQI_PID0X99  \

Compile And Get The Case Code

  1. Compile

$ bash 0_import_model.sh && bash 1_quantize_model.sh  && bash 2_export_case_code.sh
  2. Case code

The converted code is in the nbg_unify_yolov3 directory

$ ls {workspace}/SDK/acuity-toolkit/conversion_scripts/nbg_unify_yolov3
BUILD main.c makefile.linux nbg_meta.json vnn_global.h vnn_post_process.c vnn_post_process.h vnn_pre_process.c vnn_pre_process.h vnn_yolov3.c vnn_yolov3.h yolov3.nb yolov3.vcxproj

Compile

This part is carried out in the aml_npu_app repository. Enter the detect_yolo_v3 directory,

$ cd {workspace}/aml_npu_app/detect_library/model_code/detect_yolo_v3
$ ls
build_vx.sh include Makefile makefile.linux nn_data vnn_yolov3.c yolo_v3.c yolov3_process.c

Replace VNN File

  1. Replace vnn_yolov3.h, vnn_post_process.h, and vnn_pre_process.h with the versions generated by the SDK

$ cp {workspace}/SDK/acuity-toolkit/conversion_scripts/nbg_unify_yolov3/vnn_yolov3.h {workspace}/aml_npu_app/detect_library/model_code/detect_yolo_v3/include/vnn_yolov3.h
$ cp {workspace}/SDK/acuity-toolkit/conversion_scripts/nbg_unify_yolov3/vnn_post_process.h {workspace}/aml_npu_app/detect_library/model_code/detect_yolo_v3/include/vnn_post_process.h
$ cp {workspace}/SDK/acuity-toolkit/conversion_scripts/nbg_unify_yolov3/vnn_pre_process.h {workspace}/aml_npu_app/detect_library/model_code/detect_yolo_v3/include/vnn_pre_process.h
  2. Replace vnn_yolov3.c with the version generated by the SDK

$ cp {workspace}/SDK/acuity-toolkit/conversion_scripts/nbg_unify_yolov3/vnn_yolov3.c {workspace}/aml_npu_app/detect_library/model_code/detect_yolo_v3/vnn_yolov3.c

Modify yolov3_process.c

  1. Modify the class array

static char *coco_names[] = {"person","bicycle","car","motorbike","aeroplane","bus","train","truck","boat","traffic light","fire hydrant","stop sign","parking meter","bench","bird","cat","dog","horse","sheep","cow","elephant","bear","zebra","giraffe","backpack","umbrella","handbag","tie","suitcase","frisbee","skis","snowboard","sports ball","kite","baseball bat","baseball glove","skateboard","surfboard","tennis racket","bottle","wine glass","cup","fork","knife","spoon","bowl","banana","apple","sandwich","orange","broccoli","carrot","hot dog","pizza","donut","cake","chair","sofa","pottedplant","bed","diningtable","toilet","tvmonitor","laptop","mouse","remote","keyboard","cell phone","microwave","oven","toaster","sink","refrigerator","book","clock","vase","scissors","teddy bear","hair drier","toothbrush"};

Set this according to your training data set; if you trained on the COCO data set, there is no need to modify it.

  2. Modify yolo_v3_post_process_onescale

Modify num_class

int num_class = 80;

num_class here must match the number of classes in your training set.

  3. Modify the post-processing function yolov3_postprocess

Modify num_class and size[3]

int num_class = 80;
int size[3]={nn_width/32, nn_height/32,85*3};

num_class here must match the number of classes in your training set.
size[2] here must equal (num_class + 5) * 3; a worked example follows this list.
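
As an illustration of the three edits above, here is a hypothetical example for a custom 2-class model trained at 416x416 input (the class names and counts are made up; substitute your own):

/* yolov3_process.c -- hypothetical 2-class example */
static char *coco_names[] = {"cat", "dog"};      /* labels in the order used during training */

/* in yolo_v3_post_process_onescale() and yolov3_postprocess() */
int num_class = 2;                               /* number of classes in the training set */

/* in yolov3_postprocess(): each grid cell predicts 3 boxes of (num_class + 5) values,
 * so size[2] = (2 + 5) * 3 = 21; with a 416x416 input, nn_width/32 == 13 */
int size[3] = {nn_width/32, nn_height/32, (2 + 5) * 3};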

Compile

Use the build_vx.sh script to compile the yolov3 library,

$ cd {workspace}/aml_npu_app/detect_library/model_code/detect_yolo_v3
$ ./build_vx.sh

The generated library is in the bin_r directory

$ ls {workspace}/aml_npu_app/detect_library/model_code/detect_yolo_v3/bin_r
libnn_yolo_v3.so vnn_yolov3.o yolo_v3.o yolov3_process.o

Run

Replace

  1. Replace the yolov3 library

$ cp {workspace}/aml_npu_app/detect_library/model_code/detect_yolo_v3/bin_r/libnn_yolo_v3.so {workspace}/aml_npu_demo_binaries/detect_demo_picture/lib/libnn_yolo_v3.so
  2. Replace the nb file (the _88/_99 suffix matches the --optimize PID chosen earlier)

VIM3

$ cp {workspace}/SDK/acuity-toolkit/conversion_scripts/nbg_unify_yolov3/yolov3.nb {workspace}/aml_npu_demo_binaries/detect_demo_picture/nn_data/yolov3_88.nb

VIM3L

$ cp {workspace}/SDK/acuity-toolkit/conversion_scripts/nbg_unify_yolov3/yolov3.nb {workspace}/aml_npu_demo_binaries/detect_demo_picture/nn_data/yolov3_99.nb

Run with board

For how to run the updated aml_npu_demo_binaries on the board, please refer to:

NPU Prebuilt Demo Usage