YOLO11 Introduction
YOLO11 is the next-generation YOLO model in the Ultralytics family for detection, segmentation, and pose tasks. It continues the YOLO series' design philosophy of "single-stage, end-to-end, real-time" and improves the accuracy-speed tradeoff through an improved network architecture, feature fusion, and training/inference strategies. It is offered in multiple scales (n/s/m/l/x) to suit deployment needs from edge devices to servers.
💡 Tip
In YOLO-related model naming, letters like n / s / m / l / x typically indicate model size, meaning different network width/depth configurations that trade off parameters, computation, and accuracy/speed.
Common meanings:
- n = nano: Smallest, fastest, relatively lower accuracy, suitable for edge/low compute
- s = small: Small
- m = medium: Medium
- l = large: Large
- x = xlarge / extra-large: Largest, slowest, typically highest accuracy
Objective
We will deploy the model to the LCSC-TaishanPi-3M-RK3576 board and demonstrate it using the official Demo from rknn_model_zoo.
Environment Preparation
- Host Environment: Ubuntu22.04 (x86)
- Development Board: LCSC-TaishanPi-3M-RK3576
- Data Cable: Connect PC and development board for ADB file transfer.
Install miniforge3
To avoid Python environment conflicts between projects on a single host, we use miniforge3 to manage environments.
Install miniforge3:
```bash
# Download the miniforge3 installation script
wget -c https://mirrors.bfsu.edu.cn/github-release/conda-forge/miniforge/LatestRelease/Miniforge3-Linux-x86_64.sh
# Run the installation script
bash Miniforge3-Linux-x86_64.sh
# 1. Press Enter to continue
# 2. Use the down arrow to scroll through the agreement
# 3. Enter yes at the end
# 4. When prompted "Proceed with initialization?", enter yes
```
You can check https://mirrors.bfsu.edu.cn/github-release/conda-forge/miniforge/LatestRelease/ to find the latest `.sh` filename.
Initialize the conda environment variable:

```bash
source ~/miniforge3/bin/activate
```

After success, `(base)` will appear at the beginning of the command line.
Create rknn-toolkit2 Environment
Create and activate a Conda environment: YOLO11-RKNN-Toolkit2 (Python 3.10 is recommended). It will be needed later when converting the ONNX model to an RKNN model.
```bash
# Create the environment
conda create -n YOLO11-RKNN-Toolkit2 python=3.10
# When prompted "Proceed ([y]/n)?", enter y
```
Activate the Conda environment:
```bash
conda activate YOLO11-RKNN-Toolkit2
# After activation, (YOLO11-RKNN-Toolkit2) will appear at the beginning of the command line
```
Install dependencies:
```bash
# Install rknn-toolkit2
pip install rknn-toolkit2 -i https://mirrors.aliyun.com/pypi/simple
# Install onnx==1.18.0
pip install onnx==1.18.0 -i https://mirrors.aliyun.com/pypi/simple
```
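To confirm the toolkit installed correctly, you can run a minimal import check (an illustrative sketch; `rknn.api.RKNN` is the toolkit's main entry point):

```python
# Quick sanity check that rknn-toolkit2 is importable.
# Run inside the YOLO11-RKNN-Toolkit2 environment.
from rknn.api import RKNN  # main conversion API of rknn-toolkit2

rknn = RKNN(verbose=False)  # construct a converter instance
print("rknn-toolkit2 imported OK")
rknn.release()              # free native resources
```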
After installation, exit the YOLO11-RKNN-Toolkit2 environment:

```bash
conda deactivate
```

Create yolo11 Environment
Create and activate a Conda environment: Tspi3-YOLO11 (Python 3.10 is recommended)
```bash
# Create the environment
conda create -n Tspi3-YOLO11 python=3.10
# When prompted "Proceed ([y]/n)?", enter y
```
Activate the Conda environment:
```bash
conda activate Tspi3-YOLO11
# After activation, (Tspi3-YOLO11) will appear at the beginning of the command line
```
Install dependencies for YOLO11:
```bash
pip install ultralytics onnx onnxscript -i https://mirrors.aliyun.com/pypi/simple
```

Test:

```bash
(Tspi3-YOLO11) lipeng@host:~/workspace$ yolo -v
8.3.248
```
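Optionally, you can run a quick host-side inference to confirm the environment works end to end (a minimal sketch; the weights and sample image are downloaded on first run):

```python
from ultralytics import YOLO

# Host-side smoke test in the Tspi3-YOLO11 environment (illustrative).
model = YOLO("yolo11n-pose.pt")  # weights are downloaded on first use
results = model.predict("https://ultralytics.com/images/bus.jpg")
for r in results:
    # For pose models, each result carries per-person keypoints.
    print("detections:", len(r.boxes), "keypoints tensor:", r.keypoints.shape)
```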
Model Conversion
Next, we need to execute three important steps:
- Pull the .pt file.
- Use the Rockchip-optimized yolo11 project to export the ONNX model.
- Use rknn-toolkit2 to convert the ONNX model to a hardware-accelerated RKNN model.
Pull .pt File
The .pt file contains the trained YOLO11 model weights (parameters). Without it, the YOLO11 code is just an empty shell and cannot perform detection.
At https://github.com/ultralytics/assets/releases/, Ultralytics provides official .pt weight files. We just need to download what we need:
```bash
wget https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11n-pose.pt
```

Export ONNX Model
Next, we need to pull the Rockchip officially modified ultralytics_yolo11 project, which has been specifically adapted for RKNPU:
- The output structure is modified and post-processing is removed from the model (the post-processed results are not quantization-friendly).
- The DFL structure performs poorly on the NPU, so it is moved out of the model into the post-processing stage; in most cases this improves inference performance (see the sketch after the link below).
- A summed confidence score is added to the model's output branches to accelerate threshold filtering during post-processing.
Details: https://github.com/airockchip/ultralytics_yolo11/blob/main/RKOPT_README.zh-CN.md
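To make the DFL change concrete, here is a minimal numpy sketch of the decoding step that now runs outside the model; it mirrors the softmax-plus-weighted-sum loop in the demo's C++ post-processing (names and shapes are illustrative):

```python
import numpy as np

def dfl_decode(loc: np.ndarray, bins: int = 16) -> np.ndarray:
    """Decode one anchor's DFL output: 4 sides x 16 bins -> 4 distances.

    loc: shape (64,), raw values for [left, top, right, bottom] x 16 bins.
    Returns the expected distance (in grid cells) for each side.
    """
    logits = loc.reshape(4, bins)
    # Softmax over the 16 bins of each side (numerically stabilized)
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs = e / e.sum(axis=1, keepdims=True)
    # Expected value over bin indices 0..15 gives the regressed distance
    return probs @ np.arange(bins, dtype=np.float32)
```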
Continue using the Tspi3-YOLO11 environment:

```bash
conda activate Tspi3-YOLO11
```

Pull the airockchip/ultralytics_yolo11 project:

```bash
git clone https://github.com/airockchip/ultralytics_yolo11.git
```

After pulling, navigate to the directory:

```bash
cd ultralytics_yolo11
```

Set the model entry in the ultralytics_yolo11/ultralytics/cfg/default.yaml file to the absolute path of the .pt file you just downloaded:
Fill in the path according to your own `.pt` file location.
```diff
 # Train settings -------------------------------------------------------------------------------------------------------
-model: yolo11n.pt # (str, optional) path to model file, i.e. yolo11n.pt, yolo11n.yaml
+model: /home/lipeng/workspace/yolo11/yolo11n-pose.pt # (str, optional) path to model file, i.e. yolo11n.pt, yolo11n.yaml
 data: # (str, optional) path to data file, i.e. coco8.yaml
 epochs: 100 # (int) number of epochs to train for
 time: # (float, optional) number of hours to train for, overrides epochs if supplied
```
Modify the ultralytics_yolo11/ultralytics/engine/exporter.py file to add the dynamo=False parameter, forcing the use of the legacy TorchScript-based ONNX exporter to prevent errors when exporting pose ONNX models with newer versions of PyTorch:
```diff
--- a/ultralytics/engine/exporter.py
+++ b/ultralytics/engine/exporter.py
@@ -413,6 +413,7 @@ class Exporter:
     f,
     verbose=False,
     opset_version=12,
+    dynamo=False,  # Use legacy TorchScript-based exporter for compatibility
     do_constant_folding=True,  # WARNING: DNN inference with torch>=1.12 may require do_constant_folding=False
     input_names=['images'])
```
Add the current directory to PYTHONPATH:

```bash
export PYTHONPATH=./
```

Use the script to start exporting the ONNX model:

```bash
python ./ultralytics/engine/exporter.py
```

ONNX to RKNN
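Before converting, you can optionally verify that the exported ONNX file loads and inspect its output branches; both Conda environments have the onnx package installed. This is a minimal sketch, and the file path is assumed from the steps above:

```python
import onnx

# Path assumed from the export step above; adjust to your own location.
model = onnx.load("/home/lipeng/workspace/yolo11/yolo11n-pose.onnx")
onnx.checker.check_model(model)  # structural validity check

# The RKNPU-adapted export exposes multiple raw output branches
# (per-scale boxes/scores plus keypoints) instead of one decoded tensor.
for out in model.graph.output:
    dims = [d.dim_value for d in out.type.tensor_type.shape.dim]
    print(out.name, dims)
```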
Exit the Tspi3-YOLO11 environment:

```bash
conda deactivate
```

Enter the YOLO11-RKNN-Toolkit2 environment:

```bash
conda activate YOLO11-RKNN-Toolkit2
```

Next, we will use the conversion script from rknn_model_zoo to convert the ONNX model to an RKNN model. Pull the project:

```bash
git clone https://github.com/airockchip/rknn_model_zoo.git
```

Navigate to the rknn_model_zoo/examples/yolov8_pose/python directory:
Special Note
Since YOLO11 does not have a dedicated pose example, we directly use the YOLOv8 example.
```bash
cd rknn_model_zoo/examples/yolov8_pose/python
```

Modify the rknn_model_zoo/examples/yolov8_pose/python/convert.py script to replace hybrid quantization with standard quantization: our exported ONNX model has a different layer structure, so the layer names hardcoded in the example's hybrid-quantization list no longer match.
```diff
--- a/examples/yolov8_pose/python/convert.py
+++ b/examples/yolov8_pose/python/convert.py
@@ -59,23 +59,19 @@ if __name__ == '__main__':
     if platform in ["rv1109","rv1126","rk1808"] :
         ret = rknn.build(do_quantization=do_quant, dataset=DATASET_PATH, auto_hybrid_quant=True)
     else:
-        if do_quant:
-            rknn.hybrid_quantization_step1(
-                dataset=DATASET_PATH,
-                proposal= False,
-                custom_hybrid=[['/model.22/cv4.0/cv4.0.0/act/Mul_output_0','/model.22/Concat_6_output_0'],
-                               ['/model.22/cv4.1/cv4.1.0/act/Mul_output_0','/model.22/Concat_6_output_0'],
-                               ['/model.22/cv4.2/cv4.2.0/act/Mul_output_0','/model.22/Concat_6_output_0']]
-            )
-
-            model_name=os.path.basename(model_path).replace('.onnx','')
-            rknn.hybrid_quantization_step2(
-                model_input = model_name+".model",  # Model file generated in step 1
-                data_input= model_name+".data",  # Config file generated in step 1
-                model_quantization_cfg=model_name+".quantization.cfg"  # Quantization config file generated in step 1
-            )
-        else:
-            ret = rknn.build(do_quantization=do_quant, dataset=DATASET_PATH)
+        # Standard quantization (default, fastest)
+        ret = rknn.build(do_quantization=do_quant, dataset=DATASET_PATH)
+
+        # Uncomment below to enable AUTO hybrid quantization (automatically finds sensitive layers):
+        # if do_quant:
+        #     rknn.hybrid_quantization_step1(dataset=DATASET_PATH, proposal=True)
+        #     model_name = os.path.basename(model_path).replace('.onnx','')
+        #     # Check the generated .quantization.cfg, adjust if needed, then run:
+        #     rknn.hybrid_quantization_step2(
+        #         model_input=model_name+".model",
+        #         data_input=model_name+".data",
+        #         model_quantization_cfg=model_name+".quantization.cfg"
+        #     )
     if ret != 0:
         print('Build model failed!')
         exit(ret)
```
Run the rknn_model_zoo/examples/yolov8_pose/python/convert.py script to convert the ONNX model to an RKNN model:

```bash
# Syntax: python3 convert.py onnx_model_path [platform] [dtype] [output_rknn_path]
## platform: [rk3562, rk3566, rk3568, rk3576, rk3588, rv1126b, rv1109, rv1126, rk1808]
## dtype: [i8, fp] for [rk3562, rk3566, rk3568, rk3576, rk3588, rv1126b]
## dtype: [u8, fp] for [rv1109, rv1126, rk1808]
python convert.py /home/lipeng/workspace/yolo11/yolo11n-pose.onnx rk3576 i8
```
`platform` options include rk3562, rk3566, rk3568, rk3576, rk3588, rv1126b, rv1109, rv1126, and rk1808.

`dtype`:

- Select `i8` or `fp` for platforms rk3562, rk3566, rk3568, rk3576, rk3588, rv1126b.
- Select `u8` or `fp` for platforms rv1109, rv1126, rk1808.
After successful execution, a .rknn model file will be generated in the rknn_model_zoo/examples/yolov8_pose/model directory.
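For reference, the core of convert.py boils down to a few rknn-toolkit2 calls. The following is a simplified sketch, assuming an RGB 0-255 input like the yolov8_pose example; check the script itself for the authoritative settings and paths:

```python
from rknn.api import RKNN

# Simplified sketch of the conversion pipeline inside convert.py.
rknn = RKNN(verbose=False)

# Preprocessing: feed 0-255 RGB images, let the NPU normalize to 0-1.
# mean/std values here are illustrative.
rknn.config(mean_values=[[0, 0, 0]], std_values=[[255, 255, 255]],
            target_platform="rk3576")

rknn.load_onnx(model="/home/lipeng/workspace/yolo11/yolo11n-pose.onnx")

# i8: quantize against a calibration image list; fp: do_quantization=False.
rknn.build(do_quantization=True, dataset="../model/dataset.txt")

rknn.export_rknn("./yolo11n-pose.rknn")
rknn.release()
```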
Demo Compilation
Overview
The official Rockchip open-source project ships its demos in C++. You can compile the sample code directly with one of the following scripts (after replacing the cross-compiler paths inside them with your actual paths):

```bash
rknn_model_zoo/build-linux.sh
rknn_model_zoo/build-android.sh
```

After compilation, an install/<target>_<os>_<arch> folder (for example, rk3576_linux_aarch64) is generated, containing the compiled demo binary together with its model and lib folders.
Exit Environment
```bash
conda deactivate
```

When `(base)` appears at the beginning of the command line, you are done.
Install Cross-Compiler
We compile the Demo on the PC but run it on the LCSC-TaishanPi-3M-RK3576 board, so a cross-compiler is required. Install the aarch64-linux-gnu toolchain with apt:

```bash
sudo apt update && \
sudo apt install -y cmake make gcc-aarch64-linux-gnu g++-aarch64-linux-gnu
```
Modify Demo Source Code
In the original YOLOv8 pose Demo, the RKNPU2 code path reads keypoint data as float16. With our i8-quantized model, however, the keypoint output layer is INT8, so it must be read with INT8 dequantization instead. The patch below also replaces the hardcoded 8400-anchor stride with a total_anchors count accumulated from the three output scales.
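The keypoint layout and the affine dequantization can be summarized in Python (an illustrative sketch mirroring the C++ change below, not part of the demo):

```python
import numpy as np

def dequant_keypoints(kpt_buf: np.ndarray, zp: int, scale: float,
                      total_anchors: int, anchor_idx: int) -> np.ndarray:
    """Read 17 (x, y, conf) keypoints for one detection from the INT8
    keypoint output, laid out as [1, 17*3, total_anchors] in memory.

    kpt_buf: flat int8 array of length 17 * 3 * total_anchors.
    """
    kpts = np.zeros((17, 3), dtype=np.float32)
    for j in range(17):
        for k in range(3):  # 0: x, 1: y, 2: confidence
            q = kpt_buf[(j * 3 + k) * total_anchors + anchor_idx]
            kpts[j, k] = (float(q) - zp) * scale  # affine dequantization
    return kpts
```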
Modify the rknn_model_zoo/examples/yolov8_pose/cpp/postprocess.cc file:
```diff
diff --git a/examples/yolov8_pose/cpp/postprocess.cc b/examples/yolov8_pose/cpp/postprocess.cc
index 8d11271..d45cc8f 100644
--- a/examples/yolov8_pose/cpp/postprocess.cc
+++ b/examples/yolov8_pose/cpp/postprocess.cc
@@ -470,30 +470,35 @@ int post_process(rknn_app_context_t *app_ctx, void *outputs, letterbox_t *letter
float h = filterBoxes[n * 5 + 3];
int keypoints_index = (int)filterBoxes[n * 5 + 4];
+ // Calculate total anchor points dynamically from output tensor
+ int total_anchors = index; // Total anchor points from all scales
+
for (int j = 0; j < 17; ++j) {
if (app_ctx->is_quant) {
#ifdef RKNPU1
- od_results->results[last_count].keypoints[j][0] = (deqnt_affine_u8_to_f32(((uint8_t *)_outputs[3].buf)[j * 3 * 8400 + 0 * 8400 + keypoints_index],
+ od_results->results[last_count].keypoints[j][0] = (deqnt_affine_u8_to_f32(((uint8_t *)_outputs[3].buf)[(j * 3 + 0) * total_anchors + keypoints_index],
app_ctx->output_attrs[3].zp, app_ctx->output_attrs[3].scale)- letter_box->x_pad)/ letter_box->scale;
- od_results->results[last_count].keypoints[j][1] = (deqnt_affine_u8_to_f32(((uint8_t *)_outputs[3].buf)[j * 3 * 8400 + 1 * 8400 + keypoints_index],
+ od_results->results[last_count].keypoints[j][1] = (deqnt_affine_u8_to_f32(((uint8_t *)_outputs[3].buf)[(j * 3 + 1) * total_anchors + keypoints_index],
app_ctx->output_attrs[3].zp, app_ctx->output_attrs[3].scale)- letter_box->y_pad)/ letter_box->scale;
- od_results->results[last_count].keypoints[j][2] = deqnt_affine_u8_to_f32(((uint8_t *)_outputs[3].buf)[j * 3 * 8400 + 2 * 8400 + keypoints_index],
+ od_results->results[last_count].keypoints[j][2] = deqnt_affine_u8_to_f32(((uint8_t *)_outputs[3].buf)[(j * 3 + 2) * total_anchors + keypoints_index],
app_ctx->output_attrs[3].zp, app_ctx->output_attrs[3].scale);
#else
- od_results->results[last_count].keypoints[j][0] = ((float)((rknpu2::float16 *)_outputs[3].buf)[j*3*8400+0*8400+keypoints_index]
- - letter_box->x_pad)/ letter_box->scale;
- od_results->results[last_count].keypoints[j][1] = ((float)((rknpu2::float16 *)_outputs[3].buf)[j*3*8400+1*8400+keypoints_index]
- - letter_box->y_pad)/ letter_box->scale;
- od_results->results[last_count].keypoints[j][2] = (float)((rknpu2::float16 *)_outputs[3].buf)[j*3*8400+2*8400+keypoints_index];
+ // RKNPU2: INT8 quantized output, use deqnt_affine_to_f32
+ od_results->results[last_count].keypoints[j][0] = (deqnt_affine_to_f32(((int8_t *)_outputs[3].buf)[(j*3+0)*total_anchors+keypoints_index],
+ app_ctx->output_attrs[3].zp, app_ctx->output_attrs[3].scale) - letter_box->x_pad) / letter_box->scale;
+ od_results->results[last_count].keypoints[j][1] = (deqnt_affine_to_f32(((int8_t *)_outputs[3].buf)[(j*3+1)*total_anchors+keypoints_index],
+ app_ctx->output_attrs[3].zp, app_ctx->output_attrs[3].scale) - letter_box->y_pad) / letter_box->scale;
+ od_results->results[last_count].keypoints[j][2] = deqnt_affine_to_f32(((int8_t *)_outputs[3].buf)[(j*3+2)*total_anchors+keypoints_index],
+ app_ctx->output_attrs[3].zp, app_ctx->output_attrs[3].scale);
#endif
}
else
{
- od_results->results[last_count].keypoints[j][0] = (((float *)_outputs[3].buf)[j*3*8400+0*8400+keypoints_index]
+ od_results->results[last_count].keypoints[j][0] = (((float *)_outputs[3].buf)[(j*3+0)*total_anchors+keypoints_index]
- letter_box->x_pad)/ letter_box->scale;
- od_results->results[last_count].keypoints[j][1] = (((float *)_outputs[3].buf)[j*3*8400+1*8400+keypoints_index]
+ od_results->results[last_count].keypoints[j][1] = (((float *)_outputs[3].buf)[(j*3+1)*total_anchors+keypoints_index]
- letter_box->y_pad)/ letter_box->scale;
- od_results->results[last_count].keypoints[j][2] = ((float *)_outputs[3].buf)[j*3*8400+2*8400+keypoints_index];
+ od_results->results[last_count].keypoints[j][2] = ((float *)_outputs[3].buf)[(j*3+2)*total_anchors+keypoints_index];
}
}
```

For reference, the complete modified postprocess.cc follows:

```cpp
// Copyright (c) 2024 by Rockchip Electronics Co., Ltd. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#include "yolov8-pose.h"
#include <math.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/time.h>
#ifndef RKNPU1
#include <Float16.h>
#endif
#include <iostream>
#include <cmath>
#include <algorithm>
#include <set>
#include <vector>
#define LABEL_NALE_TXT_PATH "./model/yolov8_pose_labels_list.txt"
static char *labels[OBJ_CLASS_NUM];
inline static int clamp(float val, int min, int max) { return val > min ? (val < max ? val : max) : min; }
static char *readLine(FILE *fp, char *buffer, int *len) {
int ch;
int i = 0;
size_t buff_len = 0;
buffer = (char *)malloc(buff_len + 1);
if (!buffer)
return NULL; // Out of memory
while ((ch = fgetc(fp)) != '\n' && ch != EOF) {
buff_len++;
void *tmp = realloc(buffer, buff_len + 1);
if (tmp == NULL) {
free(buffer);
return NULL; // Out of memory
}
buffer = (char *)tmp;
buffer[i] = (char)ch;
i++;
}
buffer[i] = '\0';
*len = buff_len;
// Detect end
if (ch == EOF && (i == 0 || ferror(fp))) {
free(buffer);
return NULL;
}
return buffer;
}
static int readLines(const char *fileName, char *lines[], int max_line) {
FILE *file = fopen(fileName, "r");
char *s;
int i = 0;
int n = 0;
if (file == NULL) {
printf("Open %s fail!\n", fileName);
return -1;
}
while ((s = readLine(file, s, &n)) != NULL) {
lines[i++] = s;
if (i >= max_line)
break;
}
fclose(file);
return i;
}
static int loadLabelName(const char *locationFilename, char *label[]) {
printf("load lable %s\n", locationFilename);
readLines(locationFilename, label, OBJ_CLASS_NUM);
return 0;
}
static float CalculateOverlap(float xmin0, float ymin0, float xmax0, float ymax0, float xmin1, float ymin1, float xmax1,
float ymax1)
{
float w = fmax(0.f, fmin(xmax0, xmax1) - fmax(xmin0, xmin1) + 1.0);
float h = fmax(0.f, fmin(ymax0, ymax1) - fmax(ymin0, ymin1) + 1.0);
float i = w * h;
float u = (xmax0 - xmin0 + 1.0) * (ymax0 - ymin0 + 1.0) + (xmax1 - xmin1 + 1.0) * (ymax1 - ymin1 + 1.0) - i;
return u <= 0.f ? 0.f : (i / u);
}
static int nms(int validCount, std::vector<float> &outputLocations, std::vector<int> classIds, std::vector<int> &order,
int filterId, float threshold)
{
for (int i = 0; i < validCount; ++i)
{
int n = order[i];
if (n == -1 || classIds[n] != filterId)
{
continue;
}
for (int j = i + 1; j < validCount; ++j)
{
int m = order[j];
if (m == -1 || classIds[m] != filterId)
{
continue;
}
float xmin0 = outputLocations[n * 5 + 0];
float ymin0 = outputLocations[n * 5 + 1];
float xmax0 = outputLocations[n * 5 + 0] + outputLocations[n * 5 + 2];
float ymax0 = outputLocations[n * 5 + 1] + outputLocations[n * 5 + 3];
float xmin1 = outputLocations[m * 5 + 0];
float ymin1 = outputLocations[m * 5 + 1];
float xmax1 = outputLocations[m * 5 + 0] + outputLocations[m * 5 + 2];
float ymax1 = outputLocations[m * 5 + 1] + outputLocations[m * 5 + 3];
float iou = CalculateOverlap(xmin0, ymin0, xmax0, ymax0, xmin1, ymin1, xmax1, ymax1);
if (iou > threshold)
{
order[j] = -1;
}
}
}
return 0;
}
static int quick_sort_indice_inverse(std::vector<float> &input, int left, int right, std::vector<int> &indices) {
float key;
int key_index;
int low = left;
int high = right;
if (left < right) {
key_index = indices[left];
key = input[left];
while (low < high) {
while (low < high && input[high] <= key) {
high--;
}
input[low] = input[high];
indices[low] = indices[high];
while (low < high && input[low] >= key) {
low++;
}
input[high] = input[low];
indices[high] = indices[low];
}
input[low] = key;
indices[low] = key_index;
quick_sort_indice_inverse(input, left, low - 1, indices);
quick_sort_indice_inverse(input, low + 1, right, indices);
}
return low;
}
static float sigmoid(float x) {
return 1.0 / (1.0 + expf(-x));
}
static float unsigmoid(float y) {
return -1.0 * logf((1.0 / y) - 1.0);
}
inline static int32_t __clip(float val, float min, float max) {
float f = val <= min ? min : (val >= max ? max : val);
return f;
}
static int8_t qnt_f32_to_affine(float f32, int32_t zp, float scale) {
float dst_val = (f32 / scale) + zp;
int8_t res = (int8_t)__clip(dst_val, -128, 127);
return res;
}
static uint8_t qnt_f32_to_affine_u8(float f32, int32_t zp, float scale) {
float dst_val = (f32 / scale) + zp;
uint8_t res = (uint8_t)__clip(dst_val, 0, 255);
return res;
}
static float deqnt_affine_to_f32(int8_t qnt, int32_t zp, float scale) {
return ((float)qnt - (float)zp) * scale;
}
static float deqnt_affine_u8_to_f32(uint8_t qnt, int32_t zp, float scale) {
return ((float)qnt - (float)zp) * scale;
}
void softmax(float *input, int size) {
float max_val = input[0];
for (int i = 1; i < size; ++i) {
if (input[i] > max_val) {
max_val = input[i];
}
}
float sum_exp = 0.0;
for (int i = 0; i < size; ++i) {
sum_exp += expf(input[i] - max_val);
}
for (int i = 0; i < size; ++i) {
input[i] = expf(input[i] - max_val) / sum_exp;
}
}
static int process_i8(int8_t *input, int grid_h, int grid_w, int stride,
std::vector<float> &boxes, std::vector<float> &boxScores, std::vector<int> &classId, float threshold,
int32_t zp, float scale, int index) {
int input_loc_len = 64;
int tensor_len = input_loc_len + OBJ_CLASS_NUM;
int validCount = 0;
int8_t thres_i8 = qnt_f32_to_affine(unsigmoid(threshold), zp, scale);
for (int h = 0; h < grid_h; h++) {
for (int w = 0; w < grid_w; w++) {
for (int a = 0; a < OBJ_CLASS_NUM; a++) {
if(input[(input_loc_len + a)*grid_w * grid_h + h * grid_w + w ] >= thres_i8) { //[1,tensor_len,grid_h,grid_w]
float box_conf_f32 = sigmoid(deqnt_affine_to_f32(input[(input_loc_len + a) * grid_w * grid_h + h * grid_w + w ],
zp, scale));
float loc[input_loc_len];
for (int i = 0; i < input_loc_len; ++i) {
loc[i] = deqnt_affine_to_f32(input[i * grid_w * grid_h + h * grid_w + w], zp, scale);
}
for (int i = 0; i < input_loc_len / 16; ++i) {
softmax(&loc[i * 16], 16);
}
float xywh_[4] = {0, 0, 0, 0};
float xywh[4] = {0, 0, 0, 0};
for (int dfl = 0; dfl < 16; ++dfl) {
xywh_[0] += loc[dfl] * dfl;
xywh_[1] += loc[1 * 16 + dfl] * dfl;
xywh_[2] += loc[2 * 16 + dfl] * dfl;
xywh_[3] += loc[3 * 16 + dfl] * dfl;
}
xywh_[0]=(w+0.5)-xywh_[0];
xywh_[1]=(h+0.5)-xywh_[1];
xywh_[2]=(w+0.5)+xywh_[2];
xywh_[3]=(h+0.5)+xywh_[3];
xywh[0]=((xywh_[0]+xywh_[2])/2)*stride;
xywh[1]=((xywh_[1]+xywh_[3])/2)*stride;
xywh[2]=(xywh_[2]-xywh_[0])*stride;
xywh[3]=(xywh_[3]-xywh_[1])*stride;
xywh[0]=xywh[0]-xywh[2]/2;
xywh[1]=xywh[1]-xywh[3]/2;
boxes.push_back(xywh[0]);//x
boxes.push_back(xywh[1]);//y
boxes.push_back(xywh[2]);//w
boxes.push_back(xywh[3]);//h
boxes.push_back(float(index + (h * grid_w) + w));//keypoints index
boxScores.push_back(box_conf_f32);
classId.push_back(a);
validCount++;
}
}
}
}
return validCount;
}
static int process_u8(uint8_t *input, int grid_h, int grid_w, int stride,
std::vector<float> &boxes, std::vector<float> &boxScores, std::vector<int> &classId, float threshold,
int32_t zp, float scale, int index) {
int input_loc_len = 64;
int tensor_len = input_loc_len + OBJ_CLASS_NUM;
int validCount = 0;
uint8_t thres_i8 = qnt_f32_to_affine_u8(unsigmoid(threshold), zp, scale);
for (int h = 0; h < grid_h; h++) {
for (int w = 0; w < grid_w; w++) {
for (int a = 0; a < OBJ_CLASS_NUM; a++) {
if(input[(input_loc_len + a)*grid_w * grid_h + h * grid_w + w ] >= thres_i8) { //[1,tensor_len,grid_h,grid_w]
float box_conf_f32 = sigmoid(deqnt_affine_u8_to_f32(input[(input_loc_len + a) * grid_w * grid_h + h * grid_w + w ],
zp, scale));
float loc[input_loc_len];
for (int i = 0; i < input_loc_len; ++i) {
loc[i] = deqnt_affine_u8_to_f32(input[i * grid_w * grid_h + h * grid_w + w], zp, scale);
}
for (int i = 0; i < input_loc_len / 16; ++i) {
softmax(&loc[i * 16], 16);
}
float xywh_[4] = {0, 0, 0, 0};
float xywh[4] = {0, 0, 0, 0};
for (int dfl = 0; dfl < 16; ++dfl) {
xywh_[0] += loc[dfl] * dfl;
xywh_[1] += loc[1 * 16 + dfl] * dfl;
xywh_[2] += loc[2 * 16 + dfl] * dfl;
xywh_[3] += loc[3 * 16 + dfl] * dfl;
}
xywh_[0]=(w+0.5)-xywh_[0];
xywh_[1]=(h+0.5)-xywh_[1];
xywh_[2]=(w+0.5)+xywh_[2];
xywh_[3]=(h+0.5)+xywh_[3];
xywh[0]=((xywh_[0]+xywh_[2])/2)*stride;
xywh[1]=((xywh_[1]+xywh_[3])/2)*stride;
xywh[2]=(xywh_[2]-xywh_[0])*stride;
xywh[3]=(xywh_[3]-xywh_[1])*stride;
xywh[0]=xywh[0]-xywh[2]/2;
xywh[1]=xywh[1]-xywh[3]/2;
boxes.push_back(xywh[0]);//x
boxes.push_back(xywh[1]);//y
boxes.push_back(xywh[2]);//w
boxes.push_back(xywh[3]);//h
boxes.push_back(float(index + (h * grid_w) + w));//keypoints index
boxScores.push_back(box_conf_f32);
classId.push_back(a);
validCount++;
}
}
}
}
return validCount;
}
static int process_fp32(float *input, int grid_h, int grid_w, int stride,
std::vector<float> &boxes, std::vector<float> &boxScores, std::vector<int> &classId, float threshold,
int32_t zp, float scale, int index) {
int input_loc_len = 64;
int tensor_len = input_loc_len + OBJ_CLASS_NUM;
int validCount = 0;
float thres_fp = unsigmoid(threshold);
for (int h = 0; h < grid_h; h++) {
for (int w = 0; w < grid_w; w++) {
for (int a = 0; a < OBJ_CLASS_NUM; a++) {
if(input[(input_loc_len + a)*grid_w * grid_h + h * grid_w + w ] >= thres_fp) { //[1,tensor_len,grid_h,grid_w]
float box_conf_f32 = sigmoid(input[(input_loc_len + a) * grid_w * grid_h + h * grid_w + w ]);
float loc[input_loc_len];
for (int i = 0; i < input_loc_len; ++i) {
loc[i] = input[i * grid_w * grid_h + h * grid_w + w];
}
for (int i = 0; i < input_loc_len / 16; ++i) {
softmax(&loc[i * 16], 16);
}
float xywh_[4] = {0, 0, 0, 0};
float xywh[4] = {0, 0, 0, 0};
for (int dfl = 0; dfl < 16; ++dfl) {
xywh_[0] += loc[dfl] * dfl;
xywh_[1] += loc[1 * 16 + dfl] * dfl;
xywh_[2] += loc[2 * 16 + dfl] * dfl;
xywh_[3] += loc[3 * 16 + dfl] * dfl;
}
xywh_[0]=(w+0.5)-xywh_[0];
xywh_[1]=(h+0.5)-xywh_[1];
xywh_[2]=(w+0.5)+xywh_[2];
xywh_[3]=(h+0.5)+xywh_[3];
xywh[0]=((xywh_[0]+xywh_[2])/2)*stride;
xywh[1]=((xywh_[1]+xywh_[3])/2)*stride;
xywh[2]=(xywh_[2]-xywh_[0])*stride;
xywh[3]=(xywh_[3]-xywh_[1])*stride;
xywh[0]=xywh[0]-xywh[2]/2;
xywh[1]=xywh[1]-xywh[3]/2;
boxes.push_back(xywh[0]);//x
boxes.push_back(xywh[1]);//y
boxes.push_back(xywh[2]);//w
boxes.push_back(xywh[3]);//h
boxes.push_back(float(index + (h * grid_w) + w));//keypoints index
boxScores.push_back(box_conf_f32);
classId.push_back(a);
validCount++;
}
}
}
}
return validCount;
}
int post_process(rknn_app_context_t *app_ctx, void *outputs, letterbox_t *letter_box, float conf_threshold, float nms_threshold,
object_detect_result_list *od_results) {
#if defined(RV1106_1103)
rknn_tensor_mem **_outputs = (rknn_tensor_mem **)outputs;
#else
rknn_output *_outputs = (rknn_output *)outputs;
#endif
std::vector<float> filterBoxes;
std::vector<float> objProbs;
std::vector<int> classId;
int validCount = 0;
int stride = 0;
int grid_h = 0;
int grid_w = 0;
int model_in_w = app_ctx->model_width;
int model_in_h = app_ctx->model_height;
memset(od_results, 0, sizeof(object_detect_result_list));
int index = 0;
#ifdef RKNPU1
for (int i = 0; i < 3; i++) {
grid_h = app_ctx->output_attrs[i].dims[1];
grid_w = app_ctx->output_attrs[i].dims[0];
stride = model_in_h / grid_h;
if (app_ctx->is_quant) {
validCount += process_u8((uint8_t *)_outputs[i].buf, grid_h, grid_w, stride, filterBoxes, objProbs,
classId, conf_threshold, app_ctx->output_attrs[i].zp, app_ctx->output_attrs[i].scale, index);
}
else
{
validCount += process_fp32((float *)_outputs[i].buf, grid_h, grid_w, stride, filterBoxes, objProbs,
classId, conf_threshold, app_ctx->output_attrs[i].zp, app_ctx->output_attrs[i].scale, index);
}
index += grid_h * grid_w;
}
#else
for (int i = 0; i < 3; i++) {
grid_h = app_ctx->output_attrs[i].dims[2];
grid_w = app_ctx->output_attrs[i].dims[3];
stride = model_in_h / grid_h;
if (app_ctx->is_quant) {
validCount += process_i8((int8_t *)_outputs[i].buf, grid_h, grid_w, stride, filterBoxes, objProbs,
classId, conf_threshold, app_ctx->output_attrs[i].zp, app_ctx->output_attrs[i].scale,index);
}
else
{
validCount += process_fp32((float *)_outputs[i].buf, grid_h, grid_w, stride, filterBoxes, objProbs,
classId, conf_threshold, app_ctx->output_attrs[i].zp, app_ctx->output_attrs[i].scale, index);
}
index += grid_h * grid_w;
}
#endif
// no object detect
if (validCount <= 0) {
return 0;
}
std::vector<int> indexArray;
for (int i = 0; i < validCount; ++i) {
indexArray.push_back(i);
}
quick_sort_indice_inverse(objProbs, 0, validCount - 1, indexArray);
std::set<int> class_set(std::begin(classId), std::end(classId));
for (auto c : class_set) {
nms(validCount, filterBoxes, classId, indexArray, c, nms_threshold);
}
int last_count = 0;
od_results->count = 0;
/* box valid detect target */
for (int i = 0; i < validCount; ++i) {
if (indexArray[i] == -1 || last_count >= OBJ_NUMB_MAX_SIZE) {
continue;
}
int n = indexArray[i];
float x1 = filterBoxes[n * 5 + 0] - letter_box->x_pad;
float y1 = filterBoxes[n * 5 + 1] - letter_box->y_pad;
float w = filterBoxes[n * 5 + 2];
float h = filterBoxes[n * 5 + 3];
int keypoints_index = (int)filterBoxes[n * 5 + 4];
// Calculate total anchor points dynamically from output tensor
int total_anchors = index; // Total anchor points from all scales
for (int j = 0; j < 17; ++j) {
if (app_ctx->is_quant) {
#ifdef RKNPU1
od_results->results[last_count].keypoints[j][0] = (deqnt_affine_u8_to_f32(((uint8_t *)_outputs[3].buf)[(j * 3 + 0) * total_anchors + keypoints_index],
app_ctx->output_attrs[3].zp, app_ctx->output_attrs[3].scale)- letter_box->x_pad)/ letter_box->scale;
od_results->results[last_count].keypoints[j][1] = (deqnt_affine_u8_to_f32(((uint8_t *)_outputs[3].buf)[(j * 3 + 1) * total_anchors + keypoints_index],
app_ctx->output_attrs[3].zp, app_ctx->output_attrs[3].scale)- letter_box->y_pad)/ letter_box->scale;
od_results->results[last_count].keypoints[j][2] = deqnt_affine_u8_to_f32(((uint8_t *)_outputs[3].buf)[(j * 3 + 2) * total_anchors + keypoints_index],
app_ctx->output_attrs[3].zp, app_ctx->output_attrs[3].scale);
#else
// RKNPU2: INT8 quantized output, use deqnt_affine_to_f32
od_results->results[last_count].keypoints[j][0] = (deqnt_affine_to_f32(((int8_t *)_outputs[3].buf)[(j*3+0)*total_anchors+keypoints_index],
app_ctx->output_attrs[3].zp, app_ctx->output_attrs[3].scale) - letter_box->x_pad) / letter_box->scale;
od_results->results[last_count].keypoints[j][1] = (deqnt_affine_to_f32(((int8_t *)_outputs[3].buf)[(j*3+1)*total_anchors+keypoints_index],
app_ctx->output_attrs[3].zp, app_ctx->output_attrs[3].scale) - letter_box->y_pad) / letter_box->scale;
od_results->results[last_count].keypoints[j][2] = deqnt_affine_to_f32(((int8_t *)_outputs[3].buf)[(j*3+2)*total_anchors+keypoints_index],
app_ctx->output_attrs[3].zp, app_ctx->output_attrs[3].scale);
#endif
}
else
{
od_results->results[last_count].keypoints[j][0] = (((float *)_outputs[3].buf)[(j*3+0)*total_anchors+keypoints_index]
- letter_box->x_pad)/ letter_box->scale;
od_results->results[last_count].keypoints[j][1] = (((float *)_outputs[3].buf)[(j*3+1)*total_anchors+keypoints_index]
- letter_box->y_pad)/ letter_box->scale;
od_results->results[last_count].keypoints[j][2] = ((float *)_outputs[3].buf)[(j*3+2)*total_anchors+keypoints_index];
}
}
int id = classId[n];
float obj_conf = objProbs[i];
od_results->results[last_count].box.left = (int)(clamp(x1, 0, model_in_w) / letter_box->scale);
od_results->results[last_count].box.top = (int)(clamp(y1, 0, model_in_h) / letter_box->scale);
od_results->results[last_count].box.right = (int)(clamp(x1+w, 0, model_in_w) / letter_box->scale);
od_results->results[last_count].box.bottom = (int)(clamp(y1+h, 0, model_in_h) / letter_box->scale);
// od_results->results[last_count].box.angle = angle;
od_results->results[last_count].prop = obj_conf;
od_results->results[last_count].cls_id = id;
last_count++;
}
od_results->count = last_count;
return 0;
}
int init_post_process() {
int ret = 0;
ret = loadLabelName(LABEL_NALE_TXT_PATH, labels);
if (ret < 0) {
printf("Load %s failed!\n", LABEL_NALE_TXT_PATH);
return -1;
}
return 0;
}
char *coco_cls_to_name(int cls_id) {
if (cls_id >= OBJ_CLASS_NUM) {
return "null";
}
if (labels[cls_id]) {
return labels[cls_id];
}
return "null";
}
void deinit_post_process() {
for (int i = 0; i < OBJ_CLASS_NUM; i++) {
if (labels[i] != nullptr) {
free(labels[i]);
labels[i] = nullptr;
}
}
}
```
Compile
Navigate to the project directory:

```bash
cd rknn_model_zoo/
```

Grant executable permission to build-linux.sh:

```bash
sudo chmod +x ./build-linux.sh
```

Run the build script:
```bash
./build-linux.sh -t <target> -a <arch> -d <build_demo_name> [-b <build_type>] [-m] [-r] [-j]
    -t : target (rk356x/rk3576/rk3588/rv1106/rv1126b/rv1126/rk1808)
    -a : arch (aarch64/armhf)
    -d : demo name
    -b : build_type (Debug/Release)
    -m : enable address sanitizer; build_type must be set to Debug
    -r : disable rga, use the CPU to resize images
    -j : disable libjpeg to avoid conflicts between libjpeg and opencv

# Run the RK3576-related command:
./build-linux.sh -t rk3576 -a aarch64 -d yolov8_pose
```
Note: The `<demo name>` parameter must match the target folder name in rknn_model_zoo/examples, because it is used to select which Demo to compile.
The final generated install/ directory structure is as follows:
```bash
(base) lipeng@host:~/workspace/yolo11/rknn_model_zoo$ tree install
install
`-- rk3576_linux_aarch64
    |-- rknn_yolov8_pose_demo
    |   |-- lib
    |   |   |-- librga.so
    |   |   `-- librknnrt.so
    |   |-- model
    |   |   |-- bus.jpg
    |   |   |-- yolov8_pose.rknn
    |   |   `-- yolov8_pose_labels_list.txt
    |   `-- rknn_yolov8_pose_demo
```
Board Demo Presentation
Transfer Files
Next, we need to transfer the rknn_model_zoo/install/rk3576_linux_aarch64/rknn_yolov8_pose_demo directory to the board.

It is recommended to use the `adb` tool for the transfer; the LCSC-TaishanPi-3M has ADB enabled by default. You can also use a TF card, SSH, or a USB drive. Refer to: https://wiki.lckfb.com/zh-hans/tspi-3-rk3576/system-usage/debian12-usage/adb-usage.html

```bash
adb push yolo11/rknn_model_zoo/install/rk3576_linux_aarch64/rknn_yolov8_pose_demo /home/lckfb/
```

Running on Board
For details, please read: https://github.com/airockchip/rknn_model_zoo/blob/main/examples/yolo11/README.md
We enter the LCSC-TaishanPi-3M development board terminal and navigate to the rknn_yolov8_pose_demo/ directory:
```bash
# Navigate to the directory
cd rknn_yolov8_pose_demo/
```
Set the dynamic library path (located in the ./lib subdirectory under the current directory):
```bash
# Set the dynamic library path (very important; otherwise errors will occur)
export LD_LIBRARY_PATH=./lib
```
Grant executable permission to the demo:
```bash
sudo chmod +x rknn_yolov8_pose_demo
```

Run the Demo:
```bash
# Command format: ./rknn_yolov8_pose_demo <RKNN model path> <input image path>
sudo ./rknn_yolov8_pose_demo model/yolov8_pose.rknn model/bus.jpg
```
An `out.png` image will be generated, containing the final detection results.