YOLO11 Introduction
YOLO11 is the next-generation YOLO model in the Ultralytics family, covering detection, segmentation, pose, and other tasks. It continues the YOLO series' design philosophy of "single-stage, end-to-end, real-time" and improves the accuracy-speed tradeoff through refinements to the network architecture, feature fusion, and training/inference strategies. It is offered in five scales, n/s/m/l/x, to cover deployment needs from edge devices to servers.
💡 Tip
In YOLO-related model naming, the suffix letters n / s / m / l / x indicate model size, i.e. different network width/depth configurations that trade parameters and computation against accuracy and speed.
Common meanings:
- n = nano: smallest and fastest, relatively lower accuracy; suited to edge/low-compute devices
- s = small: slightly larger than nano, with better accuracy at a modest speed cost
- m = medium: mid-range balance of accuracy and speed
- l = large: higher accuracy at a higher compute cost
- x = xlarge / extra-large: largest and slowest, typically highest accuracy
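To get a concrete feel for these scales, the ultralytics Python package can load any of them by name and print a summary; a minimal sketch (the names are the standard release filenames, and YOLO() downloads the weights on first use):
from ultralytics import YOLO

# Compare two YOLO11 scales: nano vs. extra-large.
for name in ("yolo11n.pt", "yolo11x.pt"):
    model = YOLO(name)  # downloads the weights if not cached locally
    model.info()        # prints layer count, parameters, and GFLOPs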
Objective
We will deploy the model to the LCSC-TaishanPi-3M-RK3576 board and demonstrate using the official Demo from rknn_model_zoo.
Environment Preparation
- Host Environment: Ubuntu 22.04 (x86)
- Development Board: LCSC-TaishanPi-3M-RK3576
- Data Cable: Connect PC and development board for ADB file transfer.
Install miniforge3
To avoid conflicts between the multiple Python environments that tend to accumulate on a single host, we use miniforge3 to manage isolated environments.
Install miniforge3:
# Download miniforge3 installation script
wget -c https://mirrors.bfsu.edu.cn/github-release/conda-forge/miniforge/LatestRelease/Miniforge3-Linux-x86_64.sh
# Run the installation script
bash Miniforge3-Linux-x86_64.sh
# 1. Press Enter to continue
# 2. Use the down arrow to scroll through the agreement
# 3. Enter yes at the end
# 4. When prompted "Proceed with initialization?", enter yes
You can check https://mirrors.bfsu.edu.cn/github-release/conda-forge/miniforge/LatestRelease/ to find the current latest .sh filename.
Initialize the conda environment variable:
source ~/miniforge3/bin/activate
After success, (base) will appear at the beginning of the command line.
Create rknn-toolkit2 Environment
Create and activate a Conda environment: YOLO11-RKNN-Toolkit2 (Python 3.10 is recommended)
This environment will be needed later when converting the ONNX model to an RKNN model.
# Create environment
conda create -n YOLO11-RKNN-Toolkit2 python=3.10
# When prompted "Proceed ([y]/n)?"
# Enter y
Activate the Conda environment:
conda activate YOLO11-RKNN-Toolkit2
# After activation, (YOLO11-RKNN-Toolkit2) will appear at the beginning of the command line
Install dependencies:
# Install rknn-toolkit2
pip install rknn-toolkit2 -i https://mirrors.aliyun.com/pypi/simple
# Install onnx==1.18.0
pip install onnx==1.18.0 -i https://mirrors.aliyun.com/pypi/simple
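To confirm the toolkit installed correctly before moving on, a quick sanity check (rknn.api is the import path rknn-toolkit2 provides; the version lookup is a generic Python mechanism):
# Verify the rknn-toolkit2 installation
from importlib.metadata import version
from rknn.api import RKNN  # fails here if the wheel is broken

print("rknn-toolkit2", version("rknn-toolkit2"))
rknn = RKNN()   # constructing the object should succeed on an x86 host
rknn.release()  # free native resources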
After installation, exit the YOLO11-RKNN-Toolkit2 environment:
conda deactivate
Create yolo11 Environment
Create and activate a Conda environment: Tspi3-YOLO11 (Python 3.10 is recommended)
# Create environment
conda create -n Tspi3-YOLO11 python=3.10
# When prompted "Proceed ([y]/n)?"
# Enter y
Activate the Conda environment:
conda activate Tspi3-YOLO11
# After activation, (Tspi3-YOLO11) will appear at the beginning of the command line
Install dependencies for YOLO11:
pip install ultralytics onnx onnxscript -i https://mirrors.aliyun.com/pypi/simple
Test:
(Tspi3-YOLO11) lipeng@host:~/workspace$ yolo -v
8.3.248
Model Conversion
Next, we need to execute three important steps:
- Pull the .pt file.
- Use the Rockchip-optimized yolo11 project to export the ONNX model.
- Use rknn-toolkit2 to convert the ONNX model to a hardware-accelerated RKNN model.
Pull .pt File
The .pt file contains the trained YOLO11 model weights (parameters). Only with this file can we perform object detection; without it, the YOLO11 code is just an empty shell and cannot complete detection.
At https://github.com/ultralytics/assets/releases/, Ultralytics provides official .pt weight files. We just need to download what we need:
wget https://github.com/ultralytics/assets/releases/download/v8.3.0/yolo11n.pt
Export ONNX Model
Next, we need to pull Rockchip's officially modified ultralytics_yolo11 project, which has been specifically adapted for the RKNPU:
- Modified the output structure and removed post-processing from the model (the post-processing results are not quantization-friendly).
- Moved the DFL structure, which performs poorly on the NPU, to the post-processing stage outside the model; in most cases this improves inference performance (see the sketch below).
- Added a sum of confidence scores to the model's output branches to accelerate threshold filtering during post-processing.
Details: https://github.com/airockchip/ultralytics_yolo11/blob/main/RKOPT_README.zh-CN.md
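For intuition about the second point: DFL represents each box coordinate as a distribution over reg_max (by default 16) bins, and decoding takes the expectation of that distribution. A minimal numpy sketch of this host-side step (array shapes are illustrative assumptions; the real implementation lives in the demo's post-processing code):
import numpy as np

def dfl_decode(position, reg_max=16):
    # position: (n, 4 * reg_max, h, w) raw regression logits
    n, c, h, w = position.shape
    logits = position.reshape(n, 4, reg_max, h, w)
    # Softmax over the bin axis...
    probs = np.exp(logits - logits.max(axis=2, keepdims=True))
    probs /= probs.sum(axis=2, keepdims=True)
    # ...then the expected bin index gives the (l, t, r, b) offsets.
    bins = np.arange(reg_max).reshape(1, 1, reg_max, 1, 1)
    return (probs * bins).sum(axis=2)  # (n, 4, h, w)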
Continue using the Tspi3-YOLO11 environment:
conda activate Tspi3-YOLO11
Pull the airockchip/ultralytics_yolo11 project:
git clone https://github.com/airockchip/ultralytics_yolo11.git
After pulling, navigate to the directory:
cd ultralytics_yolo11
Modify the model entry in the ultralytics_yolo11/ultralytics/cfg/default.yaml file to the absolute path of the .pt file you just pulled:
Fill this in according to your own .pt file path.
(Tspi3-YOLO11) lipeng@host:~/workspace/ultralytics_yolo11$ git diff
diff --git a/ultralytics/cfg/default.yaml b/ultralytics/cfg/default.yaml
index 97f7239e0..fa1f915bb 100644
--- a/ultralytics/cfg/default.yaml
+++ b/ultralytics/cfg/default.yaml
@@ -5,7 +5,7 @@ task: detect # (str) YOLO task, i.e. detect, segment, classify, pose, obb
mode: train # (str) YOLO mode, i.e. train, val, predict, export, track, benchmark
# Train settings -------------------------------------------------------------------------------------------------------
-model: yolo11n.pt # (str, optional) path to model file, i.e. yolo11n.pt, yolo11n.yaml
+model: /home/lipeng/workspace/yolo11n.pt # (str, optional) path to model file, i.e. yolo11n.pt, yolo11n.yaml
data: # (str, optional) path to data file, i.e. coco8.yaml
epochs: 100 # (int) number of epochs to train for
time: # (float, optional) number of hours to train for, overrides epochs if supplied
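To double-check the edit took effect, a quick check (PyYAML ships as an ultralytics dependency; the expected path is the one from the diff above):
# Confirm default.yaml now points at your weights
import yaml

with open("ultralytics/cfg/default.yaml") as f:
    cfg = yaml.safe_load(f)
print(cfg["model"])  # expect: /home/lipeng/workspace/yolo11n.pt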
Add the current directory to PYTHONPATH so the locally modified package is used:
export PYTHONPATH=./
Use the script to start exporting the ONNX model:
python ./ultralytics/engine/exporter.py
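The exported yolo11n.onnx lands alongside the .pt file (here, /home/lipeng/workspace/yolo11n.onnx, which is the path used in the conversion step below). The same export can also be driven from Python; a minimal sketch, assuming the fork's modified exporter accepts format="rknn" as its RKOPT_README describes:
# Hedged alternative to running exporter.py directly; run from the
# ultralytics_yolo11 checkout so the modified package is imported.
from ultralytics import YOLO

model = YOLO("/home/lipeng/workspace/yolo11n.pt")  # path set in default.yaml
model.export(format="rknn")  # emits the RKNPU-friendly ONNX described above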
ONNX to RKNN
Exit the Tspi3-YOLO11 environment:
conda deactivate
Enter the YOLO11-RKNN-Toolkit2 environment:
conda activate YOLO11-RKNN-Toolkit2
Next, we will use the conversion script from rknn_model_zoo to convert the ONNX model to an RKNN model. Pull the project:
git clone https://github.com/airockchip/rknn_model_zoo.git
Navigate to the rknn_model_zoo/examples/yolo11/python directory:
cd rknn_model_zoo/examples/yolo11/python
Run the rknn_model_zoo/examples/yolo11/python/convert.py script to convert to an RKNN model:
# Syntax: python3 convert.py onnx_model_path [platform] [dtype] [output_rknn_path]
## platform: [rk3562, rk3566, rk3568, rk3576, rk3588, rv1126b, rv1109, rv1126, rk1808]
## dtype: [i8, fp] for [rk3562, rk3566, rk3568, rk3576, rk3588, rv1126b]
## dtype: [u8, fp] for [rv1109, rv1126, rk1808]
python convert.py /home/lipeng/workspace/yolo11n.onnx rk3576 i8
platform options include rk3562, rk3566, rk3568, rk3576, rk3588, rv1126b, rv1109, rv1126, rk1808
dtype:
- Select i8 or fp for platforms: rk3562, rk3566, rk3568, rk3576, rk3588, rv1126b
- Select u8 or fp for platforms: rv1109, rv1126, rk1808
After successful execution, a .rknn model file will be generated in the rknn_model_zoo/examples/yolo11/model directory.
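Under the hood, convert.py follows the standard rknn-toolkit2 flow. A condensed sketch of the same steps (the mean/std values, calibration dataset, and output path mirror the example's defaults and are assumptions here):
from rknn.api import RKNN

rknn = RKNN(verbose=False)
# Bake preprocessing into the model: (pixel - 0) / 255, targeting the RK3576 NPU
rknn.config(mean_values=[[0, 0, 0]], std_values=[[255, 255, 255]],
            target_platform="rk3576")
rknn.load_onnx(model="/home/lipeng/workspace/yolo11n.onnx")
# i8 -> quantize against a calibration dataset; fp -> do_quantization=False
rknn.build(do_quantization=True, dataset="../../../datasets/COCO/coco_subset_20.txt")
rknn.export_rknn("../model/yolo11.rknn")
rknn.release()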
Demo Compilation
Overview
The official Rockchip open-source project provides demos written in C++. You can compile the sample code directly by running one of:
rknn_model_zoo/build-linux.sh
rknn_model_zoo/build-android.sh
These two scripts (with the cross-compilation paths replaced by your actual paths) compile the sample code directly. An install/<target>_linux_aarch64 or install/<target>_android_aarch64 folder will be generated, containing the compiled demo together with its model and lib folders (see the directory tree below).
Exit Environment
conda deactivate
When (base) appears at the beginning of the command line, you're done.
Install Cross-Compiler
We compile the demo on the PC but run the generated binaries on the LCSC-TaishanPi-3M-RK3576 board, so we need an aarch64 cross-compiler. Install it directly with apt:
sudo apt update && \
sudo apt install -y cmake make gcc-aarch64-linux-gnu g++-aarch64-linux-gnu
Compile
Navigate to the project directory:
cd rknn_model_zoo/
Grant executable permission to build-linux.sh:
sudo chmod +x ./build-linux.sh
Run the build script:
./build-linux.sh -t <target> -a <arch> -d <build_demo_name> [-b <build_type>] [-m] [-r] [-j]
-t : target (rk356x/rk3576/rk3588/rv1106/rv1126b/rv1126/rk1808)
-a : arch (aarch64/armhf)
-d : demo name
-b : build_type(Debug/Release)
-m : enable address sanitizer, build_type need set to Debug
-r : disable rga, use cpu resize image
-j : disable libjpeg to avoid conflicts between libjpeg and opencv
# Run the RK3576-related YOLO11 command:
./build-linux.sh -t rk3576 -a aarch64 -d yolo11
Note: The <demo name> parameter must match the target folder name in rknn_model_zoo/examples, because this parameter is used to select which demo to compile.
The final generated install/ directory structure is as follows:
install/
`-- rk3576_linux_aarch64
`-- rknn_yolo11_demo
|-- lib
| |-- librga.so
| `-- librknnrt.so
|-- model
| |-- bus.jpg
| |-- coco_80_labels_list.txt
| `-- yolo11.rknn
|-- rknn_yolo11_demo
`-- rknn_yolo11_demo_zero_copy
4 directories, 7 files
Board Demo Presentation
Transfer Files
Next, we need to transfer the rknn_model_zoo/install/rk3576_linux_aarch64/rknn_yolo11_demo directory to the board:
It is recommended to use the adb tool for transfer. The LCSC-TaishanPi-3M has ADB enabled by default. You can also use a TF card, SSH, or a USB drive. Refer to: https://wiki.lckfb.com/zh-hans/tspi-3-rk3576/system-usage/debian12-usage/adb-usage.html
adb push rknn_model_zoo/install/rk3576_linux_aarch64/rknn_yolo11_demo /home/lckfb/
Running on Board
For details, please read: https://github.com/airockchip/rknn_model_zoo/blob/main/examples/yolo11/README.md
We enter the LCSC-TaishanPi-3M development board terminal and navigate to the rknn_yolo11_demo/ directory:
# Navigate to the directory
cd rknn_yolo11_demo/
Set the dynamic library path (located in the ./lib subdirectory under the current directory):
# Set the dynamic library path (very important, otherwise errors will occur)
export LD_LIBRARY_PATH=./lib
Grant executable permission to the demo:
sudo chmod +x rknn_yolo11_demo
Run the Demo:
# Command format: ./rknn_yolo11_demo <RKNN model path> <input image path>
sudo ./rknn_yolo11_demo model/yolo11.rknn model/bus.jpg
An out.png image will be generated in the parent directory of rknn_yolo11_demo, containing the final detection results.
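If you prefer Python on the board, rknn_model_zoo also ships a Python demo under examples/yolo11/python. The board-side inference flow looks roughly like this; a minimal sketch, assuming rknn-toolkit-lite2 and OpenCV are installed on the board and the paths follow the demo layout (post-processing is omitted):
import cv2
from rknnlite.api import RKNNLite

rknn = RKNNLite()
rknn.load_rknn("model/yolo11.rknn")
rknn.init_runtime()

img = cv2.imread("model/bus.jpg")
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
img = cv2.resize(img, (640, 640))        # model input size

outputs = rknn.inference(inputs=[img])   # raw branch outputs
# DFL decode, score filtering, and NMS run on the CPU, as in the C demo.
rknn.release()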