YOLOv8 Introduction
YOLOv8 is a state-of-the-art (SOTA) model that builds upon the success of previous YOLO versions and introduces new features and improvements to further enhance performance and flexibility.
💡 Tip
In YOLO-related model naming, letters like n / s / m / l / x typically indicate model size, meaning different network width/depth configurations that trade off parameters, computation, and accuracy/speed.
Common meanings:
- n = nano: smallest and fastest, relatively lower accuracy, suitable for edge/low-compute devices
- s = small
- m = medium
- l = large
- x = xlarge (extra-large): largest and slowest, typically the highest accuracy
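As an illustration, this naming convention can be parsed programmatically. A minimal sketch (the helper and mapping are my own, not part of any official API):

```python
# Map the size letter in a YOLOv8 model name to its meaning.
# Illustrative only; not part of the ultralytics API.
SIZE_MEANINGS = {
    "n": "nano",
    "s": "small",
    "m": "medium",
    "l": "large",
    "x": "xlarge",
}

def parse_model_name(filename: str) -> dict:
    """Split a name like 'yolov8n-seg.pt' into family, size, and task."""
    stem = filename.rsplit(".", 1)[0]      # drop the .pt extension
    base, _, task = stem.partition("-")    # e.g. 'yolov8n', '-', 'seg'
    size = base[len("yolov8"):]            # trailing size letter
    return {
        "family": "yolov8",
        "size": SIZE_MEANINGS.get(size, "unknown"),
        "task": task or "detect",          # no suffix means plain detection
    }

print(parse_model_name("yolov8n-seg.pt"))
```

The `yolov8n-seg.pt` weights used later in this guide thus decode to: nano size, segmentation task.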
Objective
We will deploy the model to the LCSC-TaishanPi-3M-RK3576 board and demonstrate using the official Demo from rknn_model_zoo.
Environment Preparation
- Host Environment: Ubuntu 22.04 (x86)
- Development Board: LCSC-TaishanPi-3M-RK3576
- Data Cable: Connect PC and development board for ADB file transfer.
Install miniforge3
To avoid Python environment conflicts between projects on a single host, we use miniforge3 to manage environments.
Install miniforge3:
```shell
# Download the miniforge3 installation script
wget -c https://mirrors.bfsu.edu.cn/github-release/conda-forge/miniforge/LatestRelease/Miniforge3-Linux-x86_64.sh
# Run the installation script
bash Miniforge3-Linux-x86_64.sh
# 1. Press Enter to continue
# 2. Use the down arrow to scroll through the license agreement
# 3. Enter yes at the end
# 4. When prompted "Proceed with initialization?", enter yes
```
You can check https://mirrors.bfsu.edu.cn/github-release/conda-forge/miniforge/LatestRelease/ to find the current latest `.sh` filename.
Initialize the conda environment variable:
```shell
source ~/miniforge3/bin/activate
```
After success, `(base)` will appear at the beginning of the command line.
Create rknn-toolkit2 Environment
Create and activate a Conda environment: YOLOv8-RKNN-Toolkit2 (Python 3.10 is recommended)
This will be needed later when converting the ONNX model to RKNN model.
```shell
# Create environment
conda create -n YOLOv8-RKNN-Toolkit2 python=3.10
# When prompted "Proceed ([y]/n)?", enter y
```
Activate the Conda environment:
```shell
conda activate YOLOv8-RKNN-Toolkit2
# After activation, (YOLOv8-RKNN-Toolkit2) will appear at the beginning of the command line
```
Install dependencies:
```shell
# Install rknn-toolkit2
pip install rknn-toolkit2 -i https://mirrors.aliyun.com/pypi/simple
# Install onnx==1.18.0
pip install onnx==1.18.0 -i https://mirrors.aliyun.com/pypi/simple
```
After installation, exit the YOLOv8-RKNN-Toolkit2 environment:
```shell
conda deactivate
```
Create YOLOv8 Environment
Create and activate a Conda environment: Tspi3-YOLOv8 (Python 3.10 is recommended)
```shell
# Create environment
conda create -n Tspi3-YOLOv8 python=3.10
# When prompted "Proceed ([y]/n)?", enter y
```
Activate the Conda environment:
```shell
conda activate Tspi3-YOLOv8
# After activation, (Tspi3-YOLOv8) will appear at the beginning of the command line
```
Install dependencies for YOLOv8:
```shell
# Basic dependencies
pip install tqdm numpy opencv-python torch torchvision pillow matplotlib pyyaml requests scipy pandas seaborn -i https://mirrors.aliyun.com/pypi/simple
# Install ultralytics
pip install ultralytics -i https://mirrors.aliyun.com/pypi/simple
```
Test:
```shell
(Tspi3-YOLOv8) lipeng@host:~/workspace$ yolo -v
8.3.248
```
Model Conversion
Next, we need to execute three important steps:
- Pull the .pt file.
- Use the Rockchip-optimized yolov8 project to export ONNX model.
- Use rknn-toolkit2 to convert the ONNX model to a hardware-accelerated RKNN model.
Pull .pt File
The .pt file is the trained YOLOv8 model weights (parameters). Only by obtaining this file can we perform object detection.
Otherwise, even with YOLOv8 code, it's just an empty shell and cannot complete detection.
At https://github.com/ultralytics/assets/releases/, Ultralytics provides official .pt weight files. We just need to download what we need:
```shell
wget https://github.com/ultralytics/assets/releases/download/v8.3.0/yolov8n-seg.pt
```
Export ONNX Model
Next, we need to pull the Rockchip officially modified ultralytics_yolov8 project, which has been specifically adapted for RKNPU:
- Modified the output structure and removed the in-model post-processing. (The post-processed results are not quantization-friendly.)
- Moved the DFL structure, which performs poorly on the NPU, out of the model into the post-processing stage; in most cases this improves inference performance.
- Added a summed confidence score to the model's output branches to accelerate threshold filtering during post-processing.
Details: https://github.com/airockchip/ultralytics_yolov8/blob/main/RKOPT_README.zh-CN.md
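For background on the second point: the DFL head predicts each box edge as a discrete distribution over bins, and post-processing recovers the edge offset as the softmax-weighted expectation over bin indices. A pure-Python sketch of that decode step, assuming YOLOv8's 16 bins per edge (illustrative, not the rknn_model_zoo implementation):

```python
import math

def dfl_decode(logits):
    """Decode one box edge from DFL logits:
    softmax over bins, then the expectation of the bin index.
    The decoded value is in grid-cell units."""
    m = max(logits)                                   # for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    probs = [e / total for e in exps]                 # softmax
    return sum(i * p for i, p in enumerate(probs))    # expectation

# A distribution sharply peaked at bin 4 decodes to an edge offset near 4.0.
logits = [0.0] * 16
logits[4] = 10.0
print(dfl_decode(logits))
```

Running this expectation on the NPU requires per-element exponentials and reductions that quantize poorly, which is why Rockchip moves it to the CPU-side post-processing.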
Continue using the Tspi3-YOLOv8 environment:
```shell
conda activate Tspi3-YOLOv8
```
Pull the airockchip/ultralytics_yolov8 project:
```shell
git clone https://github.com/airockchip/ultralytics_yolov8.git
```
After pulling, navigate to the directory:
```shell
cd ultralytics_yolov8
```
In the ultralytics_yolov8/ultralytics/cfg/default.yaml file, change the model field to the absolute path of the `.pt` file you just downloaded:
Fill in the path according to your own `.pt` file location.
```shell
(base) lipeng@host:~/workspace/yolov8/ultralytics_yolov8$ git diff
diff --git a/ultralytics/cfg/default.yaml b/ultralytics/cfg/default.yaml
index 8a77a7a8..7074c27c 100644
--- a/ultralytics/cfg/default.yaml
+++ b/ultralytics/cfg/default.yaml
@@ -5,7 +5,7 @@ task: detect # (str) YOLO task, i.e. detect, segment, classify, pose
 mode: train # (str) YOLO mode, i.e. train, val, predict, export, track, benchmark
 # Train settings -------------------------------------------------------------------------------------------------------
-model: yolov8n.pt # (str, optional) path to model file, i.e. yolov8n.pt, yolov8n.yaml
+model: /home/lipeng/workspace/yolov8/yolov8n-seg.pt # (str, optional) path to model file, i.e. yolov8n.pt, yolov8n.yaml
 data: # (str, optional) path to data file, i.e. coco8.yaml
 epochs: 100 # (int) number of epochs to train for
 time: # (float, optional) number of hours to train for, overrides epochs if supplied
```
Set the export path to the current directory:
```shell
export PYTHONPATH=./
```
Use the script to start exporting the ONNX model (the ONNX model will be saved to a path displayed in the terminal):
```shell
python ./ultralytics/engine/exporter.py
```
ONNX to RKNN
Exit the Tspi3-YOLOv8 environment:
```shell
conda deactivate
```
Enter the YOLOv8-RKNN-Toolkit2 environment:
```shell
conda activate YOLOv8-RKNN-Toolkit2
```
Next, we will use the conversion script from rknn_model_zoo to convert the ONNX model to an RKNN model. Pull the project:
```shell
git clone https://github.com/airockchip/rknn_model_zoo.git
```
Navigate to the rknn_model_zoo/examples/yolov8_seg/python directory:
```shell
cd rknn_model_zoo/examples/yolov8_seg/python
```
Run the rknn_model_zoo/examples/yolov8_seg/python/convert.py script to convert to an RKNN model:
```shell
# Syntax: python3 convert.py onnx_model_path [platform] [dtype] [output_rknn_path]
## platform: [rk3562, rk3566, rk3568, rk3576, rk3588, rv1126b, rv1109, rv1126, rk1808]
## dtype: [i8, fp] for [rk3562, rk3566, rk3568, rk3576, rk3588, rv1126b]
## dtype: [u8, fp] for [rv1109, rv1126, rk1808]
python convert.py /home/lipeng/workspace/yolov8/yolov8n-seg.onnx rk3576 i8
```
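Under the hood, convert.py drives the rknn-toolkit2 Python API. The core flow looks roughly like the sketch below; the mean/std values, dataset path, and file names here are illustrative assumptions, so consult the actual script in rknn_model_zoo:

```python
from rknn.api import RKNN

# Roughly what convert.py does; values are illustrative, not authoritative.
rknn = RKNN(verbose=False)

# Preprocessing baked into the RKNN model: per-channel mean/std and the
# target SoC. YOLOv8 inputs are normalized to [0, 1], hence dividing by 255.
rknn.config(mean_values=[[0, 0, 0]], std_values=[[255, 255, 255]],
            target_platform="rk3576")

rknn.load_onnx(model="yolov8n-seg.onnx")

# i8 quantization needs a calibration dataset: a text file listing images.
rknn.build(do_quantization=True, dataset="./calibration_images.txt")

rknn.export_rknn("yolov8n-seg.rknn")
rknn.release()
```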
`platform` options include rk3562, rk3566, rk3568, rk3576, rk3588, rv1126b, rv1109, rv1126, rk1808.
`dtype`:
- Select `i8` or `fp` for platforms: rk3562, rk3566, rk3568, rk3576, rk3588, rv1126b
- Select `u8` or `fp` for platforms: rv1109, rv1126, rk1808
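This platform-to-dtype pairing can be captured in a small lookup, which is handy if you wrap convert.py in your own tooling. The helper below is illustrative, not part of rknn_model_zoo:

```python
# Quantization dtypes accepted per target platform, per the table above.
I8_PLATFORMS = {"rk3562", "rk3566", "rk3568", "rk3576", "rk3588", "rv1126b"}
U8_PLATFORMS = {"rv1109", "rv1126", "rk1808"}

def valid_dtypes(platform: str) -> set:
    """Return the dtype strings convert.py accepts for a platform."""
    if platform in I8_PLATFORMS:
        return {"i8", "fp"}
    if platform in U8_PLATFORMS:
        return {"u8", "fp"}
    raise ValueError(f"unsupported platform: {platform}")

print(valid_dtypes("rk3576"))
```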
After successful execution, a .rknn model file will be generated in the rknn_model_zoo/examples/yolov8_seg/model directory.
Demo Compilation
Overview
The official Rockchip open-source project provides demos written in C++. You can compile the sample code directly by running one of:
```shell
rknn_model_zoo/build-linux.sh
rknn_model_zoo/build-android.sh
```
These two scripts (with the cross-compiler paths replaced by your actual paths) compile the sample code directly. An install/demo_Linux_aarch64 or install/demo_Android_aarch64 folder will be generated, containing the compiled demo and its lib folder.
Exit Environment
```shell
conda deactivate
```
When `(base)` appears at the beginning of the command line, you are done.
Install Cross-Compiler
We need to compile the Demo on the PC and run the generated binaries on the LCSC-TaishanPi-3M-RK3576 board, so we use apt to install the aarch64-linux-gnu cross-compilation toolchain:
```shell
sudo apt update && \
sudo apt install -y cmake make gcc-aarch64-linux-gnu g++-aarch64-linux-gnu
```
Compile
Navigate to the project directory:
```shell
cd rknn_model_zoo/
```
Grant executable permission to build-linux.sh:
```shell
sudo chmod +x ./build-linux.sh
```
Run the build script:
```shell
# Syntax: ./build-linux.sh -t <target> -a <arch> -d <build_demo_name> [-b <build_type>] [-m] [-r] [-j]
#   -t : target (rk356x/rk3576/rk3588/rv1106/rv1126b/rv1126/rk1808)
#   -a : arch (aarch64/armhf)
#   -d : demo name
#   -b : build_type (Debug/Release)
#   -m : enable address sanitizer; build_type needs to be set to Debug
#   -r : disable rga, use cpu to resize image
#   -j : disable libjpeg to avoid conflicts between libjpeg and opencv

# Run the RK3576 YOLOv8 segmentation build:
./build-linux.sh -t rk3576 -a aarch64 -d yolov8_seg
```
Note: The `<demo name>` parameter must match the corresponding folder name in rknn_model_zoo/examples, because it is used to select which Demo to compile.
The final generated install/ directory structure is as follows:
```shell
(base) lipeng@host:~/workspace/yolov8/rknn_model_zoo$ tree install/
install/
`-- rk3576_linux_aarch64
    `-- rknn_yolov8_seg_demo
        |-- lib
        |   |-- librga.so
        |   `-- librknnrt.so
        |-- model
        |   |-- bus.jpg
        |   |-- coco_80_labels_list.txt
        |   `-- yolov8_seg.rknn
        `-- rknn_yolov8_seg_demo

4 directories, 6 files
```
Board Demo Presentation
Transfer Files
Next, we need to transfer the rknn_model_zoo/install/rk3576_linux_aarch64/rknn_yolov8_seg_demo directory to the board:
It is recommended to use the `adb` tool for the transfer; the LCSC-TaishanPi-3M has ADB enabled by default. You can also use a TF card, SSH, or a USB drive. Refer to: https://wiki.lckfb.com/zh-hans/tspi-3-rk3576/system-usage/debian12-usage/adb-usage.html
```shell
adb push rknn_model_zoo/install/rk3576_linux_aarch64/rknn_yolov8_seg_demo /home/lckfb/
```
Running on Board
For details, please read: https://github.com/airockchip/rknn_model_zoo/blob/main/examples/yolo11/README.md
We enter the LCSC-TaishanPi-3M development board terminal and navigate to the rknn_yolov8_seg_demo/ directory:
```shell
# Navigate to the directory
cd rknn_yolov8_seg_demo/
```
Set the dynamic library path (located in the ./lib subdirectory under the current directory):
```shell
# Set the dynamic library path (very important, otherwise errors will occur)
export LD_LIBRARY_PATH=./lib
```
Grant executable permission to the demo:
```shell
sudo chmod +x rknn_yolov8_seg_demo
```
Run the Demo:
```shell
# Command format: ./rknn_yolov8_seg_demo <RKNN model path> <input image path>
sudo ./rknn_yolov8_seg_demo model/yolov8_seg.rknn model/bus.jpg
```
An `out.png` image containing the final detection results will be generated alongside the `rknn_yolov8_seg_demo` executable, i.e. in the directory you ran the command from.
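For reference, the demo letterboxes the input image to the model's square input (640x640 for these exports) before inference: it scales while preserving aspect ratio, then pads the leftover area. A sketch of that arithmetic (not the demo's C++ code; the 810x1080 size of the ultralytics bus.jpg is assumed):

```python
def letterbox_params(src_w, src_h, dst=640):
    """Compute scale and padding to fit an image into a dst x dst square
    while preserving aspect ratio; the remaining area is padded."""
    scale = min(dst / src_w, dst / src_h)  # the tighter dimension wins
    new_w, new_h = round(src_w * scale), round(src_h * scale)
    pad_x = (dst - new_w) // 2             # left padding
    pad_y = (dst - new_h) // 2             # top padding
    return scale, new_w, new_h, pad_x, pad_y

# bus.jpg is 810 x 1080: height limits the scale, so width gets padded.
print(letterbox_params(810, 1080))
```

Post-processing maps detections back to the original image by subtracting the padding and dividing by the scale.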