Tutorial: Running YOLOv5 Machine Learning Detection on a Raspberry Pi 4

Jordan Johnston
8 min read · Apr 8, 2021


Camera on a microcomputer

Introduction

Machine learning is becoming abundant in our modern world, from detectors that recognize imperfections in fruit to systems that tag YouTube videos by analyzing individual frames. While it may seem that machine learning detection requires massive computers, some applications do not need such immense processing power and can run on embedded systems without a significant loss in performance.

YOLOv5 (You Only Look Once) is the latest version in a family of machine learning algorithms for object detection, which means the detector searches for specific objects, like people or cars, in an image. There are multiple tutorials showing how to run YOLOv5 on a desktop, but the ability to run it on an embedded device (such as a Raspberry Pi) cannot be overlooked, given the low power consumption and versatility of such devices compared to full-sized computers. Embedded devices are computer systems that have their RAM, CPUs, and other accessories attached to a single board. The Raspberry Pi 4, for instance, is an embedded system with all of its processing components, USB slots, power ports, and more built in, allowing it to work as a tiny computer for many purposes.

In this tutorial, I will walk you through my full installation process for YOLOv5 on a Raspberry Pi 4, ending with a final test to ensure it is working. While experts can follow the tutorial easily, I will also include some of the specific knowledge I picked up during development so that beginners or curious readers can understand what we are working with. With introductions out of the way, let us begin!

Choosing Your Operating System

The Raspberry Pi 4 is fully capable of running 64-bit operating systems, unlike the Raspberry Pi 3, which was held back by its 1 GB of RAM for such OSes. Because of this, the RPi4 now has access to a massive number of 64-bit applications that were previously out of reach, including YOLOv5. I installed and tested YOLOv5 using a 16 GB microSD card running 64-bit Ubuntu, but since the steps are fairly general Linux procedures, it may run fine on other distributions.

If you want to follow my exact path, here is the link to download the 64-bit Ubuntu version I used (Ubuntu Desktop 20.10 for Raspberry Pi):

https://ubuntu.com/download/raspberry-pi

And the Raspberry Pi OS Imager software:

https://www.raspberrypi.org/software/

Follow the steps in the Pi OS Imager software to continue. Once your microSD card is imaged, insert it into your RPi4 and follow the Ubuntu setup instructions until complete.

Downloading YOLOv5

YOLOv5 can be downloaded and tinkered with from the official GitHub page (https://github.com/ultralytics/yolov5), but for this tutorial I will be using my own curated version of YOLOv5, found at the GitHub link below. Use the following command to clone my repository into your home directory (or any directory, as long as you keep track of the file paths):

git clone https://github.com/jordan-johnston271/yolov5-on-rpi4-2020.git

If git is not recognized, you can install it with the command below. Make sure to start all of these commands with “sudo”, as it gives you “Super User” privileges, similar to Administrator on Windows:

sudo apt install git-all

Once you have my repository downloaded, it should look like the image below, with the four folders “dist-packages”, “johnston_yolov5”, “local-bin”, and “.git”:

After downloading the repository, you should have the four folders “dist-packages”, “johnston_yolov5”, “local-bin”, and “.git”
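If you would like a quick sanity check from Python instead of the file browser, listing the directory works too. This is just an optional check, and it assumes you cloned the repository into your home directory:

import os

# Assumes the repository was cloned into your home directory; adjust the path otherwise.
repo = os.path.expanduser("~/yolov5-on-rpi4-2020")
print(sorted(os.listdir(repo)))  # expect ['.git', 'dist-packages', 'johnston_yolov5', 'local-bin']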

How to use my specific YOLOv5 repository for training and detection will be discussed in detail later.

Installing PyTorch and TorchVision

Now we enter the most arduous part of my YOLOv5-on-embedded-systems journey: a different CPU architecture. Most desktops use the x86 CPU architecture (originally from Intel), which follows CISC (Complex Instruction Set Computing), while embedded systems are typically built around the Arm architecture, which follows RISC (Reduced Instruction Set Computing). RISC uses simpler instructions to perform actions, while CISC combines those smaller instructions into single instructions that take up more storage but simplify processes. This fundamental difference means that programs built for x86 cannot run on Arm without major adjustment, and vice versa.

Our issue is that PyTorch and TorchVision, the open-source Python machine learning libraries YOLOv5 relies on, only ship stable builds for x86 machines. However, the developers also publish nightly builds that work on aarch64 (Arm) devices, though these may be missing some functionality since they are still in development. For this tutorial, the nightly builds worked perfectly fine.
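If you want to see for yourself which architecture your machine reports (purely an optional check), Python’s built-in platform module will tell you:

import platform

# On 64-bit Ubuntu for the Raspberry Pi 4 this should print "aarch64";
# on a typical desktop it prints "x86_64", which is why the stable x86 builds do not help us here.
print(platform.machine())
print(platform.python_version())  # this tutorial assumes Python 3.8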

Before getting PyTorch and TorchVision, make sure pip3 is installed by using the following command. pip3 is the primary tool for installing Python libraries on Linux:

sudo apt-get install python3-pip

Next, install NumPy, which is a large mathematical library for Python:

sudo pip3 install numpy

Finally, there are two ways to install PyTorch and TorchVision. The first is the simplest, where you ask pip3 to find the most suitable builds for your system and install them automatically:

sudo pip3 install --pre torch torchvision -f https://download.pytorch.org/whl/nightly/cpu/torch_nightly.html

If the above command installed PyTorch and TorchVision, move to the next section.

Since the above command uses nightly builds that are subject to change, I cannot guarantee that it will keep working for years to come. The other method therefore installs the exact PyTorch and TorchVision libraries that I used onto your system. This requires copying files into a couple of directories where Python libraries are stored. I am using Python 3.8, but the process should be similar for future versions of Python.

First, move to the “yolov5-on-rpi4-2020” directory in the terminal. Then, enter the following command to copy the Python libraries in the “dist-packages” folder into your Python 3.8 library directory. This may need some small adjustments for future versions of Python:

sudo cp -r dist-packages/* /usr/local/lib/python3.8/dist-packages/

Make sure the caffe2, torch, torch-1.8.0, torchvision, and torchvision-0.9.0 folders look like this in the dist-packages directory:

After using the copy command above, your “dist-packages” directory should include the five folders “caffe2”, “torch”, “torch-1.8.0.dev20201210.dist-info”, “torchvision”, and “torchvision-0.9.0.dev20201210.dist-info”

Then, copy the files in the “local-bin” folder into your RPi4’s local bin using the following command:

sudo cp -r local-bin/* /usr/local/bin/

Ensure the Python programs “convert-caffe2-to-onnx” and “convert-onnx-to-caffe2” are included:

After using the command above, your /usr/local/bin/ directory should include the two Python programs “convert-caffe2-to-onnx” and “convert-onnx-to-caffe2”

Lastly, install one more Python library that PyTorch needs:

sudo pip3 install typing-extensions

A Small PyTorch Change

Since we installed PyTorch from a nightly build, we can expect some modules to be missing or altered compared to the stable release. Thankfully, YOLOv5 only requires one change in PyTorch’s code to function properly. You can make the edit in any text editor, but nano (a terminal editor included with most Linux distributions) is easiest to follow in a couple of commands. Start by opening “activation.py” in nano:

sudo nano /usr/local/lib/python3.8/dist-packages/torch/nn/modules/activation.py

Next, search by hitting CTRL+W and typing “return F.hardswish”. Once found, delete “self.inplace” from the arguments. If you installed my versions of PyTorch and TorchVision, this will already be done for you, but it is good to check anyway. Finally, save and exit with CTRL+X. The only change is ‘return F.hardswish(input, self.inplace)’ to ‘return F.hardswish(input)’, as shown below:

The only change we need to make in PyTorch’s “activation.py” file is “F.hardswish(input, self.inplace)” to “F.hardswish(input)”
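For reference, here is a simplified sketch of what the edited Hardswish module looks like after the change. This is not the full file, just the relevant part, so treat it as a guide for what to expect rather than something to paste in:

from torch import Tensor
from torch.nn import Module
from torch.nn import functional as F

# Simplified sketch of the Hardswish class in torch/nn/modules/activation.py after the edit.
class Hardswish(Module):
    def __init__(self, inplace: bool = False) -> None:
        super().__init__()
        self.inplace = inplace

    def forward(self, input: Tensor) -> Tensor:
        # Before the edit this line was: return F.hardswish(input, self.inplace)
        return F.hardswish(input)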

A quick test to ensure PyTorch and TorchVision are installed is as follows (the last command exits the Python interpreter):

python3

import torch

import torchvision

x = torch.rand(5, 3)

print(x)

quit()

After using the commands above, printing “torch.rand(5, 3)” should output a 5 x 3 matrix of random decimal values. This means PyTorch and TorchVision were installed successfully.
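If you want to double-check which builds actually ended up on your system (handy if you mixed the two install methods), you can also print the version strings. With the libraries copied from my repository, these should report the December 2020 nightly dev builds:

import torch
import torchvision

# With the copied libraries, expect nightly versions along the lines of
# 1.8.0.dev20201210 for torch and 0.9.0.dev20201210 for torchvision.
print(torch.__version__)
print(torchvision.__version__)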

Installing Other YOLOv5 Requirements

Now that PyTorch and TorchVision are taken care of, the rest of the Python libraries can be easily installed in two steps. First, enter the yolov5 folder in the command terminal. If the folder is in your home directory and unchanged, you can use this:

cd yolov5-on-rpi4-2020/johnston_yolov5/yolov5/

Once in yolov5, use this next command to install the remaining libraries YOLOv5 needs. This may take a while, depending on your internet connection and device:

sudo pip3 install -r requirements.txt

Now, YOLOv5 should be installed and ready to use!
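If you want a quick sanity check that the main dependencies imported cleanly, you can try importing a few of them from Python. The list below is only a sample of the usual YOLOv5 requirements; requirements.txt itself is the authoritative list:

import importlib

# A few of the libraries YOLOv5's requirements.txt typically pulls in.
for name in ("cv2", "numpy", "yaml", "matplotlib", "tqdm"):
    try:
        importlib.import_module(name)
        print(f"{name}: OK")
    except ImportError as err:
        print(f"{name}: MISSING ({err})")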

Testing YOLOv5

Once all libraries are downloaded, we are ready to test YOLOv5 on the Raspberry Pi 4! One last note: ignore the warnings you will see when running the detector. Because of the small module change we made, the program raises warnings about a problem that never actually happens; the detector runs perfectly fine. So, make your way to the yolov5 directory and run this command for a pre-trained detector that finds people and other objects (you can find this command in the “commands.txt” file too):

python3 detect.py --weights yolov5s.pt --img 416 --conf 0.85 --source inference/images/

And in the inference/output/ folder, you should see multiple pictures including this one:

After running the YOLOv5 detector with the above command, one of the images in your inference/output/ directory should show three people in suits with the “person” classification boxed over them. This means your detector successfully found the objects it was looking for!

There you have it! YOLOv5 running on a Raspberry Pi 4. One of the latest machine learning detectors running on an embedded system, ready for all kinds of innovation and hobbyist projects! Thank you for following this tutorial, and please comment or send questions if you would like. Good luck with the new detector setup!

Further YOLOv5 Discussion

The tutorial is officially complete, and you have successfully installed YOLOv5 on your Raspberry Pi 4. But now you need to learn how to use it, so let me give some parting advice on working with my curated YOLOv5 folder. You can use these quick points as a starting place for further research.

Training

Training should not be done on a Raspberry Pi because it is highly resource intensive; use a desktop instead. To train the YOLOv5 detector, you must use annotated images in YOLO format. This means an image file (jpeg or otherwise) paired with an identically named text file in which each boxed object is written on its own line as “class x_center y_center width height”, with the coordinates normalized to the image’s width and height. Roboflow has great datasets and converters for this (https://public.roboflow.com/). Once you have the training dataset, place the images to be trained on in /coco128/images/train2017/, and their corresponding text files in /coco128/labels/train2017/.
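To make the label format concrete, here is a small sketch that converts one made-up pixel-space box into a YOLO-format line. The numbers and class index are purely illustrative:

# Hypothetical example: one "person" box given as pixel corners in a 640 x 480 image.
img_w, img_h = 640, 480
x_min, y_min, x_max, y_max = 100, 120, 300, 360
class_id = 0  # index of the class in your dataset's class list

# YOLO format wants the box center, width, and height, all normalized to 0-1.
x_center = (x_min + x_max) / 2 / img_w
y_center = (y_min + y_max) / 2 / img_h
width = (x_max - x_min) / img_w
height = (y_max - y_min) / img_h

# One line like this per object goes into the image's matching .txt file.
print(f"{class_id} {x_center:.6f} {y_center:.6f} {width:.6f} {height:.6f}")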

Then, enter the yolov5 directory in the command terminal and enter the following command:

python3 train.py --img 320 --batch 16 --epochs 300 --data coco128.yaml --cfg yolov5s.yaml --weights yolov5s.pt --cache-images

The training will begin, and it will take multiple hours or a couple of days, depending on your system. ‘epochs’ is the number of passes the training makes over your dataset. When the training is done, the weights and training results will be stored in the /yolov5/runs/ directory.

Detection

Detection can be done on the Raspberry Pi using the following command:

python3 detect.py --weights yolov5s.pt --img 416 --conf 0.85 --source inference/images/

The weights can be swapped for your own trained weights if you have them, ‘img’ is the size of the input image in pixels, ‘conf’ is the minimum confidence the detector must have for an object to be boxed on the output image, and the ‘source’ can be changed to any directory of images.
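To make the ‘conf’ flag concrete, here is a toy sketch of what a confidence threshold does; the detections below are made up. Only boxes scoring at or above the threshold end up drawn on the output image:

# Made-up detections as (label, confidence) pairs.
detections = [("person", 0.93), ("person", 0.88), ("dog", 0.42)]
conf_threshold = 0.85  # same role as --conf 0.85 above

# Keep only the detections that meet the threshold.
kept = [(label, score) for label, score in detections if score >= conf_threshold]
print(kept)  # [('person', 0.93), ('person', 0.88)]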

Written by Jordan Johnston

Electrical Engineering Bachelor’s Degree at Sonoma State University 2021 (Graduation with Distinction and Dean’s List)
