Pi 5 + Hailo-8L: AI-Powered Object Recognition & Distance Measurement
Prelude: Imagine a vehicle cruising at 54 km/h — which is approximately the speed of an object moving 0.5 meters per frame at 30 frames per second (0.5 × 30 × 3.6 = 54 km/h). Mounted on the vehicle’s roof is a camera system connected to a powerful Raspberry Pi 5, paired with the Hailo-8L AI accelerator. This compact yet high-performance setup forms a real-time vision system capable of detecting objects ahead and estimating their distances from the vehicle. When an object enters a predefined safety zone, the system can immediately trigger warnings or even engage emergency braking — enhancing both awareness and response time.
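The speed arithmetic in the prelude can be checked with a one-line conversion: metres per frame times frames per second gives m/s, and multiplying by 3.6 converts m/s to km/h.

```python
def speed_kmh(metres_per_frame: float, fps: float) -> float:
    """Convert per-frame displacement at a given frame rate to km/h.

    m/frame * frame/s = m/s; 1 m/s = 3.6 km/h.
    """
    return metres_per_frame * fps * 3.6

print(speed_kmh(0.5, 30))  # 0.5 m per frame at 30 fps -> 54.0 km/h
```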
Of course, object recognition through cameras is just one piece of the autonomous driving puzzle. Modern vehicles often combine it with LiDAR technology, which uses laser-based Time-of-Flight (ToF) measurements to calculate precise distances and build a detailed 3D map of the surroundings. However, while LiDAR excels at spatial accuracy, it lacks contextual understanding. That’s where vision-based systems — such as those powered by YOLO (You Only Look Once) — play a vital role, identifying not just shapes but the actual type and nature of objects.
To illustrate this, imagine LiDAR scans only the tail of an elephant crossing the road. It might register a narrow, slender object. A vision-based AI system, however, can recognise the visible tail as part of a much larger animal and correctly alert the system to a large, potentially dangerous obstacle ahead. Even when one object is partly hidden behind a larger one, a segmentation model paired with a good-quality camera can still recognise both. Combining the two technologies therefore ensures safer and more intelligent decision-making.
This vision processing is computationally intensive. That's where the Hailo-8L shines: capable of delivering up to 13 TOPS (tera operations per second), it handles the heavy lifting of object detection and segmentation. The Raspberry Pi 5, in turn, manages calculations, logic, and control. Together, they form a fast, efficient, and intelligent perception system, well suited even for real-time operation at 30 fps.
Notably, Tesla’s approach to autonomous driving relies solely on vision-based AI without using LiDAR — a design decision that reduces hardware cost while still delivering robust performance.
This project: Using an AI Accelerator HAT on a Raspberry Pi 5, we aim to develop a production-grade AI system that runs at a minimum of 30 frames per second. Designed to be mounted on top of a vehicle, the setup continuously monitors the road ahead, detects objects in real time, and calculates their distances from the vehicle. These live distance readings are displayed on the terminal. If any object comes closer than a predefined safety threshold, the system can trigger an alert through its GPIO pins. This triggering stage, although straightforward, has not been implemented yet.
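Since the GPIO triggering stage is left as future work, here is a minimal sketch of what it could look like, using the gpiozero library that ships with Raspberry Pi OS. The pin number and the 5 m threshold are illustrative assumptions, not values from the project.

```python
# Hypothetical GPIO alert stage (not part of measure_distance3.py).
# ALERT_PIN and SAFETY_THRESHOLD_M are assumed values for illustration.
ALERT_PIN = 17            # hypothetical BCM pin driving a buzzer or relay
SAFETY_THRESHOLD_M = 5.0  # hypothetical safety-zone boundary in metres

def should_alert(distance_m: float, threshold_m: float = SAFETY_THRESHOLD_M) -> bool:
    """Return True when a detected object is inside the safety zone."""
    return distance_m < threshold_m

def trigger_alert(distance_m: float) -> None:
    # Import gpiozero lazily so the decision logic stays testable off-device.
    if should_alert(distance_m):
        from gpiozero import Buzzer
        Buzzer(ALERT_PIN).beep(on_time=0.2, off_time=0.2, n=3)
```

On the Pi itself, `trigger_alert()` would be called once per frame with the smallest measured distance.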
Hardware:
- Raspberry Pi -5 8GB or 16GB [robu.in / alibaba.com INR:8K to 13K]
- Hailo-8L AI Accelerator for Pi-5 13 TOPS to 26 TOPS [robu.in / alibaba.com INR:7K to 11K]
- Pi-5 Active cooler - [robu.in INR:500]
- Pi-5 camera: Camera Module 3 or HQ/USB camera [robu.in / alibaba.com / amazon.in: INR 3K]
- Pi-5 power adapter etc.
Note: Since the AI HAT fully encloses the main processor, the Raspberry Pi 5 tends to heat up significantly. To prevent overheating, I strongly recommend using additional cooling. In our setup, only a specific fixed-type cooler fits the narrow space available once the AI HAT is mounted. The complete setup and individual components are shown below. Also, ensure that the Pi-5 is powered by a high-quality 5V, 4A power adapter to maintain stable performance.
With active cooling, you can safely overclock the Raspberry Pi 5 to extract additional performance—at the cost of slightly increased power consumption and heat generation. The fan connector from the cooler should be plugged into the designated fan header on the Pi-5 board. When paired with the AI HAT, the heavy-lifting of YOLO-based AI processing is handled by the accelerator, while the Pi-5 focuses on computations and control logic. This division of labor enables smooth and efficient operation, even at 30 frames per second!
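For reference, an overclock is set by adding lines to the boot configuration file. The frequencies below are an illustrative sketch, not tested values for this project; verify them against the official Raspberry Pi config.txt documentation for your board revision, and only attempt this with the active cooler fitted.

```
# Hypothetical overclock entries for /boot/firmware/config.txt (Pi 5)
arm_freq=2800            # CPU clock in MHz (stock is 2400)
over_voltage_delta=50000 # extra core voltage in microvolts
```

Remove the lines and reboot to return to stock clocks if the system becomes unstable.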
Software:
First, ensure you are running the 64-bit Raspberry Pi OS Bookworm; this project will not work on 32-bit Pi OS. We recommend a good-quality camera, Camera Module 3 or better.
$> sudo apt-get update
$> sudo apt-get upgrade
The most convenient way to operate a Raspberry Pi is to access it from your desktop or laptop over SSH, e.g. ssh bera@192.168.x.x. When prompted, enter the password and you will be inside the Raspi-5. This is not strictly necessary, however: you can instead connect a display, keyboard, and mouse directly to the Raspi-5.
By default, the Pi-5 runs its PCIe link at Gen 2.0 speed. To enable Gen 3.0 speed, do the following.
$> sudo raspi-config
Advanced Options -> PCIE Speed ->Choose “Yes” to enable Gen 3 mode
Select “Finish”. The system will ask for a reboot. Please go for reboot.
After reboot.
$> sudo apt install hailo-all
This command installs - Hailo Kernel device driver & firmware, HailoRT, Hailo Apps, libraries & rpicam-apps
$> sudo reboot #Do a reboot to take the changes in effect
$> hailortcli fw-control identify
The above command should produce output like this:
Executing on device: 0000:01:00.0
Identifying board
Control Protocol Version: 2
Firmware Version: 4.17.0 (release,app,extended context switch buffer)
Logger Version: 0
Board Name: Hailo-8
Device Architecture: HAILO8L
Serial Number: HLDDLBB234500054
Part Number: HM21LB1C2LAE
Product Name: HAILO-8L AI ACC M.2 B+M KEY MODULE EXT TMP
The last five lines confirm that the Hailo-8L board is installed. Sometimes the serial number or the part number is blank, but that is no problem; the Hailo-8L AI board still works. Now let's test rpicam-hello.
$> rpicam-hello -t 10s #This will open rpicam for 10 seconds and then will stop
$> rpicam-hello --help # Shows the help text
So now all the necessary ingredients are installed. Next, let's make sure rpicam-apps, which carries the Hailo post-processing stages, is present and up to date.
$> sudo apt update && sudo apt install rpicam-apps
Now we can roll up our sleeves and taste some of the high-speed object-detection demos that ship with rpicam-apps. Here are a few command lines to try:
$> rpicam-hello -t 0 --post-process-file /usr/share/rpi-camera-assets/hailo_yolov6_inference.json
$> rpicam-hello -t 0 --post-process-file /usr/share/rpi-camera-assets/hailo_yolov8_inference.json
$> rpicam-hello -t 0 --post-process-file /usr/share/rpi-camera-assets/hailo_yolov5_personface.json
[this code identifies face - you can use this for face counting]
$> rpicam-hello -t 0 --post-process-file /usr/share/rpi-camera-assets/hailo_yolov8_pose.json
[this code identifies pose - you can use this for fall detection]
The commands above use rpicam-hello directly to play with the YOLO AI models. OK, now let's do some Python coding.
First, clone the examples repository to the Raspberry Pi 5:
$> git clone https://github.com/hailo-ai/hailo-rpi5-examples.git
Once the repository is downloaded, change into its directory:
$> cd hailo-rpi5-examples
In the directory you will find two files - setup_env.sh & requirements.txt
The Hailo-8L requires specific package versions; mixing these with your system-wide Python packages may cause conflicts. To avoid this, it is recommended to use a dedicated virtual environment, where you can install a particular version of any package, even if a newer version is already installed system-wide, without creating any conflict. Therefore, first enter the virtual environment provided by the Hailo-8L package:
$> source setup_env.sh
Once you run this code the prompt will change.
(venv_hailo_rpi5_examples) bera@raspberrypi:~/hailo-rpi5-examples $
Inside this virtual environment run the next code.
(venv_hailo_rpi5_examples) bera@raspberrypi:~/hailo-rpi5-examples $ pip install -r requirements.txt
requirements.txt lists a few packages, which can also be installed individually.
$>cat requirements.txt #Lets see what is there in the requirements.txt file
numpy<2.0.0
setproctitle
opencv-python
$> pip install "numpy<2.0.0" # quote the version spec so the shell does not treat < as a redirect
$> pip install setproctitle
$> pip install opencv-python
Now to download the necessary Hailo models, run this code.
$> ./download_resources.sh
The following commands install the additional “Hailo Applications Infrastructure” on the Raspi-5, part of which is optional for our code.
$> git clone https://github.com/hailo-ai/hailo-apps-infra.git
$> pip install --force-reinstall -v -e . # optional; run inside the cloned directory
The git command gives you more Python code to play with. The last command is entirely optional; you may run it or skip it, as at this point we already have all the required resources. Note also that all of this software is open source.
At this point I can assure you that all the software for this project is installed. What remains is to play with the Hailo-8L, try new Python code, and from there build our measure_distance3.py script.
Sample Python scripts are already provided in the basic_pipelines subdirectory, for example detection_simple.py and pose_estimation.py. These scripts can take input from predefined sources such as *.mp4 files, the rpicam, or a USB camera:
$> python basic_pipelines/detection_simple.py --input rpicam [data from camera]
$> python basic_pipelines/pose_estimation.py --input test.mp4 [data from video file]
$> python basic_pipelines/detection_simple.py --input /dev/video0 [data from USB cam]
$> python basic_pipelines/detection.py --help [help for this script]
For custom models with different labels, use the --labels-json flag to load your labels file (e.g., resources/barcode-labels.json):
$> python basic_pipelines/detection.py --labels-json resources/barcode-labels.json --hef-path resources/yolov8s-hailo8l-barcode.hef --input resources/barcode.mp4
Distance measurement theory:
Distance is estimated by comparing an object's known standard width with its apparent width in the image. Consider the pinhole-camera principle: an inverted, proportionally scaled image of the object forms on the focal plane, and as the object moves further away, its image becomes smaller.
If we know the focal length of the camera lens, the standard width of the object, and the width of its image, then by similar triangles it is easy to compute the object's distance from the lens. At the beginning of the code we define standard widths for common objects, along with a default object width.
The segmentation model [yolov5n_seg_h8l_mz.hef] helps us assess the full object even when only a segment of it is visible in the image window. From this we measure the image width, compare it with the standard width, and derive the distance at which the object is situated.
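The similar-triangles relation can be written as distance = real_width × focal_length / image_width, where the focal length and image width are in the same units (here, pixels). The focal length of 1000 px is an illustrative assumption; in practice you calibrate it by imaging an object of known width at a known distance.

```python
# Pinhole-camera similar triangles:
#   image_width / focal_px = real_width / distance
#   => distance = real_width * focal_px / image_width
FOCAL_PX = 1000.0  # assumed focal length expressed in pixels (calibrate per camera)

def distance_m(real_width_m: float, image_width_px: float,
               focal_px: float = FOCAL_PX) -> float:
    """Estimate object distance in metres from its apparent pixel width."""
    return real_width_m * focal_px / image_width_px

# A car of standard width 1.8 m spanning 300 px sits about 6 m away:
print(distance_m(1.8, 300))
```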
# Object width dictionary for distance estimation (in meters)
OBJECT_WIDTHS = {
"person": 0.4, "bicycle": 0.5, "car": 1.8, "motorcycle": 0.8, "bus": 2.5,
"truck": 2.5, "airplane": 36.0, "train": 3.2, "boat": 5.0, "traffic light": 0.6,
"fire hydrant": 0.3, "stop sign": 0.75, "cat": 0.3, "dog": 0.6, "horse": 1.2,
"cow": 1.5, "elephant": 3.2, "bear": 1.7, "zebra": 1.2, "giraffe": 2.0,
"bench": 1.2, "chair": 0.6, "couch": 2.0, "dining table": 1.8,
"laptop": 0.4, "tv": 1.2
}
DEFAULT_OBJECT_WIDTH = 0.5 # Default width for unknown objects
FOCAL_LENGTH = 0.5 # in meters Raspberry Pi Camera Module 3 Wide Spec
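A sketch of how these constants are likely consumed, with unknown labels falling back to the default width. This is an illustrative reconstruction, not the exact code of measure_distance3.py, and the normalisation of the measured image width is an assumption.

```python
# Hedged sketch of the lookup-and-estimate step (excerpt of the table above).
OBJECT_WIDTHS = {"person": 0.4, "car": 1.8, "bus": 2.5}
DEFAULT_OBJECT_WIDTH = 0.5  # fallback for labels not in the table
FOCAL_LENGTH = 0.5          # the script's effective focal-length constant

def estimate_distance(label: str, image_width: float) -> float:
    """Look up the object's standard width (default for unknown labels)
    and apply the similar-triangles relation."""
    real_width = OBJECT_WIDTHS.get(label, DEFAULT_OBJECT_WIDTH)
    return real_width * FOCAL_LENGTH / image_width
```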
$> python measure_distance3.py --input rpicam # to get data from camera
$> python measure_distance3.py --input /dev/video0 # to get data from USB camera
About 80 classes of objects are defined in the model yolov5n_seg_h8l_mz.hef. However, our code considers only the 26 distinct objects whose standard widths are listed above and whose distances we want to find.
Usage of distance monitoring: The combination of Raspberry Pi 5 and Hailo-8L AI HAT, powered by a reliable 5 V, 4 A supply, is highly suited for stationary or embedded applications requiring real-time AI inference with distance estimation. This setup can be leveraged in the following scenarios:
1. Car Dashboard Monitoring: Mounted over the windshield, the system can detect vehicles and pedestrians ahead, estimate distances, and alert the driver in case of potential collisions or unsafe following distance. You can add a synthesiser code like espeak-ng for issue of voice advisory through speaker!
2. Reverse Gear Monitoring with Advisory: Installed at the rear of the car, it can detect and estimate the proximity of objects or people during parking, and provide visual/audio alerts or automatic braking triggers.
3. Smart Door Entry Monitoring: Installed at the entrance, it can detect approaching individuals, classify them (e.g., delivery person, known visitor, stranger), and estimate distance to enable automated doorbell activation, lighting, or even door unlocking.
4. Occupancy & Crowd Analysis in Buildings: Monitor people density and spacing in halls or waiting areas to trigger ventilation systems or crowd control alerts when individuals are too close or too many.
5. Warehouse Safety Systems: Used to detect forklifts or personnel in motion and estimate their distance to sensitive areas, triggering alarms or slowing down automated machinery in case of near-approach.
6. Robot Vision for Indoor Navigation: Used in AGVs (Automated Guided Vehicles) or indoor robots to identify obstacles and measure distances for safer and more efficient navigation.
7. School Zone or Gate Surveillance: Detect children or guardians at variable distances and provide alerts or gate automation based on crowd behavior.
8. Traffic Monitoring at Toll Booths or Bridges: Identify vehicle types and their distance for toll automation, overloading alerts, or traffic density analysis.
9. School bus gate monitoring: To monitor the incoming traffic & children movements near the school bus gate.
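The voice advisory mentioned in scenario 1 can be sketched with the espeak-ng command-line synthesiser called via subprocess. The threshold and message wording below are assumptions for illustration.

```python
from typing import Optional
import subprocess

SAFETY_THRESHOLD_M = 5.0  # hypothetical safety-zone boundary in metres

def advisory_text(label: str, distance_m: float) -> Optional[str]:
    """Compose a warning sentence only for objects inside the safety zone."""
    if distance_m < SAFETY_THRESHOLD_M:
        return f"Warning. {label} at {distance_m:.1f} metres."
    return None

def speak(text: str) -> None:
    # Requires the synthesiser: sudo apt install espeak-ng
    subprocess.run(["espeak-ng", text], check=False)

# Usage (on the Pi): msg = advisory_text("person", 3.2); speak(msg) if msg else None
```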
Prototype: [see pic]
Flickering display issue: The Raspberry Pi 5 can output video to either a small TFT display (e.g., 5” or 7” with HDMI input) or a larger screen such as a 4K wall-mounted TV. While the display output is typically stable on a TV, smaller TFT screens may exhibit instability, such as flickering or blanking, especially during high-load tasks like AI inference. A TFT/LCD display typically draws substantial power, particularly for its backlight. Therefore, first make sure the display is powered from a separate source, not from the Raspberry Pi 5's USB port.
If the problem persists, you can modify the HDMI settings in the configuration file. Leaving every other line intact, add the lines below at the bottom of the file, then save it and reboot for the changes to take effect.
$> sudo nano /boot/firmware/config.txt
# Bera added these lines
[HDMI:0]
hdmi_force_hotplug=1
hdmi_group=2
hdmi_mode=87
hdmi_drive=2
hdmi_timings=1024 1 48 32 80 600 1 3 4 23 0 0 0 60 0 64000000 3
[HDMI:1] # Settings for your TV (optional, can be removed if unnecessary)
hdmi_group=1 # CEA mode (for TVs)
hdmi_mode=16 # 1080p @ 60Hz (adjust based on TV)
hdmi_drive=2
hdmi_force_hotplug=1
config_hdmi_boost=5
Aftermath: The Hailo-8L AI accelerator delivers a remarkable performance boost of 13 to 26 TOPS [the latest version provides 26 TOPS] to the Raspberry Pi 5, making it competitive with devices like the NVIDIA Jetson Nano. However, the current software ecosystem and installation process are not as user-friendly, which may pose a challenge for beginners.
That said, this is clearly just the beginning. We can expect more such high-performance, low-power AI hardware in the near future—accompanied by improved software support and streamlined development tools.
The era of bulky, high-cost computing setups is fading. Compact, power-efficient, and AI-capable edge devices like these are fast becoming the new standard for intelligent computing at the edge.
Software: attached: measure_distance3.py
Bye,
Somnath Bera & Reon Samanta
Kolkata / West Bengal