Pi-5 + Hailo-8L AI-powered Fall Identification & Recording project
Prelude: Imagine an Apple Watch with a fall detection feature that sends an SMS alert when the wearer experiences a fall. Stories of this feature's reliability circulate on social media, but there is one catch: the wearer must remember to wear the watch, especially when a fall is likely.
Now, enter a more proactive and hands-free solution: the Hailo-8L AI accelerator, integrated with a Raspberry Pi 5. This system harnesses a Neural Processing Unit (NPU) capable of delivering 13 to 26 TOPS of AI performance, depending on the variant (13 TOPS for the Hailo-8L, 26 TOPS for the Hailo-8). With such processing power, real-time image analysis at 30 frames per second is easily achievable.
Using the YOLOv8 pose estimation model optimized for the Hailo-8L, the system analyzes the anatomical posture of the human body to determine potential fall events. When a fall is detected based on pose inference, the system automatically:
• Captures a snapshot of the incident,
• Logs the event with a timestamp,
• Triggers a GPIO-based alarm, and
• Saves the image with metadata of the fall in a designated logs/ directory, and
• Sends an SMS/WhatsApp message to preselected numbers using Twilio.
This edge AI-based fall detection system operates autonomously: no wearables required, just intelligent monitoring. The catch is that vision processing is computationally intensive, and that is where the Hailo-8L shines. Capable of delivering up to 13 TOPS (Tera Operations Per Second), it handles the heavy lifting of object detection and segmentation, while the Raspberry Pi 5 manages calculations, logic, and control. Together, they form a fast, efficient & intelligent perception system, well suited to real-time operation at 30 fps.
Twilio Messaging: A brief how-to for Twilio messaging follows. First set up a free Twilio account: go to https://www.twilio.com/try-twilio, sign up with your email, and verify your phone number. After logging in, Twilio gives you a free trial balance and access to a trial Account SID, Auth Token, and sandbox numbers for SMS and WhatsApp.
For WhatsApp, activate the sandbox in the Twilio Console under Messaging. Save the sandbox WhatsApp number in your mobile, open it in WhatsApp and send the message “join bill-loose” to it, then follow the on-screen instructions to join via your personal WhatsApp. For SMS, use the trial phone number assigned to your account to send messages to verified numbers (in trial mode you must verify any number before sending to it). Use any of the scripts shown on the side panel (Python, PHP, Java, C#, curl, Ruby or Node.js) with your credentials [account_sid & auth_token] to send a message via Twilio’s API endpoints.
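For example, here is a minimal Python sketch using the official twilio package (pip install twilio); the SID, token and phone numbers below are placeholders that you must replace with the values shown in your own Console:
from twilio.rest import Client

account_sid = "ACXXXXXXXXXXXXXXXXXXXXXXXX"   # Account SID from the Twilio Console
auth_token = "your_auth_token"               # Auth Token from the Twilio Console
client = Client(account_sid, auth_token)

# WhatsApp: the from_ number is the sandbox number shown in your Console
message = client.messages.create(
    body="Fall detected - please check the monitored area!",
    from_="whatsapp:+14155238886",   # Twilio WhatsApp sandbox number (check yours)
    to="whatsapp:+91XXXXXXXXXX",     # your own, joined WhatsApp number
)
print(message.sid)

# For SMS, drop the "whatsapp:" prefix, use your trial phone number as from_
# and a verified destination number as to.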
A caution with Twilio: if you send a large number of messages in rapid succession, the free Twilio account may be blocked or suspended, or you may be asked to buy credits. WhatsApp allows only a limited number of free messages, while SMS gives you a larger margin to play with; so far I have not crossed any limit. In any case, enable messaging only once everything else works OK.
This project leverages an AI Accelerator HAT on the Raspberry Pi 5 to develop a production-grade fall detection system capable of running at a minimum of 30 frames per second. Designed for deployment in industrial environments such as shop floors, conveyor belts, busy road intersections, or spectator stands, it ensures that no fall incident goes unnoticed. The system continuously monitors the area, detects fall events, captures snapshots of the moment, and triggers an audible alarm via a GPIO-controlled switch.
BOM:
- Raspberry Pi 5, 8 GB or 16 GB [robu.in / alibaba.com, INR 8K to 13K]
- Hailo-8L AI Accelerator for Pi-5, 13 TOPS to 26 TOPS [robu.in / alibaba.com, INR 7K to 11K]
- Pi-5 Active Cooler [robu.in, INR 500]
- Pi-5 camera [Camera Module 3 or HQ/USB camera] [robu.in / alibaba.com / amazon.in, INR 3K]
- Pi-5 power adapter etc.
- Mini tripod for the Raspberry Pi camera [optional]
Note: Since the AI HAT fully encloses the main processor, the Raspberry Pi 5 tends to heat up significantly. To prevent overheating, I strongly recommend using additional cooling [item 3 in the BOM]. In our setup, only a specific fixed-type cooler fits the narrow space available once the AI HAT is mounted. The complete setup and individual components are shown below. Also ensure that the Pi-5 is powered by a high-quality 5V, 4A to 5A power adapter to maintain stable performance.
With active cooling, you can safely overclock the Raspberry Pi 5 to extract additional performance, at the cost of slightly increased power consumption and heat. The fan connector from the cooler should be plugged into the designated fan header on the Pi-5 board. When paired with the AI HAT, the heavy lifting of YOLO-based AI processing is handled by the accelerator, while the Pi-5 focuses on computations and control logic. This division of labour enables smooth and efficient operation, even at 30 or more frames per second!
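If you do overclock, the setting lives in /boot/firmware/config.txt; the value below is only illustrative, so raise it gradually and keep an eye on the temperature (vcgencmd measure_temp):
$> sudo nano /boot/firmware/config.txt
# add at the end, e.g. (illustrative value; the stock CPU clock is 2400 MHz):
arm_freq=2800
$> sudo reboot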
Necessary Software installation & preparation for our code:
First, ensure you have installed the 64-bit Raspberry Pi OS Bookworm. This project will not work on a 32-bit Pi OS. We recommend using a good-quality Camera Module 3 or better.
$> sudo apt-get update
$> sudo apt-get upgrade
The most convenient way to operate a Raspberry Pi is to access it from your desktop/laptop over SSH, e.g. ssh bera@192.168.x.x; when asked for the password, provide it and you will be inside the Raspi-5. However, this is not absolutely necessary: you can also connect a display, keyboard & mouse directly to the Raspi-5.
By default, the Pi-5 runs its PCIe link at Gen 2.0 speed. To enable Gen 3.0 speed, do the following:
$> sudo raspi-config
Advanced Options -> PCIe Speed -> Choose “Yes” to enable Gen 3 mode
Select “Finish”. The system will ask for a reboot; go ahead and reboot. After the reboot:
$> sudo apt install hailo-all
This command installs - Hailo Kernel device driver & firmware, HailoRT, Hailo Apps, libraries & rpicam-apps
$> sudo reboot # Reboot for the changes to take effect
$> hailortcli fw-control identify
The above command should produce output like this:
Executing on device: 0000:01:00.0
Identifying board
Control Protocol Version: 2
Firmware Version: 4.17.0 (release,app,extended context switch buffer)
Logger Version: 0
Board Name: Hailo-8
Device Architecture: HAILO8L
Serial Number: HLDDLBB234500054
Part Number: HM21LB1C2LAE
Product Name: HAILO-8L AI ACC M.2 B+M KEY MODULE EXT TMP
The last five lines confirm that the Hailo-8L board is installed. Sometimes the Serial Number or the Part Number is blank, but that’s no problem; the Hailo-8L AI board still works. Now let’s test rpicam-hello…
$> rpicam-hello -t 10s # Opens the camera preview for 10 seconds and then stops
$> rpicam-hello --help # Shows the help text
So now all the necessary ingredients have been installed. Next, let’s install the rpicam-apps for the Hailo AI accelerator.
$> sudo apt update && sudo apt install rpicam-apps
All set; now let’s roll up our sleeves and taste some of the high-speed object detection demos that come with rpicam-apps. Here are some command lines to try!
$> rpicam-hello -t 0 --post-process-file /usr/share/rpi-camera-assets/hailo_yolov6_inference.json
$> rpicam-hello -t 0 --post-process-file /usr/share/rpi-camera-assets/hailo_yolov8_inference.json
$> rpicam-hello -t 0 --post-process-file /usr/share/rpi-camera-assets/hailo_yolov5_personface.json
[this model identifies faces - you can use it for face counting]
$> rpicam-hello -t 0 --post-process-file /usr/share/rpi-camera-assets/hailo_yolov8_pose.json
[this model identifies poses - you can use it for fall detection]
The commands above use the rpicam-hello functionality directly to play around with YOLO AI models. OK, now let’s prepare for Python coding.
First, clone the hailo-rpi5-examples repository to the Raspberry Pi 5:
$> git clone https://github.com/hailo-ai/hailo-rpi5-examples.git
After the repository is downloaded, go to the directory:
$> cd hailo-rpi5-examples
In the directory you will find two files: setup_env.sh & requirements.txt.
The Hailo-8L requires specific versions of certain packages, and mixing these with your system-wide Python packages may cause conflicts. To avoid this, it is recommended to work in a dedicated virtual environment, where you can install the particular version of any package without conflict, even if a different version is already installed system-wide, because the environment is kept separate. Therefore, first enter the virtual environment provided with the Hailo-8L examples:
$> source setup_env.sh
Once you run this command, the prompt will change:
(venv_hailo_rpi5_examples) bera@raspberrypi:~/hailo-rpi5-examples $
Inside this virtual environment, run the next command.
(venv_hailo_rpi5_examples) bera@raspberrypi:~/hailo-rpi5-examples $ pip install -r requirements.txt
requirements.txt lists a few packages, which can also be installed individually with pip.
$> cat requirements.txt # Let's see what is in the requirements.txt file
numpy<2.0.0
setproctitle
opencv-python
$> pip install "numpy<2.0.0"
$> pip install setproctitle
$> pip install opencv-python
Now, to download the necessary Hailo models, run this script.
$> ./download_resources.sh
[Important: remember that you have to activate the virtual environment before installing these pip modules.]
The following commands install the additional “Hailo Applications Infrastructure” on the Raspi-5, part of which is optional for our code.
$> git clone https://github.com/hailo-ai/hailo-apps-infra.git
$> pip install --force-reinstall -v -e . [optional]
The git command gives you more Python code to play with. The last command is entirely optional; you may run it or leave it, as at this point we already have all the required resources. Note also that all of this software is open source.
At this point, you have installed all the software needed for this project. What remains is to play with the Hailo-8L, try out new Python code, and from there build our final code.
From ~/hailo-rpi5-examples/, run the virtual environment command (source setup_env.sh), then do $> cd hailo-apps-infra-main/hailo_apps_infra and you will reach the directory where we are going to create our code:
(venv_hailo_rpi5_examples) bera@raspberrypi:~/hailo-rpi5-examples/hailo-apps-infra-main/hailo_apps_infra $
At this point, sample Python code is already available in the basic_pipelines sub-directory (e.g. pipelines.py & pose_estimation.py). These scripts can take their input from resources such as *.mp4 files, the rpicam or a USB camera, like so:
$> python basic_pipelines/detection_simple.py --input rpicam [data from camera]
$> python basic_pipelines/pose_estimation.py --input test.mp4 [data from video file]
$> python basic_pipelines/detection_simple.py --input /dev/video0 [data from USB cam]
$> python basic_pipelines/detection.py --help [Help for this code]
fall_led_pic.py is the core script responsible for detecting falls. It takes input from either a recorded video file or a live feed via the Raspberry Pi camera or a USB camera. Upon detecting a fall, the script activates an alert using GPIO-controlled LEDs:
• ledg (green LED) indicates an OK condition — it remains ON when no fall is detected.
• ledr (red LED) turns ON when a fall is identified, signalling an emergency.
This LED behavior is especially useful in real-time environments like shop floors or conveyor systems, where fall events are critical and must not go unnoticed. In such cases, ledr stays ON as long as the fall condition persists. During testing on pre-recorded videos, however, the fall status may change quickly from frame to frame. The script also performs two key logging actions (a sketch of the LED and logging handling follows the list):
• (A) Appends a log entry with the detected object’s ID and timestamp to a text file.
• (B) Captures & annotates a snapshot of the fall moment and saves it in save_log.
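As an illustration, here is a minimal sketch of the LED and logging side of this behaviour using the gpiozero library; the GPIO pin numbers and the log file name are assumptions for this sketch, not necessarily those used in fall_led_pic.py:
from datetime import datetime
from gpiozero import LED

ledg = LED(17)   # green "all OK" LED  (pin number assumed for this sketch)
ledr = LED(27)   # red "fall" LED      (pin number assumed for this sketch)

def signal_ok():
    ledr.off()
    ledg.on()

def signal_fall(object_id, log_file="save_log/fall_log.txt"):
    ledg.off()
    ledr.on()
    timestamp = datetime.now().strftime("%Y-%m-%d_%H-%M-%S")
    with open(log_file, "a") as f:                        # (A) append log entry
        f.write(f"[{timestamp}] fall detected, id {object_id}\n")
    return timestamp                                      # reused to name the snapshot (B)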
Now let’s break down the code. At the top, a few standard libraries are imported; two important imports are shown below.
from hailo_apps_infra.pose_estimation_pipeline import GStreamerPoseEstimationApp
… This import loads & applies the pose estimation model, which is pre-compiled into the .hef format (used internally by Hailo). It also defines how the display output will be rendered after processing.
from hailo_apps_infra.hailo_rpi_common import (
get_caps_from_pad,
get_numpy_from_buffer,
app_callback_class,
)
get_caps_from_pad: part of the Hailo Raspberry Pi Python infrastructure kit. It extracts video format information (resolution and pixel format) from a GStreamer pad so that each incoming video frame can be decoded and processed.
get_numpy_from_buffer(buffer, format, width, height): Converts incoming video frame buffer into a NumPy array (used for OpenCV processing). This bridges the gap between GStreamer video handling and Python-based frame analysis.
app_callback_class: the base utility class used by the app_callback of the code. It tracks runtime information across frames during GStreamer pipeline execution and manages frame-related state such as the frame count, the current frame buffer, and control flags (e.g. use_frame).
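Putting these pieces together, a pose estimation pipeline with a custom callback typically looks like the sketch below, which follows the general pattern of the hailo-rpi5-examples scripts; helper and attribute names such as increment(), use_frame and hailo.get_roi_from_buffer() are taken from those examples and may differ slightly in your installed version:
import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst
import hailo
from hailo_apps_infra.hailo_rpi_common import (
    get_caps_from_pad,
    get_numpy_from_buffer,
    app_callback_class,
)
from hailo_apps_infra.pose_estimation_pipeline import GStreamerPoseEstimationApp

class user_app_callback_class(app_callback_class):
    pass   # add any per-run state (counters, last fall time, ...) here

def app_callback(pad, info, user_data):
    buffer = info.get_buffer()
    if buffer is None:
        return Gst.PadProbeReturn.OK
    user_data.increment()                          # frame counter from the base class
    fmt, width, height = get_caps_from_pad(pad)    # video format of this pad
    if user_data.use_frame and fmt is not None:
        frame = get_numpy_from_buffer(buffer, fmt, width, height)   # RGB NumPy frame
        # frame is now available for OpenCV processing / saving the fall snapshot
    roi = hailo.get_roi_from_buffer(buffer)        # inference results attached by Hailo
    detections = roi.get_objects_typed(hailo.HAILO_DETECTION)
    # ... build keypoints_dict per detected person and call is_fall_detected() here ...
    return Gst.PadProbeReturn.OK

if __name__ == "__main__":
    user_data = user_app_callback_class()
    app = GStreamerPoseEstimationApp(app_callback, user_data)
    app.run()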
Fall detection process: is_fall_detected(keypoints_dict, threshold=0.5, y_range=30): This is where the fall is detected using the pose geometry of the human body. The body pose is described by 17 keypoints; when a body falls, these 17 points move into characteristic positions, and we have defined two fall conditions based on them.
• First logic: It groups keypoints into regions: head, shoulders, hips, knees, ankles. For each region, it calculates the average Y-position of the available, confident keypoints and then compares these Y-values. If at least 3 regions exist and any 2 of them are within y_range (~30 pixels) of each other vertically, the body is considered “flat.” If this flat layout is detected → flat_condition_met = True.
• Second logic: It finds the bounding box (min/max X and Y) of the keypoints and computes its width and height. If height <= width, the posture is likely horizontal (lying down). If this aspect ratio is met, it suggests lying → bbox_condition_met = True.
Finally, inside is_fall_detected(keypoints_dict), the two conditions are checked; if they are satisfied, the text file is updated, the fall picture is taken and saved, and the GPIOs are switched on & off accordingly.
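A simplified sketch of these two checks is shown below; it assumes keypoints_dict maps the standard 17 COCO keypoint names to (x, y, confidence) tuples, and it leaves out the logging, snapshot and GPIO handling that the real script performs on top:
REGIONS = {
    "head":      ["nose", "left_eye", "right_eye", "left_ear", "right_ear"],
    "shoulders": ["left_shoulder", "right_shoulder"],
    "hips":      ["left_hip", "right_hip"],
    "knees":     ["left_knee", "right_knee"],
    "ankles":    ["left_ankle", "right_ankle"],
}

def is_fall_detected(keypoints_dict, threshold=0.5, y_range=30):
    # keep only keypoints whose confidence passes the threshold
    good = {k: (x, y) for k, (x, y, c) in keypoints_dict.items() if c >= threshold}
    if not good:
        return False

    # First logic: average Y per region, then look for regions at nearly the same height
    region_y = []
    for names in REGIONS.values():
        ys = [good[n][1] for n in names if n in good]
        if ys:
            region_y.append(sum(ys) / len(ys))
    flat_condition_met = False
    if len(region_y) >= 3:
        for i in range(len(region_y)):
            for j in range(i + 1, len(region_y)):
                if abs(region_y[i] - region_y[j]) <= y_range:
                    flat_condition_met = True

    # Second logic: a bounding box wider than it is tall suggests a lying posture
    xs = [p[0] for p in good.values()]
    ys = [p[1] for p in good.values()]
    bbox_condition_met = (max(ys) - min(ys)) <= (max(xs) - min(xs))

    # the actual script decides how to combine the two conditions
    return flat_condition_met and bbox_condition_met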
The saved image can be black & white, full colour, or colour at a reduced size. Just pick your choice, remove the comment mark, and enable that line:
# Save resized image
success = cv2.imwrite(image_path, resized_frame) # color half frame size
#success = cv2.imwrite(image_path, frame_bgr) # color picture
#success = cv2.imwrite(image_path, frame) # B & W picture
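For reference, one plausible way to obtain frame_bgr and resized_frame from the RGB NumPy frame delivered by get_numpy_from_buffer (a sketch; the actual script may derive them differently):
import cv2

frame_bgr = cv2.cvtColor(frame, cv2.COLOR_RGB2BGR)    # colour frame in OpenCV's BGR order
resized_frame = cv2.resize(frame_bgr, (frame_bgr.shape[1] // 2, frame_bgr.shape[0] // 2))  # half size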
Usage of the fall detection project: The combination of the Raspberry Pi 5 and the Hailo-8L AI HAT, powered by a reliable 5V, 4A to 5A supply, is well suited to stationary or embedded applications requiring real-time AI inference with fall detection in an industrial setup. This setup can be leveraged in a plethora of scenarios:
1. Industrial & Workplace Safety:
• Factory Floor Monitoring: Detect worker falls near heavy machinery or hazardous zones to trigger immediate alerts or shutdowns.
• Warehouse Safety: Identify accidental slips or collapses among forklift operators or stock handlers to improve response time.
• Power Plant & Utility Sites: Monitor lone technicians working in remote or dangerous areas for emergencies. These areas include the Coal handling plant, DM water plant, Raw Water treatment plant, Boiler drum floor, Boiler firing floor, Sky climber area etc.
2. Healthcare & Assisted Living:
• Elderly Care Facilities: Automatically detect falls in senior living homes or hospital wards, especially during unattended hours.
• In-Home Patient Monitoring: Integrate with smart home systems to monitor elderly or recovering patients and notify caregivers instantly.
3. Security & Surveillance:
• Remote Surveillance Posts: Use AI to monitor guards or personnel stationed at isolated checkpoints or entry zones.
• Data Centers or Control Rooms: Alert authorities if a staff member collapses during overnight shifts or during emergencies.
4. Construction & Infrastructure:
• High-Risk Construction Sites: Deploy for monitoring workers at heights or on scaffolding for instant fall detection, or underground miners while digging.
• Bridge or Tunnel Maintenance: Monitor the safety of inspectors in low-access, dangerous zones.
5. Embedded & Edge AI Applications:
• Smart Gate Control Systems: Automatically stop gate movement or machinery if a fall is detected in the operational zone.
• Robotics & Automation Safety Nets: Equip robots with a visual safety system to detect and respond to human falls nearby.
Prototype:
Some fall moments extracted by the code:
[see pictures]
Log entry view:
[2025-05-24_22-22-14]