Simplified Distracted Driving Detection with Facial Keypoints

Location

Session 1 (Room 1308)

Session Format

Oral Presentation

Your Campus

Statesboro Campus - Henderson Library, April 20th

Academic Unit

Department of Electrical and Computer Engineering

Co-Presenters and Faculty Mentors or Advisors

Faculty Mentor: Dr. Rami Haddad

Abstract

According to the US National Highway Traffic Safety Administration, distracted driving was responsible for 3,142 fatalities in 2019. Distracted driving can be triggered by objects and people inside the car, the surrounding environment, and other unpredictable events. Therefore, it becomes necessary to develop a proactive approach to detect when a driver is not focusing on relevant objects and vehicles on the road. For this detection system to be feasible, it must be intuitive and non-invasive. Computer vision, commonly driven by deep learning, provides methods for computer systems to mimic human perception of digital imagery. Previous work in distracted driving detection with computer vision has primarily focused on classification of the entire image, which allows for detection based on body positions and objects in the frame. However, this approach does not fully isolate the human subject from the background and can produce false positives in certain situations. To improve upon these methods, an approach that better interprets body and head pose can be used. Keypoint detection is a type of computer vision model that plots points on prominent features of the human body using only a digital camera image. This allows deep learning programs to further isolate the human subject for a more accurate system. A rules-based algorithm with Euclidean distance normalization between facial keypoints was developed to determine whether the driver's focus deviates from looking forward while driving. The algorithm also incorporates steering angle to eliminate false positive detections when the driver looks left or right in legitimate turning situations. Future work will incorporate additional vehicle data, different camera types, and new forms of visual perception for increased robustness.
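
As a rough illustration of the approach described in the abstract (not the authors' implementation), the Python sketch below shows how facial keypoints from a pose estimator such as OpenPose could be normalized by an inter-keypoint Euclidean distance and compared against a simple rule, with the steering angle used to suppress detections during legitimate turns. The keypoint layout, function names, and thresholds are assumptions made for illustration only.

```python
# Hypothetical sketch of a rules-based gaze-deviation check using facial
# keypoints and steering angle. Keypoint names, thresholds, and the input
# format are illustrative assumptions, not the presented implementation.

import math


def euclidean(p, q):
    """Euclidean distance between two 2-D points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])


def is_looking_away(keypoints,
                    steering_angle_deg,
                    yaw_ratio_threshold=0.35,
                    turn_angle_threshold_deg=15.0):
    """Return True if the driver's head appears turned away from the road.

    The eye-to-nose distances are normalized by the inter-ocular distance so
    the rule is independent of how far the driver sits from the camera. A
    strongly asymmetric ratio suggests the head is rotated left or right.
    Detections are suppressed when the steering angle indicates a turn.
    """
    required = ("nose", "left_eye", "right_eye")
    if any(keypoints.get(name) is None for name in required):
        return False  # not enough keypoints to decide; stay conservative

    inter_ocular = euclidean(keypoints["left_eye"], keypoints["right_eye"])
    if inter_ocular < 1e-6:
        return False

    # Normalized horizontal asymmetry: near zero when facing forward, grows
    # as the head rotates and one eye-to-nose distance shrinks vs. the other.
    left_dist = euclidean(keypoints["left_eye"], keypoints["nose"]) / inter_ocular
    right_dist = euclidean(keypoints["right_eye"], keypoints["nose"]) / inter_ocular
    head_turned = abs(left_dist - right_dist) > yaw_ratio_threshold

    # If the vehicle is actually turning, looking left or right is expected.
    turning = abs(steering_angle_deg) > turn_angle_threshold_deg

    return head_turned and not turning


if __name__ == "__main__":
    # Example frame: keypoint name -> (x, y) pixel coordinates.
    frame_keypoints = {"nose": (300, 240),
                       "left_eye": (260, 225),
                       "right_eye": (330, 226)}
    print(is_looking_away(frame_keypoints, steering_angle_deg=2.0))
```

In practice such a rule would be evaluated per frame (or smoothed over a short window) on keypoints produced by the detection model, with thresholds tuned on recorded driving data.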

Program Description

A simplified distracted driving detection algorithm utilizing the OpenPose human keypoint detection model and Euclidean distance normalization. The algorithm interprets vehicle data for a more realistic and robust system.

Creative Commons License

This work is licensed under a Creative Commons Attribution 4.0 License.

Presentation Type and Release Option

Presentation (Open Access)

Start Date

4-20-2022 11:00 AM

End Date

4-20-2022 12:00 PM

