
Introduction of Driver Monitoring System DMS Algorithm Logic

Date:2023-10-28 Author: Number:1186


A driver monitoring system (DMS) is generally intended for L2-L3 automated driving systems. At L4 it becomes unnecessary, unless the system is still in a testing phase that requires a safety operator.

The purpose of monitoring is to detect driver distraction, fatigue, or drowsiness, and even unexpected situations in which the driver is unable to drive, such as cheating the driver-assistance system by wedging a bottle of mineral water into the steering wheel instead of keeping hands on it, or arguing and fighting with passengers. In addition, during the development phase of autonomous driving, monitoring drivers provides first-hand data on driving behavior, which can even be fed into simulation systems.

Non-intrusive methods are preferred for monitoring, and vision-based systems are the most attractive among them. The main visual cues include facial, hand, and body features. Many detection systems rely on a single visual cue, which makes them fragile: they are easily disturbed by occlusion or lighting changes. Combining multiple visual cues is therefore both the key and the challenge.

A driver facial monitoring system is a real-time system that analyzes the driver's facial images to infer the driver's physical and mental state. The state can be inferred from cues such as eyelid closure, blinking, gaze direction, yawning, and head movement. Such systems fall into two broad categories:

1) Systems that detect the driver's state from the eye region only;

2) Systems that also use other regions of the face and head.

The following diagram shows a driver facial monitoring pipeline: detect the face, the eyes, and other facial features; track their changes over time; extract symptoms; and finally detect fatigue and distraction. The main challenges of such a system are the following two points.

[Figure: driver facial monitoring system pipeline]

(1) "How to measure fatigue?"

The first challenge is how to accurately define and measure fatigue. Fatigue is related to body temperature, skin resistance, eye movement, respiratory rate, heart rate, and brain activity; the first and most important signs of fatigue appear in the eyes.
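As a concrete illustration of an eye-based fatigue measure, PERCLOS (the percentage of time the eyelids are closed) is widely used in driver fatigue research. The sketch below is a minimal illustration, assuming a hypothetical upstream eyelid detector that outputs a per-frame openness ratio in [0, 1]:

```python
def perclos(eye_openness, closed_thresh=0.2):
    """PERCLOS: fraction of frames in which the eye is considered closed.

    eye_openness: per-frame eyelid openness ratios in [0, 1], as produced
    by a (hypothetical) upstream eyelid detector.
    closed_thresh: openness below this counts as "closed" (illustrative value).
    """
    if not eye_openness:
        return 0.0
    closed = sum(1 for o in eye_openness if o < closed_thresh)
    return closed / len(eye_openness)

# Example: 3 of 10 frames have the eye closed -> PERCLOS = 0.3
openness = [0.8, 0.7, 0.1, 0.05, 0.9, 0.8, 0.1, 0.85, 0.9, 0.75]
score = perclos(openness)
```

In practice PERCLOS is computed over a sliding window (e.g. one minute), and a sustained high value triggers a fatigue warning.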

(2) "How to measure attention?"

The second challenge is measuring the driver's attention to the road. Attention can be estimated from the driver's head pose and gaze direction.
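A minimal sketch of this idea, assuming a hypothetical upstream head-pose estimator that outputs yaw and pitch angles in degrees; the cone thresholds are illustrative, not calibrated values:

```python
def gaze_on_road(yaw_deg, pitch_deg, yaw_limit=20.0, pitch_limit=15.0):
    """Coarse attention check: head pose inside a forward-facing cone.

    The angle limits are illustrative assumptions, not calibrated values.
    """
    return abs(yaw_deg) <= yaw_limit and abs(pitch_deg) <= pitch_limit

def longest_off_road(poses, dt=1.0 / 30.0):
    """Longest continuous off-road interval (seconds) in a (yaw, pitch)
    sequence sampled at 1/dt frames per second."""
    longest = run = 0.0
    for yaw, pitch in poses:
        run = 0.0 if gaze_on_road(yaw, pitch) else run + dt
        longest = max(longest, run)
    return longest

# Example: 30 frames looking far left at 30 fps, then 30 frames forward
poses = [(45.0, 0.0)] * 30 + [(0.0, 0.0)] * 30
duration = longest_off_road(poses)
```

A real system would fuse the head-pose cone with an eye-gaze estimate, since the eyes can look off-road while the head stays forward.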

Face detection can draw on general object detection methods, and deep learning has shown its strength in this field as well. Face detection is a long-studied problem, and its challenges include the following:

— In-plane rotation;

— Out-of-plane rotation;

— The presence of cosmetics, beards, and glasses;

— Expressions (happy, crying, etc.);

— Lighting conditions;

— Facial occlusion;

— Real-time processing requirements.

The eye region is usually the first source for driver symptom extraction, since the most important psychological activity is reflected in eye activity.

There are two main categories of eye detection methods:

1) Methods based on infrared spectral imaging;

2) Methods based on visible-light imaging.

In addition to the eyes, other facial components can also be detected: mouth, nose, and salient points on the face.

Facial tracking is the main means of analyzing the driver's psychological activity. This tracking task is similar to general single target tracking, and the main challenges include:

— The mapping from three-dimensional space to two-dimensional space causes some information to be lost;

— Having complex shapes or movements;

— Partial occlusion;

— Changes in ambient lighting;

— Real-time tracking requirements.

Symptom extraction related to fatigue, distraction, and drowsiness includes:

1) Symptoms from the eye region: eye closure, distance between the eyelids, blink rate, gaze direction, and saccadic movements;

2) Symptoms from the mouth region: open/closed state;

3) Symptoms from the head: nodding and a fixed, unchanging head pose;

4) Symptoms from the face as a whole: mainly facial expressions.
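For the eyelid-distance and blink symptoms above, a common concrete measure is the eye aspect ratio (EAR) of Soukupová and Čech, computed from six eye landmarks. This is a sketch assuming the standard six-point landmark ordering (corner, two upper-lid points, corner, two lower-lid points):

```python
import math

def eye_aspect_ratio(p):
    """EAR from six eye landmarks: p[0] and p[3] are the horizontal
    corners; p[1], p[2] the upper lid; p[5], p[4] the lower lid.
    A small EAR indicates a closed eye."""
    d = math.dist
    return (d(p[1], p[5]) + d(p[2], p[4])) / (2.0 * d(p[0], p[3]))

def count_blinks(ear_series, thresh=0.2):
    """Count open->closed transitions in a per-frame EAR sequence.
    The threshold is an illustrative assumption."""
    blinks, closed = 0, False
    for e in ear_series:
        if e < thresh and not closed:
            blinks += 1
        closed = e < thresh
    return blinks

# Toy landmark sets: a wide-open eye and a nearly closed one
open_eye = [(0, 0), (1, 1), (2, 1), (3, 0), (2, -1), (1, -1)]
closed_eye = [(0, 0), (1, 0.1), (2, 0.1), (3, 0), (2, -0.1), (1, -0.1)]
```

Tracking the EAR over time yields both blink rate (fast repeated dips) and sustained closure (a long low plateau, feeding a PERCLOS-style measure).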

Here are a few examples:

The figure shows a driver monitoring system based on deep neural networks (DNNs).

[Figure: DNN-based driver monitoring system]

The structure of the detection network based on the face, both eyes, and mouth regions is as follows:

[Figure: detection network based on the face, both eyes, and mouth regions]

The detection network structure based on monocular (left eye) region and mouth region is as follows.

[Figure: detection network based on the left-eye and mouth regions]


The figure shows a facial expression recognition system based on deep learning: the input image is used to detect the face and its features, spatiotemporal features are extracted from the facial components, and a pretrained classifier determines the expression (the images are taken from the CK+ dataset).

[Figure: deep-learning-based facial expression recognition pipeline]

The entire deep learning model combines a CNN and an LSTM, as shown in the following figure.

[Figure: combined CNN + LSTM model]
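The CNN + LSTM pattern can be sketched in plain numpy: a small convolutional feature extractor runs on every frame, and an LSTM cell aggregates the per-frame features over time into a sequence-level prediction. Everything below (sizes, random weights, the two-class head) is illustrative, not the paper's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv_relu_pool(frame, kernel, pool=4):
    """Toy per-frame 'CNN': one valid 2D convolution + ReLU +
    average pooling, flattened into a feature vector."""
    kh, kw = kernel.shape
    h, w = frame.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(frame[i:i + kh, j:j + kw] * kernel)
    out = np.maximum(out, 0.0)  # ReLU
    oh, ow = out.shape[0] // pool, out.shape[1] // pool
    pooled = out[:oh * pool, :ow * pool].reshape(oh, pool, ow, pool).mean(axis=(1, 3))
    return pooled.ravel()

def lstm_step(x, h, c, W, U, b):
    """One LSTM cell update; the four gates are stacked as [i, f, o, g]."""
    n = h.size
    z = W @ x + U @ h + b
    sig = lambda v: 1.0 / (1.0 + np.exp(-v))
    i, f, o = sig(z[:n]), sig(z[n:2 * n]), sig(z[2 * n:3 * n])
    g = np.tanh(z[3 * n:])
    c = f * c + i * g          # update cell state
    h = o * np.tanh(c)         # update hidden state
    return h, c

# Illustrative sizes: 16 frames of 32x32 grayscale; 5x5 kernel gives a
# 28x28 map, pooled 4x4 -> 7x7 = 49 features; 8-dim LSTM; 2 classes.
frames = rng.standard_normal((16, 32, 32))
kernel = rng.standard_normal((5, 5))
feat_dim, hidden, classes = 49, 8, 2
W = rng.standard_normal((4 * hidden, feat_dim)) * 0.1
U = rng.standard_normal((4 * hidden, hidden)) * 0.1
b = np.zeros(4 * hidden)
V = rng.standard_normal((classes, hidden)) * 0.1

h, c = np.zeros(hidden), np.zeros(hidden)
for frame in frames:
    h, c = lstm_step(conv_relu_pool(frame, kernel), h, c, W, U, b)

logits = V @ h
probs = np.exp(logits) / np.exp(logits).sum()  # softmax over classes
```

The design point is the division of labor: the CNN captures spatial appearance within a frame (e.g. a face configuration), while the LSTM captures temporal dynamics across frames (e.g. a drooping-then-recovering expression).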


The following figure shows a system that identifies driver distraction symptoms from body posture. The classes include: drinking, adjusting the radio, driving in the normal position, fixing hair or applying makeup, reaching behind, talking to a passenger, phoning with the left hand, phoning with the right hand, texting with the left hand, and texting with the right hand.

[Figure: distraction classes recognized from body posture]


The system's algorithm pipeline is shown in the figure and includes a face detector, a hand detector, and skin-region segmentation. For each output image (i.e. skin, face, hand), AlexNet and InceptionV3 networks are trained (5 AlexNet and 5 InceptionV3 models in total), and the final recognition is a weighted combination of their outputs.
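The weighted combination at the end is essentially weighted soft voting over the per-model class probabilities. A minimal sketch, with hypothetical weights and probabilities standing in for the trained models:

```python
def weighted_soft_vote(model_probs, weights):
    """Fuse per-model class-probability vectors by weighted averaging.

    model_probs: one probability vector per model (same class order).
    weights: one non-negative weight per model (e.g. validation accuracy).
    """
    assert len(model_probs) == len(weights) and weights
    n_classes = len(model_probs[0])
    fused = [0.0] * n_classes
    for probs, w in zip(model_probs, weights):
        for k, p in enumerate(probs):
            fused[k] += w * p
    total = sum(weights)
    return [p / total for p in fused]

# Two hypothetical models disagreeing on a 3-class problem
fused = weighted_soft_vote([[0.7, 0.2, 0.1], [0.2, 0.6, 0.2]], [0.6, 0.4])
# fused ≈ [0.50, 0.36, 0.14] -> class 0 wins
```

In the paper's setup the ten networks (5 AlexNet + 5 InceptionV3, over skin, face, and hand crops) would each contribute one probability vector; the weights are typically learned or set from per-model validation performance.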

[Figure: algorithm pipeline with face detector, hand detector, and skin-region segmentation]
