dc.description.abstract | Facial expression recognition has recently attracted considerable research effort. The goal of this thesis is to develop an automatic facial expression recognition system that performs human face detection, feature extraction, and facial expression recognition once a sequence of images has been acquired. By combining automatic human face detection, the localization of facial feature regions, and an optical flow tracking algorithm, we construct an automatic facial expression recognition system that achieves this goal.
Most traditional facial expression recognition systems first attempt to automatically track a set of facial feature points (e.g., the canthi, eyebrows, and mouth corners) and then recognize expressions from these extracted features. Experimental results have shown, however, that such feature points cannot always be obtained reliably because of image quality, illumination, and other disturbing factors. These factors introduce errors or bias, and compensating for them, when possible at all, costs considerable processing time. Although clearly extracted features do improve recognition performance, humans can also perceive changes of facial expression from slight muscle movements over the facial area. We therefore adopt an approach that places uniformly distributed feature points within specified feature regions and recognizes facial expressions from the motion of these points.
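The uniform placement of feature points inside a rectangular feature region can be sketched as follows. This is an illustrative helper only, not the thesis implementation: the function name, the (top, left, bottom, right) region convention, and the 4x7 grid size are all assumptions.

```python
import numpy as np

def uniform_feature_points(region, rows, cols):
    """Place rows*cols feature points on a uniform grid inside a
    rectangular feature region given as (top, left, bottom, right).
    Hypothetical helper: the thesis does not specify the exact grid
    layout used to place its 84 points."""
    top, left, bottom, right = region
    ys = np.linspace(top, bottom, rows)
    xs = np.linspace(left, right, cols)
    # Row-major enumeration of (y, x) grid coordinates.
    return [(float(y), float(x)) for y in ys for x in xs]

# Example: a 4x7 grid (28 points) inside an assumed mouth region.
points = uniform_feature_points((120, 60, 150, 130), rows=4, cols=7)
print(len(points))  # 28
```

Distributing points per region this way keeps the total point count fixed regardless of how precisely individual landmarks (e.g., mouth corners) can be localized.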
After a sequence of images is acquired, the first frame is used to perform human face detection, and three feature regions (the two eyes and the mouth) are located through their geometric ratio relationships. To increase the accuracy of locating these feature regions, Sobel edge detection combined with horizontal projection is applied. Once the three feature regions have been located, 84 feature points are uniformly distributed within them. An optical flow algorithm then tracks these 84 feature points through the subsequent frames of the image sequence, yielding 84 facial motion vectors, on which the facial expression recognition is based. The recognition procedure involves two stages. In the first stage, three multi-layer perceptrons (MLPs) are trained to recognize the action units in the eyebrow, eye, and mouth regions. In the second stage, five single-layer perceptrons recognize the facial expressions from the outputs of these three MLPs. Experiments were conducted to evaluate the performance of the proposed facial expression recognition system. | en_US |
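The Sobel-plus-horizontal-projection step above can be sketched with plain numpy. This is a minimal illustration under assumptions: the 3x3 Sobel kernels and row-sum projection are standard, but the synthetic image, thresholds, and function names are not from the thesis. Rows where the projection peaks are candidates for high-contrast features such as the eyes and mouth.

```python
import numpy as np

def sobel_magnitude(img):
    """Gradient magnitude from a direct 3x3 Sobel convolution
    (borders left at zero for simplicity)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    out = np.zeros((h, w), dtype=float)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            patch = img[y - 1:y + 2, x - 1:x + 2]
            out[y, x] = np.hypot(np.sum(patch * kx), np.sum(patch * ky))
    return out

def horizontal_projection(edge_img):
    """Sum edge magnitudes along each row; peaks indicate rows
    containing strong horizontal structure (eyes, mouth)."""
    return edge_img.sum(axis=1)

# Synthetic face-like image: two dark horizontal bars stand in for
# an eye band and a mouth band on a bright background.
img = np.full((40, 40), 200.0)
img[10:12, 5:35] = 30.0   # assumed "eye" band
img[28:30, 10:30] = 30.0  # assumed "mouth" band

proj = horizontal_projection(sobel_magnitude(img))
print(int(np.argmax(proj)))  # row with the strongest edge response
```

On a real face image the projection profile is noisier, which is presumably why the thesis combines it with the geometric ratio constraints rather than relying on peaks alone.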