Document Type

Article

Publication Date

Spring 5-24-2020

Abstract

Micro-expressions are brief, subtle changes in facial expression associated with emotional responses, and researchers have worked for decades on recognizing them automatically. As convolutional neural networks have become widely used across computer vision, in areas such as image recognition and motion detection, they have also drawn researchers' attention for micro-expression recognition. However, no approach has yet achieved an accuracy high enough for practical use. One of the biggest obstacles is the limited number of available datasets. The most popular datasets are SMIC, CASME, CASMEII, and SAMM. Most groups have worked on these datasets separately, but few have tried to combine them. In our approach, we combined the datasets and extracted their shared features. If new datasets annotated under the same coding scheme (FACS) are created in the future, they can easily be incorporated using our approach. In addition to this novel approach to combining datasets, we use a new way of extracting features in place of the Local Binary Pattern from Three Orthogonal Planes (LBP-TOP). Specifically, we create shift matrices, which capture the changing pattern of pixels, to preserve the spatial information of the videos. Our highest accuracy across 100 experiments was 88 percent, but we report the median accuracy of 72.5 percent, a more convincing figure though slightly lower than the best published result. Our F1 score of 72.3 percent, however, exceeds the best published result. Our paper presents an extendable approach to micro-expression recognition that should increase in accuracy as more datasets become available.
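The abstract describes shift matrices as the changing pattern of pixels across frames, used in place of LBP-TOP histograms to preserve spatial layout. The paper itself defines the exact construction; the sketch below is only one plausible interpretation (signed frame-to-frame differencing over grayscale frames), with the function name and array shapes being our assumptions, not the authors' implementation.

```python
import numpy as np

def shift_matrices(frames):
    """Sketch of a shift-matrix feature: per-pixel change between
    consecutive frames.

    frames: array of shape (T, H, W), grayscale video.
    Returns an array of shape (T-1, H, W) of signed pixel changes.
    Unlike histogram-based LBP-TOP features, each output frame keeps
    the spatial (H, W) layout of the original video.
    """
    frames = np.asarray(frames, dtype=np.float32)
    return frames[1:] - frames[:-1]

# Tiny example: a 3-frame, 2x2 "video" in which one pixel brightens.
video = [
    [[0, 0], [0, 0]],
    [[5, 0], [0, 0]],
    [[9, 0], [0, 0]],
]
shifts = shift_matrices(video)
# shifts[0] records the change from frame 0 to 1, shifts[1] from 1 to 2.
```

A network trained on such matrices can learn where on the face the motion occurs, which a pooled histogram descriptor like LBP-TOP discards.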

Comments

Many thanks to everyone, especially Dr. Jason Yoder, who helped me discover my interest in this subject and gain this precious research experience.

Researchers have long tried to imitate human understanding of the world in areas including computer vision and natural language processing. We humans can infer others' feelings from their facial expressions. For example, if people are smiling, there is a high probability that they are happy. I therefore wondered whether we could teach a machine to recognize people's emotions from their facial expressions, and that is how I became interested in micro-expression recognition. After reading through the literature, I found that most researchers tested their approaches on the existing micro-expression datasets separately. However, each dataset is so small that it is difficult to extract useful features from any one of them alone. I therefore decided to pursue an extendable way of combining the datasets and to set a baseline for future work in this area as more datasets become available.
