2012-08-07

“Facial Expression Analysis Using a Sparse Representation Based Space Model” – accepted to ICSP 2012


Facial Expression Analysis Using a Sparse Representation Based Space Model

Weifeng Liu, Caifeng Song, Yanjiang Wang
College of Information and Control Engineering, China University of Petroleum (East China), Qingdao, P.R. China
Abstract— With the development of information technologies, facial expression analysis is becoming more and more essential to human computer interaction (HCI). A natural way to analyze facial expressions is derived from the study of human emotion, which is regarded as the intrinsic origin of facial expression. Another issue for facial expression analysis is to extract substantial facial features that correspond with the human visual perception system. Based on these observations, we present a sparse representation based space model for facial expression analysis which applies Gabor filters to extract facial features. The sparse representation based facial expression space model is induced from the human emotion space and can therefore describe mixed facial expressions, which are common in daily life. Experiments on the JAFFE database demonstrate the validity of the proposed facial expression space model.
Keywords-facial expression analysis; space model; sparse representation
Gabor features used in the paper can be found here.
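For readers who want a concrete picture of the sparse representation idea, the sketch below codes a test Gabor feature vector over a dictionary of training feature vectors and groups the resulting l1 coefficients by expression class. This is only an illustrative approximation, not the exact space model of the paper; the Lasso penalty, the class count, and all names are our own assumptions.

```python
# Illustrative sketch (NOT the paper's exact space model): a test Gabor
# feature vector y is coded sparsely over a dictionary D whose columns are
# training feature vectors, and the l1 coefficients are grouped by expression
# class to give a soft "mixture" description of the expression.
import numpy as np
from sklearn.linear_model import Lasso

def expression_mixture(D, labels, y, n_classes=6, alpha=0.01):
    """D: (d, n) dictionary of training features (one column per training image),
    labels: (n,) expression class ids in 1..n_classes, y: (d,) test feature.
    Returns the per-class share of the sparse coefficient energy."""
    coder = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
    coder.fit(D, y)                      # solves min ||y - D x||^2 + alpha * ||x||_1
    x = coder.coef_
    energy = np.array([np.abs(x[labels == c]).sum() for c in range(1, n_classes + 1)])
    return energy / energy.sum() if energy.sum() > 0 else energy

# Toy usage with random data standing in for Gabor features:
# D = np.random.randn(4880, 180); labels = np.random.randint(1, 7, 180)
# y = np.random.randn(4880); print(expression_mixture(D, labels, y))
```

Grouping the coefficient energy per class is what allows a single test image to be read as a mixture of the basic expressions rather than a single hard label.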

JAFFE Gabor Features

The JAFFE database contains 213 images of 7 facial expressions (6 basic facial expressions + 1 neutral) posed by 10 Japanese female models. Each image has been rated on 6 emotion adjectives by 60 Japanese subjects. The original dataset can be found here.
In this processed dataset, the Gabor features are extracted at 122 selected facial points on JAFFE images that have been normalized and cropped to 120×96 pixels, with the eyes and the nose of all images aligned on the same horizontal lines, respectively.
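As a rough illustration of this extraction step (the parameter values are assumptions, not the settings used for the released features), the sketch below builds a bank of complex Gabor kernels at five scales and eight orientations, filters a 120×96 face image, and samples the response amplitude at 122 given (row, col) point locations.

```python
# Rough sketch of amplitude Gabor feature extraction at selected facial points.
# Kernel size, sigma/lambda progression, and function names are assumptions.
import numpy as np
from scipy.signal import fftconvolve

def gabor_kernel(sigma, theta, lam, gamma=0.5, size=31):
    """Complex Gabor kernel: Gaussian envelope times a complex sinusoid."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr ** 2 + (gamma * yr) ** 2) / (2 * sigma ** 2))
    return envelope * np.exp(1j * 2 * np.pi * xr / lam)

def gabor_amplitude_features(image, points, n_scales=5, n_orients=8):
    """Concatenate |Gabor response| at each facial point over all filters."""
    feats = []
    for s in range(n_scales):
        sigma = 2.0 * 2 ** (s / 2.0)        # assumed scale progression
        lam = sigma * np.pi / 2.0           # assumed wavelength per scale
        for o in range(n_orients):
            theta = o * np.pi / n_orients
            amp = np.abs(fftconvolve(image, gabor_kernel(sigma, theta, lam), mode="same"))
            feats.append(amp[points[:, 0], points[:, 1]])   # sample at (row, col) points
    return np.concatenate(feats)            # 122 points x 5 scales x 8 orientations = 4880 values

# Toy usage: img = np.random.rand(120, 96); pts = np.random.randint(0, 96, (122, 2))
# fv = gabor_amplitude_features(img, pts)
```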
The Gabor data file is a .mat file that can be easily loaded in MATLAB. It contains the variables 'per', 'exp', 'pos', 'gab1' and 'gab2', defined as follows (a short loading example is given after the list).
  • per: the label of each person (1-KA, 2-KL, 3-KM, 4-KR, 5-MK, 6-NA, 7-NM, 8-TM, 9-UY, 10-YM)
  • exp: the label of each expression (1-AN, 2-DI, 3-FE, 4-HA, 5-NE, 6-SA, 7-SU)
  • pos: the positions of the selected facial points.
  • gab1: the Gabor features at the extracted points; each row represents the Gabor feature vector of one expression image. (The amplitude of the Gabor response is used; five scales and eight orientations of Gabor kernels are used.)
  • gab2: the Gabor features of the extracted points after normalizing each vector. (The amplitude of the Gabor response is used.)
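The .mat file loads directly in MATLAB; for completeness, the sketch below shows how the same variables could be read in Python with scipy.io.loadmat. The file name is a placeholder, and the shape comment reflects our reading of the description above.

```python
# Loading the released Gabor data outside MATLAB with scipy.io.loadmat.
# 'jaffe_gabor.mat' is a placeholder file name, not the actual download name.
from scipy.io import loadmat

data = loadmat("jaffe_gabor.mat")   # placeholder file name

per  = data["per"].ravel()   # person id per image, 1..10 (KA..YM)
expr = data["exp"].ravel()   # expression id per image, 1..7 (AN..SU)
pos  = data["pos"]           # coordinates of the 122 selected facial points
gab1 = data["gab1"]          # raw Gabor amplitude features, one row per image
gab2 = data["gab2"]          # the same features with each vector normalized

print(gab1.shape)            # expected 213 rows, one per JAFFE image
```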
We would appreciate it very much if you cite the following papers when you use the processed dataset.
  1. Weifeng Liu, Caifeng Song, Yanjiang Wang, "Facial Expression Recognition Based on Discriminative Dictionary Learning," ICPR 2012.
  2. Weifeng Liu, Caifeng Song, Yanjiang Wang, Lu Jia, "Facial Expression Recognition Based on Gabor Features and Sparse Representation," ICARCV 2012.