Current approaches to facial expression classification employ a variety of expression classes and different preprocessing steps, making comparison of results difficult. To outline the effects of these variations, we explore several image and action preprocessing steps using the discrete expressions happy, sad, surprised, fearful, angry, disgusted and neutral, with a dataset aligned and normalised by our proposed face model. Each of the preprocessing steps is organised under four prominent approaches: holistic, holistic action, component and component action. These are compared using a modified multiclass Support Vector Machine (SVM) with pairwise adaptive model parameters. We illustrate that including the neutral expression has a noticeable impact on the results and suggest that it should be included in future research in this area. We also show that results can be improved through innovative use of image and action preprocessing steps. Our best correct classification rate was 98.33%, obtained with 10-fold cross-validation and a component action approach.
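The abstract does not spell out the classifier in code; as one illustration of the general idea only, the following is a minimal sketch of a one-vs-one (pairwise) multiclass SVM in which each binary classifier is tuned with its own kernel parameters, i.e. "pairwise adaptive" settings. It assumes scikit-learn and NumPy, a hypothetical parameter grid, and caller-supplied feature matrix `X` and labels `y`; it is not the authors' implementation.

```python
# Sketch of a pairwise (one-vs-one) multiclass SVM where each class pair gets its
# own tuned (C, gamma). Hypothetical grid and data; not the paper's exact method.
from itertools import combinations

import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC


def fit_pairwise_adaptive_svm(X, y, param_grid=None, cv=5):
    """Train one RBF-kernel SVM per class pair, tuning C and gamma per pair."""
    if param_grid is None:
        param_grid = {"C": [0.1, 1, 10, 100], "gamma": ["scale", 0.01, 0.1]}
    models = {}
    for a, b in combinations(np.unique(y), 2):
        mask = (y == a) | (y == b)                      # samples of this pair only
        search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=cv)
        search.fit(X[mask], y[mask])                    # per-pair parameter search
        models[(a, b)] = search.best_estimator_
    return models


def predict_pairwise_adaptive_svm(models, X, classes):
    """Predict by majority vote over all pairwise classifiers."""
    votes = np.zeros((X.shape[0], len(classes)), dtype=int)
    index = {c: i for i, c in enumerate(classes)}
    for model in models.values():
        for row, label in enumerate(model.predict(X)):
            votes[row, index[label]] += 1
    return np.array(classes)[votes.argmax(axis=1)]
```

With the seven expression classes used in the paper, this decomposition trains 21 pairwise classifiers; overall accuracy would then be estimated, as in the abstract, with 10-fold cross-validation over the whole pipeline.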
Source title: Proceedings of the IEEE World Congress on Computational Intelligence 2010
Name of conference: 2010 IEEE World Congress on Computational Intelligence (WCCI 2010)
Location: Barcelona, Spain
Start date: 2010-07-18
End date: 2010-07-23
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Place published: Piscataway, NJ
Language: English
College/Research Centre: Faculty of Engineering and Built Environment
School: School of Electrical Engineering and Computer Science