Research Article | OPEN ACCESS
Object Analysis of Human Emotions by Contourlets and GLCM Features
R. Suresh and S. Audithan
Department of Computer Science and Engineering, PRIST University, Tanjore, Tamilnadu, India
Research Journal of Applied Sciences, Engineering and Technology 2014 7:856-862
Received: June 08, 2014 | Accepted: July 19, 2014 | Published: August 20, 2014
Abstract
Facial expression is one of the most significant ways for human beings to convey intention, emotion and other nonverbal messages. A computerized human emotion recognition system based on the contourlet transform is proposed. Seven kinds of human emotion, namely anger, fear, happiness, surprise, sadness, disgust and neutral, are considered in the facial images. Each emotional facial image is represented by the contourlet transform, which decomposes the image into directional sub-bands at multiple levels, and features are extracted from the resulting sub-bands and stored for further analysis. In addition, texture features from the Gray Level Co-occurrence Matrix (GLCM) are extracted and fused with the contourlet features to obtain higher recognition accuracy. A K Nearest Neighbor (KNN) classifier then assigns each input facial image to one of the seven analyzed expressions, achieving over 90% accuracy.
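As an illustration of the pipeline described above, the sketch below builds a fused feature vector and classifies it with KNN in Python. It is only an approximation under stated assumptions: a PyWavelets multi-level wavelet decomposition stands in for the contourlet transform (for which no widely available Python implementation exists), the GLCM texture properties come from scikit-image's graycomatrix/graycoprops (scikit-image >= 0.19), the input images are assumed to be 2-D grayscale NumPy arrays, and train_images, train_labels and test_images are hypothetical placeholders for the labelled facial images.

import numpy as np
import pywt
from skimage.feature import graycomatrix, graycoprops
from sklearn.neighbors import KNeighborsClassifier

def subband_features(gray_img, levels=2):
    # Multi-level decomposition (wavelet stand-in for the contourlet transform);
    # mean absolute value and standard deviation of each directional detail band
    # serve as the sub-band features.
    coeffs = pywt.wavedec2(gray_img, "db4", level=levels)
    feats = []
    for detail in coeffs[1:]:   # (horizontal, vertical, diagonal) bands per level
        for band in detail:
            feats += [float(np.mean(np.abs(band))), float(np.std(band))]
    return feats

def glcm_features(gray_img):
    # Haralick-style texture properties from a gray-level co-occurrence matrix.
    img8 = np.uint8(255 * (gray_img - gray_img.min()) / (np.ptp(gray_img) + 1e-9))
    glcm = graycomatrix(img8, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return [float(graycoprops(glcm, p).mean()) for p in props]

def fused_features(gray_img):
    # Fusion here is simple concatenation of the two feature sets.
    return np.array(subband_features(gray_img) + glcm_features(gray_img))

def classify(train_images, train_labels, test_images, k=3):
    # K Nearest Neighbor classification of the fused feature vectors into
    # one of the seven expression classes.
    X = np.stack([fused_features(im) for im in train_images])
    knn = KNeighborsClassifier(n_neighbors=k).fit(X, train_labels)
    return knn.predict(np.stack([fused_features(im) for im in test_images]))

The exact sub-band statistics, fusion rule and value of K used by the authors are not specified in the abstract, so those choices above are illustrative only.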
Keywords:
Contourlet transform, emotion recognition, facial expression, nearest neighbor classifier
Competing interests
The authors have no competing interests.
Open Access Policy
This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
ISSN (Online): 2040-7467
ISSN (Print): 2040-7459