
An Efficient Algorithm for Human Emotion Classification

D. Magdalene Delighta Angeline, G. Parthiban, K. Utchimakali, V. Gideon Augustin

Abstract


Eye detection is required in many applications, such as eye-gaze tracking, iris detection, video conferencing, auto-stereoscopic displays, face detection, and face recognition. This paper proposes a novel technique for eye detection using color and morphological image processing. Eye regions in an image are characterized by low illumination, high-density edges, and high contrast compared with other parts of the face. The proposed method assumes that a full frontal face image is available. First, the skin region is detected using a color-based training algorithm and a six-sigma technique operating on the RGB, HSV, and NTSC scales. Morphological processing follows, combining boundary-region detection with detection of the light-source reflection on the eye, commonly known as the eye dot. This yields a finite set of eye candidates, from which noise is subsequently removed. The technique is found to be highly efficient and accurate for detecting eyes in frontal face images.
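As an illustrative sketch only, not the authors' implementation, the pipeline described above might look as follows in Python with OpenCV. The color bounds, intensity thresholds, and blob-size limits are hypothetical placeholders rather than the trained six-sigma values the paper derives, and the helper name detect_eye_candidates is invented for this example.

import cv2
import numpy as np

def detect_eye_candidates(bgr):
    # Step 1: skin mask from colour thresholds in HSV (placeholder
    # bounds; the paper trains per-channel six-sigma limits on the
    # RGB, HSV, and NTSC scales).
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    skin = cv2.inRange(hsv, (0, 40, 60), (25, 180, 255))
    kernel = np.ones((5, 5), np.uint8)
    skin = cv2.morphologyEx(skin, cv2.MORPH_CLOSE, kernel, iterations=2)

    # Step 2: eye regions show up as dark, non-skin holes inside the
    # face (low illumination, high contrast relative to skin).
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    _, dark = cv2.threshold(gray, 80, 255, cv2.THRESH_BINARY_INV)
    holes = cv2.bitwise_and(dark, cv2.bitwise_not(skin))

    # Step 3: keep only candidates that contain a bright "eye dot"
    # (specular reflection of the light source on the cornea);
    # the size window prunes noise blobs.
    _, bright = cv2.threshold(gray, 220, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(holes, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    candidates = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if 20 < w * h < 5000 and bright[y:y + h, x:x + w].any():
            candidates.append((x, y, w, h))
    return candidates

In this sketch the check in step 3 plays the role of the eye-dot test: a dark candidate blob is kept only if it encloses at least one near-saturated pixel, mirroring the noise-removal stage the abstract describes.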

Keywords


Colour, Content Based Image Retrieval, Gray Scale, Phong Shading, Texture.






This work is licensed under a Creative Commons Attribution 3.0 License.