Copyright Statement: This is an open access article licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, even commercially as long as the original work is properly cited.
Digital Object Identifier (DOI): 10.14569/IJARAI.2015.040204
Article Published in International Journal of Advanced Research in Artificial Intelligence (IJARAI), Volume 4 Issue 2, 2015.
Abstract: For robots to plan their actions autonomously and interact with people, recognizing human emotions is crucial. For most humans, nonverbal cues such as pitch, loudness, spectrum, and speech rate are efficient carriers of emotion. The acoustic features of a spoken voice likely contain crucial information about the emotional state of the speaker; within this framework, a machine might use such properties of sound to recognize emotions. This work evaluated six different kinds of classifiers to predict six basic universal emotions from nonverbal features of human speech. The classification techniques used information from six audio files extracted from the eNTERFACE05 audio-visual emotion database. The information gain from a decision tree was also used to select the most significant speech features from a set of acoustic features commonly extracted in emotion analysis. The classifiers were evaluated both with the proposed features and with the features selected by the decision tree. With this feature selection, each of the compared classifiers improved in both global accuracy and recall. The best performance was obtained with Support Vector Machine and BayesNet.
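The pipeline the abstract describes, selecting the most informative acoustic features via a decision tree's information gain and then training a classifier such as an SVM on the reduced feature set, can be sketched as follows. This is a minimal illustration on synthetic data (not the authors' code or the eNTERFACE05 features); the feature count, class count, and scikit-learn API choices are assumptions made for the example.

```python
# Hedged sketch: decision-tree information gain for feature selection,
# then an SVM on the selected features (synthetic stand-in data).
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_samples, n_features, n_emotions = 600, 20, 6  # hypothetical acoustic features, 6 emotions
X = rng.normal(size=(n_samples, n_features))
y = rng.integers(0, n_emotions, size=n_samples)
X[:, :5] += y[:, None] * 0.8  # make the first 5 features carry class information

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# criterion="entropy" makes the tree's impurity decrease an information-gain measure
tree = DecisionTreeClassifier(criterion="entropy", random_state=0).fit(X_tr, y_tr)
top = np.argsort(tree.feature_importances_)[::-1][:5]  # most significant features
print("selected features:", sorted(top.tolist()))

svm = SVC(kernel="rbf").fit(X_tr[:, top], y_tr)
acc = accuracy_score(y_te, svm.predict(X_te[:, top]))
print(f"SVM accuracy on selected features: {acc:.2f}")
```

In a real speech-emotion setting, `X` would hold per-utterance acoustic descriptors (pitch, loudness, spectral statistics, speech rate) and the same tree-based ranking would prune the feature set before training each of the compared classifiers.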
Javier G. Rázuri, David Sundgren, Rahim Rahmani, Aron Larsson, Antonio Moran Cardenas and Isis Bonet, “Speech emotion recognition in emotional feedback for Human-Robot Interaction,” International Journal of Advanced Research in Artificial Intelligence (IJARAI), 4(2), 2015. http://dx.doi.org/10.14569/IJARAI.2015.040204