Copyright Statement: This is an open access article licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, even commercially, provided the original work is properly cited.
Digital Object Identifier (DOI): 10.14569/IJACSA.2014.050204
Article Published in International Journal of Advanced Computer Science and Applications (IJACSA), Volume 5, Issue 2, 2014.
Abstract: This paper presents an approach to emotion recognition from speech signals and textual content. In the analysis of speech signals, thirty-seven acoustic features are extracted from the speech input. Two different classifiers, Support Vector Machines (SVMs) and a back-propagation (BP) neural network, are adopted to classify the emotional states. In text analysis, we use a two-step classification method to recognize the emotional states. The final emotional state is determined based on the emotion outputs from the acoustic and textual analyses. In this paper we have two parallel classifiers for acoustic information and two serial classifiers for textual information, and a final decision is made by combining these classifiers through decision-level fusion. Experimental results show that the emotion recognition accuracy of the integrated system is better than that of either of the two individual approaches.
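The decision-level fusion described above can be sketched as a weighted combination of per-classifier probability distributions over emotion classes. This is only an illustrative sketch, not the paper's actual fusion rule: the emotion labels, the weights, and the function name `fuse_decisions` are assumptions for demonstration.

```python
# Hypothetical sketch of decision-level fusion: each classifier outputs a
# probability distribution over emotion classes; the fused decision is the
# argmax of a weighted sum of those distributions. Labels and weights are
# illustrative, not taken from the paper.

EMOTIONS = ["happy", "angry", "sad", "neutral"]

def fuse_decisions(class_probs, weights):
    """Weighted-sum fusion of per-classifier probability vectors."""
    assert len(class_probs) == len(weights)
    n = len(class_probs[0])
    fused = [0.0] * n
    for probs, w in zip(class_probs, weights):
        for i, p in enumerate(probs):
            fused[i] += w * p
    # Return the emotion label with the highest fused score.
    return EMOTIONS[max(range(n), key=lambda i: fused[i])]

# Example: two parallel acoustic classifiers (SVM and BP network)
# plus the serial two-step text classifier.
svm_probs  = [0.10, 0.60, 0.20, 0.10]  # acoustic SVM
bp_probs   = [0.20, 0.50, 0.20, 0.10]  # acoustic BP network
text_probs = [0.05, 0.70, 0.15, 0.10]  # two-step text classifier
label = fuse_decisions([svm_probs, bp_probs, text_probs], [0.3, 0.3, 0.4])
```

Here the fused score for "angry" (0.3·0.60 + 0.3·0.50 + 0.4·0.70 = 0.61) dominates, so the integrated system outputs "angry" even though the individual classifiers disagree on its exact probability.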
Weilin Ye and Xinghua Fan, "Bimodal Emotion Recognition from Speech and Text," International Journal of Advanced Computer Science and Applications (IJACSA), 5(2), 2014. http://dx.doi.org/10.14569/IJACSA.2014.050204