Copyright Statement: This is an open access article licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, even commercially, as long as the original work is properly cited.
Digital Object Identifier (DOI): 10.14569/IJACSA.2015.061119
Article published in the International Journal of Advanced Computer Science and Applications (IJACSA), Volume 6, Issue 11, 2015.
Abstract: Recognizing human emotions through the vocal channel has gained increased attention in recent years. In this paper, we study how the choice of features and classifiers affects the accuracy of recognizing emotions present in speech. Four emotional states are considered for classification. To this end, features are extracted from the audio characteristics of emotional speech using Linear Frequency Cepstral Coefficients (LFCC) and Mel-Frequency Cepstral Coefficients (MFCC). These features are then classified using a Hidden Markov Model (HMM) and a Support Vector Machine (SVM).
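The abstract names MFCC feature extraction as the core front end. As a rough illustration only (the paper itself provides no code), here is a minimal NumPy sketch of the standard MFCC pipeline: framing, windowing, power spectrum, mel filterbank, log, and DCT. The LFCC variant mentioned in the abstract differs only in using linearly spaced filters instead of mel-spaced ones. All parameter values (16 kHz sample rate, 26 filters, 13 coefficients, 25 ms frames) are common defaults, not values taken from the paper.

```python
import numpy as np

def mel(f):
    # Hz -> mel scale; LFCC would skip this warping (linear spacing)
    return 2595.0 * np.log10(1.0 + f / 700.0)

def inv_mel(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mfcc(signal, sr=16000, n_fft=512, n_filters=26, n_ceps=13,
         frame_len=0.025, frame_step=0.010):
    # Split the signal into overlapping frames and apply a Hamming window
    flen, fstep = int(sr * frame_len), int(sr * frame_step)
    n_frames = 1 + max(0, (len(signal) - flen) // fstep)
    frames = np.stack([signal[i * fstep:i * fstep + flen]
                       for i in range(n_frames)])
    frames = frames * np.hamming(flen)

    # Per-frame power spectrum
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2 / n_fft

    # Triangular mel-spaced filterbank (linear spacing would give LFCC)
    pts = inv_mel(np.linspace(mel(0), mel(sr / 2), n_filters + 2))
    bins = np.floor((n_fft + 1) * pts / sr).astype(int)
    fbank = np.zeros((n_filters, n_fft // 2 + 1))
    for j in range(n_filters):
        l, c, r = bins[j], bins[j + 1], bins[j + 2]
        fbank[j, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
        fbank[j, c:r] = (r - np.arange(c, r)) / max(r - c, 1)

    # Log filterbank energies, then DCT-II to decorrelate; keep n_ceps coeffs
    log_e = np.log(power @ fbank.T + 1e-10)
    k = np.arange(n_filters)
    dct = np.cos(np.pi * np.outer(np.arange(n_ceps), 2 * k + 1)
                 / (2 * n_filters))
    return log_e @ dct.T

# Example: 1 second of a 440 Hz tone as a stand-in for a speech signal
sig = np.sin(2 * np.pi * 440 * np.arange(16000) / 16000)
feats = mfcc(sig)
print(feats.shape)  # one 13-dim feature vector per frame
```

Such per-frame vectors are what a sequence model like an HMM consumes directly; for a frame-independent classifier like an SVM, they are typically pooled (e.g. averaged) into one vector per utterance.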
Farah Chenchah and Zied Lachiri, "Acoustic Emotion Recognition Using Linear and Nonlinear Cepstral Coefficients," International Journal of Advanced Computer Science and Applications (IJACSA), 6(11), 2015. http://dx.doi.org/10.14569/IJACSA.2015.061119