DOI: 10.14569/IJACSA.2025.0160531

A Deep Learning Model for Speech Emotion Recognition on RAVDESS Dataset

Author 1: Zhongliang Wei
Author 2: Chang Ge
Author 3: Chang Su
Author 4: Ruofan Chen
Author 5: Jing Sun

International Journal of Advanced Computer Science and Applications (IJACSA), Volume 16, Issue 5, 2025.


Abstract: Speech Emotion Recognition (SER), a pivotal area in artificial intelligence, is dedicated to analyzing and interpreting emotional information in human speech. To address the challenges of capturing both local acoustic features and long-range dependencies in emotional speech, this study proposes a novel parallel neural network architecture that integrates Convolutional Neural Networks (CNNs) and Transformer encoders. To integrate the distinct feature representations captured by the two branches, a cross-attention mechanism is employed for feature-level fusion, enabling deep-level semantic interaction and enhancing the model’s emotion discrimination capacity. To improve model generalization and robustness, a systematic preprocessing pipeline is constructed, including signal normalization, data segmentation, additive white Gaussian noise (AWGN) augmentation with varying SNR levels, and Mel spectrogram feature extraction. A grid search strategy is adopted to optimize key hyperparameters such as learning rate, dropout rate, and batch size. Extensive experiments conducted on the RAVDESS dataset, consisting of eight emotional categories, demonstrate that our model achieves an overall accuracy of 80.00%, surpassing existing methods such as CNN-based (71.61%), multilingual CNN (77.60%), bimodal LSTM-attention (65.42%), and unsupervised feature learning (69.06%) models. Further analyses reveal its robustness across different gender groups and emotional intensities. Such outcomes highlight the architectural soundness of our model and underscore its potential to inform subsequent developments in affective speech processing.

Keywords: Speech emotion recognition; deep learning; RAVDESS dataset; multi-feature fusion
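
As a rough illustration of the approach summarized in the abstract, the sketch below shows a parallel CNN plus Transformer-encoder network over Mel-spectrogram input, with cross-attention fusion and an AWGN augmentation helper. It is a minimal PyTorch-style reconstruction based only on the abstract; all layer sizes, module choices, and hyperparameters (d_model, number of heads, dropout, etc.) are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn

def add_awgn(waveform, snr_db):
    # Additive white Gaussian noise at a chosen SNR in dB (illustrative augmentation).
    signal_power = waveform.pow(2).mean()
    noise_power = signal_power / (10 ** (snr_db / 10))
    return waveform + torch.randn_like(waveform) * noise_power.sqrt()

class ParallelCNNTransformerSER(nn.Module):
    def __init__(self, n_mels=128, d_model=128, n_heads=4, n_layers=2, n_classes=8):
        super().__init__()
        # CNN branch: local time-frequency patterns from the Mel spectrogram.
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.AdaptiveAvgPool2d((1, None)),   # collapse the frequency axis, keep time
        )
        self.cnn_proj = nn.Linear(64, d_model)
        # Transformer branch: long-range temporal dependencies across frames.
        self.frame_proj = nn.Linear(n_mels, d_model)
        enc_layer = nn.TransformerEncoderLayer(d_model, n_heads,
                                               dim_feedforward=256, batch_first=True)
        self.transformer = nn.TransformerEncoder(enc_layer, num_layers=n_layers)
        # Cross-attention fusion of the two branches, then an 8-way emotion classifier.
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.classifier = nn.Sequential(
            nn.Linear(d_model, 64), nn.ReLU(), nn.Dropout(0.3), nn.Linear(64, n_classes)
        )

    def forward(self, mel):                        # mel: (batch, n_mels, time)
        c = self.cnn(mel.unsqueeze(1))             # (batch, 64, 1, time')
        c = self.cnn_proj(c.squeeze(2).transpose(1, 2))              # (batch, time', d_model)
        t = self.transformer(self.frame_proj(mel.transpose(1, 2)))   # (batch, time, d_model)
        fused, _ = self.cross_attn(query=t, key=c, value=c)          # transformer attends to CNN
        return self.classifier(fused.mean(dim=1))  # temporal mean pooling -> emotion logits

# Dummy forward pass: 4 clips, 128 Mel bands, 300 frames.
model = ParallelCNNTransformerSER()
logits = model(torch.randn(4, 128, 300))
print(logits.shape)   # torch.Size([4, 8])

In this sketch the Transformer-branch features act as queries and the CNN-branch features as keys and values in the cross-attention step; the paper's actual fusion direction, pooling, and training setup (grid-searched learning rate, dropout, and batch size) may differ.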

Zhongliang Wei, Chang Ge, Chang Su, Ruofan Chen and Jing Sun, “A Deep Learning Model for Speech Emotion Recognition on RAVDESS Dataset,” International Journal of Advanced Computer Science and Applications (IJACSA), 16(5), 2025. http://dx.doi.org/10.14569/IJACSA.2025.0160531

@article{Wei2025,
title = {A Deep Learning Model for Speech Emotion Recognition on RAVDESS Dataset},
journal = {International Journal of Advanced Computer Science and Applications},
doi = {10.14569/IJACSA.2025.0160531},
url = {http://dx.doi.org/10.14569/IJACSA.2025.0160531},
year = {2025},
publisher = {The Science and Information Organization},
volume = {16},
number = {5},
author = {Zhongliang Wei and Chang Ge and Chang Su and Ruofan Chen and Jing Sun}
}



Copyright Statement: This is an open access article licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, even commercially, as long as the original work is properly cited.
