The Science and Information (SAI) Organization

DOI: 10.14569/IJACSA.2025.0161175

Leveraging Intelligent Speech Training to Elevate Phonetic Accuracy and Prosodic Fluency in English Learners

Authors: Amit Khapekar, Nidhi Mishra, Vijaya Lakshmi Mandava, T K Rama Krishna Rao, Bhuvaneswari Pagidipati, Prasad Devarasetty, and Elangovan Muniyandy

International Journal of Advanced Computer Science and Applications (IJACSA), Volume 16 Issue 11, 2025.

  • Abstract and Keywords
  • How to Cite this Article
  • BibTeX Source

Abstract: Effective teaching of pronunciation and prosody remains a significant challenge for English as a Foreign Language (EFL) learners. Traditional pedagogical approaches tend to focus on segmental phoneme accuracy while neglecting suprasegmental components (stress, rhythm, and intonation) that are essential to natural, intelligible speech. Existing computer-assisted pronunciation training (CAPT) systems are useful but constrained by limited acoustic models and incomplete coverage of prosodic characteristics, leading to suboptimal accuracy and limited pedagogical suitability. To overcome these shortcomings, this paper proposes Attention-Guided Cross-Lingual Self-Supervised Learning (AG-CLSSL), a new framework that combines phoneme-level representations from XLS-R (wav2vec2-large-xlsr-53) with prosodic features (pitch, energy, and duration) through a Phoneme-Prosody Cross-Attention Fusion (PP-CAF) mechanism. This fusion yields a joint, context-specific representation of speech that is further refined by a multi-task Transformer-based scoring model to jointly assess pronunciation accuracy, prosodic consistency, and overall intelligibility. The framework is implemented in Python with PyTorch and Hugging Face Transformers and is trained on a corpus of EFL learner speech (n=100) spanning diverse L1 backgrounds, including Mandarin, Hindi, and Spanish. Experimental evaluations show substantial performance gains: a 55.4% reduction in Phoneme Error Rate, a 52.0% reduction in Word Error Rate, a 43.3% increase in Stress Placement Accuracy, and a 34.9% increase in Pitch Alignment Score. Overall acoustic similarity to native speech improved by 36.1%, demonstrating that AG-CLSSL advances both articulatory accuracy and prosodic naturalness while providing interpretable, attention-guided feedback for scalable AI-based pronunciation and prosody training.
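The cross-attention fusion idea behind PP-CAF can be sketched in PyTorch as follows. This is a minimal illustration, not the authors' implementation: the layer sizes, the single `nn.MultiheadAttention` layer, the projection layers, and the residual connection are all assumptions; a real pipeline would obtain the phoneme features from a pretrained XLS-R model rather than random tensors.

```python
# Hypothetical sketch of a Phoneme-Prosody Cross-Attention Fusion module:
# phoneme-level embeddings act as queries and attend over frame-level
# prosodic features (pitch, energy, duration) serving as keys/values.
import torch
import torch.nn as nn

class PPCAF(nn.Module):
    def __init__(self, phoneme_dim=1024, prosody_dim=3, d_model=256, n_heads=4):
        super().__init__()
        # Project XLS-R-style phoneme embeddings and raw prosodic features
        # (pitch, energy, duration -> prosody_dim=3) into a shared space.
        self.phone_proj = nn.Linear(phoneme_dim, d_model)
        self.pros_proj = nn.Linear(prosody_dim, d_model)
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, phoneme_feats, prosody_feats):
        # phoneme_feats: (batch, T_phone, phoneme_dim)
        # prosody_feats: (batch, T_frame, prosody_dim)
        q = self.phone_proj(phoneme_feats)
        kv = self.pros_proj(prosody_feats)
        fused, attn_w = self.cross_attn(q, kv, kv)
        # Residual connection keeps the phonetic content dominant;
        # attn_w exposes which prosodic frames each phoneme attends to,
        # which is one way such a model can yield interpretable feedback.
        return self.norm(q + fused), attn_w

fusion = PPCAF()
phones = torch.randn(2, 50, 1024)   # stand-in for wav2vec2-xlsr hidden states
prosody = torch.randn(2, 200, 3)    # frame-level pitch/energy/duration
out, attn = fusion(phones, prosody)
print(out.shape)   # torch.Size([2, 50, 256])
```

The fused sequence could then feed a downstream multi-task scoring head; the returned attention weights (one row per phoneme over prosodic frames) are what make the fusion inspectable.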
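Phoneme Error Rate figures such as those reported above are conventionally computed as the edit distance between the recognized and reference phoneme sequences, normalized by the reference length. A minimal sketch follows; the ARPAbet-style symbols are illustrative examples, not drawn from the paper's corpus.

```python
# Standard Levenshtein-based Phoneme Error Rate:
# (substitutions + insertions + deletions) / length of reference.
def phoneme_error_rate(ref, hyp):
    """Edit distance between phoneme sequences, normalized by len(ref)."""
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
    return d[len(ref)][len(hyp)] / len(ref)

ref = ["DH", "AH", "K", "AE", "T"]   # "the cat"
hyp = ["D", "AH", "K", "AE", "T"]    # learner substitutes /d/ for /dh/
print(round(phoneme_error_rate(ref, hyp), 2))  # 0.2
```

Word Error Rate is computed identically over word sequences, so a "55.4% reduction in PER" means the post-training edit-distance ratio dropped to roughly 44.6% of its baseline value.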

Keywords: Automatic speech recognition; pronunciation and prosody; transformer-based phoneme identification; prosody assessment; adaptive learning algorithm

Amit Khapekar, Nidhi Mishra, Vijaya Lakshmi Mandava, T K Rama Krishna Rao, Bhuvaneswari Pagidipati, Prasad Devarasetty and Elangovan Muniyandy. “Leveraging Intelligent Speech Training to Elevate Phonetic Accuracy and Prosodic Fluency in English Learners”. International Journal of Advanced Computer Science and Applications (IJACSA) 16.11 (2025). http://dx.doi.org/10.14569/IJACSA.2025.0161175

@article{Khapekar2025,
title = {Leveraging Intelligent Speech Training to Elevate Phonetic Accuracy and Prosodic Fluency in English Learners},
journal = {International Journal of Advanced Computer Science and Applications},
doi = {10.14569/IJACSA.2025.0161175},
url = {http://dx.doi.org/10.14569/IJACSA.2025.0161175},
year = {2025},
publisher = {The Science and Information Organization},
volume = {16},
number = {11},
author = {Amit Khapekar and Nidhi Mishra and Vijaya Lakshmi Mandava and T K Rama Krishna Rao and Bhuvaneswari Pagidipati and Prasad Devarasetty and Elangovan Muniyandy}
}



Copyright Statement: This is an open access article licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, even commercially, as long as the original work is properly cited.

The Science and Information (SAI) Organization Limited is a company registered in England and Wales under Company Number 8933205.