Copyright Statement: This is an open access article licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, even commercially, as long as the original work is properly cited.
Digital Object Identifier (DOI): 10.14569/IJACSA.2017.081207
Article published in International Journal of Advanced Computer Science and Applications (IJACSA), Volume 8, Issue 12, 2017.
Abstract: Transcribing dysarthric speech into text remains a challenging problem for state-of-the-art techniques and commercially available speech recognition systems. To improve the accuracy of dysarthric speech recognition, this paper adopts Deep Belief Networks (DBNs) to model the distribution of the dysarthric speech signal. A continuous dysarthric speech recognition system is built, in which DBNs are used to predict the posterior probabilities of the states in Hidden Markov Models (HMMs), and the Weighted Finite State Transducer (WFST) framework is used to build the speech decoder. Experimental results show that the proposed method provides a better prediction of the probability distribution of the spectral representation of dysarthric speech, outperforming existing methods such as GMM-HMM-based dysarthric speech recognition approaches. To the best of our knowledge, this work is the first to build a continuous speech recognition system for dysarthric speech using deep neural network techniques, a promising approach for improving communication between individuals with speech impediments and typical speakers.
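The hybrid architecture summarized in the abstract can be illustrated with a minimal sketch: a neural network scores each acoustic frame, a softmax layer converts the scores into posterior probabilities over HMM states, and dividing by the state priors yields the scaled likelihoods that an HMM/WFST decoder consumes as emission scores. All layer sizes, the randomly initialized weights, and the uniform priors below are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

N_FEATS = 39      # e.g. MFCCs with deltas per frame (assumed)
N_STATES = 5      # tiny HMM state inventory, for illustration only

# A single randomly initialized hidden layer stands in for a trained DBN.
W1 = rng.standard_normal((N_FEATS, 32)) * 0.1
b1 = np.zeros(32)
W2 = rng.standard_normal((32, N_STATES)) * 0.1
b2 = np.zeros(N_STATES)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)   # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def state_posteriors(frames):
    """P(state | frame) for each acoustic frame (one frame per row)."""
    h = np.tanh(frames @ W1 + b1)
    return softmax(h @ W2 + b2)

def scaled_likelihoods(frames, state_priors):
    """Posterior divided by prior is proportional to p(frame | state);
    these scaled likelihoods replace the GMM emission scores in the HMM."""
    return state_posteriors(frames) / state_priors

frames = rng.standard_normal((10, N_FEATS))   # 10 synthetic frames
priors = np.full(N_STATES, 1.0 / N_STATES)    # uniform state priors (assumed)
post = state_posteriors(frames)
lik = scaled_likelihoods(frames, priors)
```

In a full system such as the one the paper describes, `lik` would be fed to a WFST-based decoder that composes the HMM topology, lexicon, and language model; here it simply shows where the network's outputs enter the HMM pipeline.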
Jun Ren and Mingzhe Liu, “An Automatic Dysarthric Speech Recognition Approach using Deep Neural Networks,” International Journal of Advanced Computer Science and Applications (IJACSA), 8(12), 2017. http://dx.doi.org/10.14569/IJACSA.2017.081207