Copyright Statement: This is an open access article licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, even commercially, as long as the original work is properly cited.
Digital Object Identifier (DOI): 10.14569/IJACSA.2012.030608
Article published in the International Journal of Advanced Computer Science and Applications (IJACSA), Volume 3, Issue 6, 2012.
Abstract: This paper summarizes the algorithms used to design a sign language recognition system. Sign language is the language used by deaf people to communicate among themselves and with hearing people. We designed a real-time sign language recognition system that recognizes sign language gestures from videos with complex backgrounds. The non-rigid hands and head of the signer are segmented and tracked in the sign language videos using active contour models, whose energy minimization is driven by the signer's hand and head skin colour, texture, boundary, and shape information. Signs are classified by an artificial neural network trained with the error back-propagation algorithm, and each recognized sign in the video is converted into a voice and text command. The system has been implemented successfully for 351 signs of Indian Sign Language under different possible video environments, and recognition rates are calculated for each environment.
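As a rough illustration of the classification stage only, the sketch below trains a one-hidden-layer feed-forward network with error back-propagation, the same class of classifier the abstract names. This is not the authors' implementation: the toy XOR data stands in for the hand/head shape and texture feature vectors the paper extracts, and the network size and learning rate are arbitrary assumptions.

```python
# Minimal back-propagation sketch (assumption: NOT the paper's network or
# features). A toy XOR problem stands in for the sign feature vectors.
import numpy as np

rng = np.random.default_rng(0)

# Toy training data: inputs and target labels (stand-in for sign features).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Weights: 2 inputs -> 8 hidden units -> 1 output.
W1 = rng.normal(size=(2, 8))
b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(10000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: propagate the squared-error gradient through
    # the sigmoid derivatives (error back-propagation).
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Gradient-descent weight updates.
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

pred = (out > 0.5).astype(int).ravel()
print(pred.tolist())
```

In the paper's setting, `X` would hold features from the tracked contours and the output layer would have one unit per sign class rather than a single binary output.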
P. V. V. Kishore and P. Rajesh Kumar, “Segment, Track, Extract, Recognize and Convert Sign Language Videos to Voice/Text,” International Journal of Advanced Computer Science and Applications (IJACSA), 3(6), 2012. http://dx.doi.org/10.14569/IJACSA.2012.030608