The recognition of sign language can benefit many deaf people and bridge the communication gap between them and their families and friends. For many years, deep learning has achieved great results in the field of sign language recognition. There are many methods for extracting features of hand shapes or signs, and these different features are used as input to deep neural networks (DNNs) in many studies of sign language recognition. However, both the efficiency of feature extraction and the recognition accuracy still have room for improvement. In this study, we propose a novel algorithm for sign language recognition. The algorithm first uses a hierarchical self-organizing map (SOM) to convert dynamic sign language into a static response map. Since convolutional neural networks (CNNs) have extraordinary performance in image classification, we take the static response maps as input features to a CNN to achieve sign language recognition.
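To make the pipeline concrete, the sketch below shows one possible simplified realization of the SOM-to-response-map-to-CNN idea: a single (non-hierarchical) SOM is trained on per-frame hand features, each variable-length sign video is collapsed into a fixed-size 2-D map of best-matching-unit hits, and a small CNN classifies the resulting maps. All specifics here (the 16x16 map, the 42-dimensional frame features, the CNN layout) are illustrative assumptions rather than the configuration used in the paper.

```python
# Minimal sketch of a SOM -> response-map -> CNN pipeline (assumed sizes, not
# the paper's actual hierarchical SOM or network configuration).
import numpy as np
import torch
import torch.nn as nn

class SOM:
    """A basic (non-hierarchical) self-organizing map on D-dim frame features."""
    def __init__(self, h=16, w=16, dim=42, seed=0):
        rng = np.random.default_rng(seed)
        self.h, self.w = h, w
        self.weights = rng.normal(size=(h * w, dim))
        # Pre-compute grid coordinates for neighborhood updates.
        ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
        self.grid = np.stack([ys.ravel(), xs.ravel()], axis=1).astype(float)

    def bmu(self, x):
        # Best-matching unit: the node whose weight vector is closest to x.
        return np.argmin(np.linalg.norm(self.weights - x, axis=1))

    def train(self, data, epochs=10, lr0=0.5, sigma0=4.0):
        for e in range(epochs):
            lr = lr0 * (1 - e / epochs)
            sigma = sigma0 * (1 - e / epochs) + 1e-3
            for x in data:
                b = self.bmu(x)
                d2 = np.sum((self.grid - self.grid[b]) ** 2, axis=1)
                g = np.exp(-d2 / (2 * sigma ** 2))   # neighborhood kernel
                self.weights += lr * g[:, None] * (x - self.weights)

    def response_map(self, frames):
        # Collapse a variable-length sign video into a fixed-size 2-D map by
        # accumulating BMU hits of its per-frame feature vectors.
        m = np.zeros((self.h, self.w), dtype=np.float32)
        for x in frames:
            b = self.bmu(x)
            m[b // self.w, b % self.w] += 1.0
        return m / max(len(frames), 1)

class MapCNN(nn.Module):
    """A small CNN that classifies the static response maps."""
    def __init__(self, n_classes=36):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(), nn.Linear(32 * 4 * 4, n_classes),
        )

    def forward(self, x):
        return self.net(x)

# Toy usage: random "videos" of 42-dim hand features stand in for real data.
videos = [np.random.rand(np.random.randint(20, 60), 42) for _ in range(8)]
som = SOM()
som.train(np.concatenate(videos), epochs=3)
maps = torch.tensor(np.stack([som.response_map(v) for v in videos])).unsqueeze(1)
logits = MapCNN()(maps)   # shape: (8, 36) class scores
print(logits.shape)
```

The key design point this sketch illustrates is that the SOM turns temporal variation into spatial structure: signs of any length map to the same fixed-size image-like input, which is what lets an ordinary image-classification CNN be applied downstream.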
We selected 36 signs from the American Sign Language Lexicon Video Dataset (ASLLVD) as our dataset to test the effectiveness of the proposed algorithm, and we achieved a recognition accuracy of 78.57% on this dataset.