Sign languages (SL) are visual languages that use hand shapes, movements, and even facial expressions to convey information, serving as the primary communication tool for hearing-impaired people. Sign language recognition (SLR) based on deep learning has attracted much attention in recent years. Nevertheless, training neural networks relies on a massive number of SL videos, and their preparation is time-consuming and cumbersome. This research proposes a method for building deep-learning training data from a single SL video, enabling the recognition of Taiwanese Sign Language (TSL) vocabulary in video frames. First, we collect a series of TSL teaching videos from a video-sharing platform. Then, Mask R-CNN [1] is employed to extract the segmentation masks of hands and faces in all video frames, and spatial-domain data augmentation is applied to create training sets with varied content. Different temporal-domain sampling strategies are also employed to simulate the signing speeds of different signers. Finally, an attention-based 3D-ResNet trained on the synthetic dataset is used to classify a variety of TSL vocabulary. Experimental results show that the synthesized dataset improves TSL vocabulary recognition, demonstrating the feasibility of the approach for SLR.
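To make the mask-extraction step concrete, the sketch below runs Mask R-CNN over the frames of a clip and keeps high-confidence hand and face masks. It is a minimal sketch, assuming a torchvision Mask R-CNN fine-tuned with hand and face classes; the checkpoint path, class layout, video filename, and score threshold are all hypothetical (off-the-shelf COCO weights do not include hand or face categories).

```python
# Sketch: extract hand/face segmentation masks from a TSL teaching clip with
# Mask R-CNN. Assumes a model fine-tuned for {1: hand, 2: face}; the checkpoint
# and file names below are hypothetical.
import torch
from torchvision.io import read_video
from torchvision.models.detection import maskrcnn_resnet50_fpn

model = maskrcnn_resnet50_fpn(num_classes=3)          # background + hand + face (assumed)
model.load_state_dict(torch.load("tsl_hand_face_maskrcnn.pth"))  # hypothetical checkpoint
model.eval()

frames, _, _ = read_video("tsl_teaching_clip.mp4", output_format="TCHW")  # hypothetical clip
frames = frames.float() / 255.0                       # detection models expect floats in [0, 1]

masks_per_frame = []
with torch.no_grad():
    for frame in frames:
        out = model([frame])[0]                       # dict: boxes, labels, scores, masks
        keep = out["scores"] > 0.7                    # confidence threshold (assumed)
        binary = out["masks"][keep, 0] > 0.5          # (N, H, W) boolean masks
        masks_per_frame.append((binary, out["labels"][keep]))
```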
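The abstract does not detail the spatial-domain augmentation. One plausible reading, sketched below purely as an assumption, is that the extracted masks let the signer's hands and face be composited onto different backgrounds, yielding training images with varied content:

```python
# Sketch: spatial-domain augmentation by compositing the masked signer regions
# onto new backgrounds. This compositing strategy is an assumption, not a
# confirmed detail of the paper.
import numpy as np

rng = np.random.default_rng(0)

def composite(frame: np.ndarray, mask: np.ndarray, background: np.ndarray) -> np.ndarray:
    """frame/background: (H, W, 3) uint8; mask: (H, W) bool of hand/face pixels."""
    out = background.copy()
    out[mask] = frame[mask]                  # keep signer pixels, replace everything else
    return out

def augment(frame, mask, backgrounds):
    bg = backgrounds[rng.integers(len(backgrounds))]
    if rng.random() < 0.5:                   # random horizontal flip, applied consistently
        frame, mask, bg = frame[:, ::-1], mask[:, ::-1], bg[:, ::-1]
    return composite(frame, mask, bg)
```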
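Simulating the speeds of different signers can be approximated by resampling each clip's frame indices at several playback rates; a minimal sketch follows, with the rates and the 16-frame window chosen arbitrarily for illustration:

```python
# Sketch: temporal-domain sampling. Stepping through a clip at different rates
# produces slowed-down / sped-up variants of the same sign.
import numpy as np

def resample_indices(num_frames: int, rate: float, target_len: int = 16) -> np.ndarray:
    """Select `target_len` frame indices, advancing through the clip at `rate`x speed."""
    span = min(num_frames - 1, (target_len - 1) * rate)
    idx = np.round(np.linspace(0.0, span, target_len))
    return idx.astype(int)

# One slow-to-fast family of variants for a 64-frame clip (rates assumed).
variants = {rate: resample_indices(64, rate) for rate in (0.5, 0.75, 1.0, 1.5, 2.0)}
```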
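The abstract names an attention-based 3D-ResNet without specifying its design. The sketch below uses torchvision's r3d_18 as a stand-in backbone and adds a simple channel-attention block before pooling; the attention placement, reduction factor, and number of classes are assumptions, not the paper's architecture.

```python
# Sketch: 3D-ResNet classifier with a squeeze-and-excitation-style channel
# attention. The attention design and class count are assumptions.
import torch
import torch.nn as nn
from torchvision.models.video import r3d_18

class ChannelAttention3D(nn.Module):
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):                               # x: (B, C, T, H, W)
        w = self.fc(x.mean(dim=(2, 3, 4)))              # (B, C) channel weights
        return x * w[:, :, None, None, None]            # reweight feature channels

class AttentiveR3D(nn.Module):
    def __init__(self, num_classes: int = 50):          # number of TSL words (assumed)
        super().__init__()
        backbone = r3d_18(weights=None)                 # train from scratch on synthetic data
        self.features = nn.Sequential(backbone.stem, backbone.layer1, backbone.layer2,
                                      backbone.layer3, backbone.layer4)
        self.attn = ChannelAttention3D(512)
        self.pool = nn.AdaptiveAvgPool3d(1)
        self.fc = nn.Linear(512, num_classes)

    def forward(self, clip):                            # clip: (B, 3, T, H, W)
        x = self.attn(self.features(clip))
        return self.fc(self.pool(x).flatten(1))

logits = AttentiveR3D()(torch.randn(2, 3, 16, 112, 112))  # smoke test: (2, 50)
```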