Given RGB video streams, we aim to correctly recognize signs in the continuous sign language recognition (CSLR) task. Although an increasing number of deep learning methods have been proposed in this area, most of them rely solely on RGB features, either the full video frame or cropped regions of the hands and face. This scarcity of information during the CSLR training process severely constrains their ability to learn multiple complementary features from the input frames. Multi-feature networks have become quite common, since current computing power no longer prevents us from scaling up network size. Thus, in this thesis, we study deep learning networks and apply a multi-feature technique, with the aim of improving the current state of the art in continuous sign language recognition. Specifically, the additional feature we include in this research is the keypoint feature, which is considerably lighter than the image feature when the two are compared. The results of this research show that adding the keypoint feature as an additional modality can increase the recognition rate, or equivalently, decrease the word error rate (WER), on the two most popular CSLR datasets: PHOENIX-2014 and Chinese Sign Language (CSL).
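For reference, WER is the standard CSLR evaluation metric: the word-level edit distance (substitutions, deletions, and insertions) between the predicted gloss sequence and the reference, normalized by the reference length. The sketch below is a minimal illustration of this computation; the function name and the example gloss sequences are ours, not taken from the thesis.

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + deletions + insertions) / len(reference),
    computed as a word-level Levenshtein distance."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i  # i deletions to reach an empty hypothesis
    for j in range(len(hyp) + 1):
        dp[0][j] = j  # j insertions from an empty reference
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # match / substitution
    return dp[len(ref)][len(hyp)] / len(ref)

# Hypothetical gloss sequences, for illustration only:
# one substitution out of three reference glosses -> WER = 1/3
print(word_error_rate("MORGEN REGEN NORD", "MORGEN SONNE NORD"))
```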