Video recognition is an important field of computer science research with a wide range of applications, and models that are both lightweight and highly accurate are a common goal. An advanced video recognition model, serving as the backbone of a multi-task model, can significantly raise recognition accuracy while reducing computational load and easing the burden on hardware. This thesis therefore aims to develop an efficient, novel video recognition model to serve as the video feature extraction backbone for multi-task models, improving their accuracy, processing speed, robustness, and compactness.

The proposed model, the Separable ConvNet Spatiotemporal Mixer (SCSM), is a hierarchical convolutional action recognition model that introduces a new scheme of hierarchical spatial compression with spatiotemporal fusion. Its architecture consists of two parts: a spatial domain and a temporal domain. The spatial domain applies a feature extractor to consecutive frames to turn each frame into features, while the temporal domain mixes these spatiotemporal features, learning information at different spatial positions and across different time steps, and fuses the spatial and temporal information into the final representation.

According to the experimental results, the proposed SCSM achieves recognition accuracy comparable to state-of-the-art video recognition models while using extremely few parameters and little computation, and it offers strong scalability and transfer learning capability.
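To make the two-stage design concrete, the following is a minimal PyTorch sketch of a per-frame spatial encoder followed by a temporal mixer, in the spirit of the description above. The module names, layer widths, the depthwise-separable convolution blocks, and the MLP-style temporal mixing are illustrative assumptions for exposition, not the exact SCSM implementation.

import torch
import torch.nn as nn


class SeparableConvBlock(nn.Module):
    # Depthwise-separable convolution block (assumed building unit, not the thesis's exact layer).
    def __init__(self, in_ch, out_ch, stride=2):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, stride=stride, padding=1,
                                   groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1, bias=False)
        self.norm = nn.BatchNorm2d(out_ch)
        self.act = nn.GELU()

    def forward(self, x):
        return self.act(self.norm(self.pointwise(self.depthwise(x))))


class SpatialEncoder(nn.Module):
    # Hierarchical spatial compression: each stage halves the resolution and widens the channels.
    def __init__(self, dims=(32, 64, 128, 256)):
        super().__init__()
        stages = [SeparableConvBlock(3, dims[0])]
        stages += [SeparableConvBlock(dims[i], dims[i + 1]) for i in range(len(dims) - 1)]
        self.stages = nn.Sequential(*stages)
        self.pool = nn.AdaptiveAvgPool2d(1)

    def forward(self, x):
        # x: (B*T, 3, H, W) -> (B*T, C) per-frame feature vector
        return self.pool(self.stages(x)).flatten(1)


class TemporalMixer(nn.Module):
    # Mixes per-frame features along the temporal axis, then along the channel axis.
    def __init__(self, dim, num_frames):
        super().__init__()
        self.temporal_mlp = nn.Sequential(nn.Linear(num_frames, num_frames), nn.GELU(),
                                          nn.Linear(num_frames, num_frames))
        self.channel_mlp = nn.Sequential(nn.LayerNorm(dim),
                                         nn.Linear(dim, dim * 2), nn.GELU(),
                                         nn.Linear(dim * 2, dim))

    def forward(self, x):
        # x: (B, T, C)
        x = x + self.temporal_mlp(x.transpose(1, 2)).transpose(1, 2)  # mix across frames
        x = x + self.channel_mlp(x)                                   # mix across channels
        return x


class SCSMSketch(nn.Module):
    # Two-stage model: spatial encoder applied frame by frame, then temporal fusion and a classifier.
    def __init__(self, num_classes=400, num_frames=8, dim=256):
        super().__init__()
        self.spatial = SpatialEncoder(dims=(32, 64, 128, dim))
        self.temporal = TemporalMixer(dim=dim, num_frames=num_frames)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, video):
        # video: (B, T, 3, H, W)
        b, t = video.shape[:2]
        feats = self.spatial(video.flatten(0, 1)).view(b, t, -1)  # (B, T, C) frame features
        feats = self.temporal(feats)                              # fused spatiotemporal features
        return self.head(feats.mean(dim=1))                       # pool over frames, classify


if __name__ == "__main__":
    clip = torch.randn(2, 8, 3, 224, 224)       # a batch of 2 clips, 8 frames each
    logits = SCSMSketch(num_classes=400)(clip)
    print(logits.shape)                         # torch.Size([2, 400])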