With the rapid growth of Internet and multimedia technology, high-resolution video has become increasingly important in daily life. Many 4K high-resolution videos have recently appeared on the market, and high-resolution video is expected to become mainstream in the future. The current video compression standard, H.265/HEVC, is gradually becoming insufficient. Therefore, the Joint Video Exploration Team (JVET), formed jointly by ITU-T VCEG and ISO/IEC MPEG, began developing the next-generation video compression standard H.266/FVC (Future Video Coding) in 2015, and it is expected to be officially released as an international video compression standard in 2021.

Compared with H.265/HEVC, H.266/FVC not only keeps the QT (QuadTree) partition but also adds BT (Binary Tree), removing the complex CU/PU/TU structure and supporting square blocks from a maximum of 256x256 down to a minimum of 8x8, so that coding can follow the texture characteristics of blocks of more different sizes. Although the QTBT structure provides better coding performance than QT alone, the increased number of candidate partitions raises the intra-frame encoding time by a factor of 5.6. Therefore, for intra-frame coding, how to speed up the CU partition decision is a very important issue.

This thesis combines the recently popular field of artificial intelligence (AI) and proposes a fast partition decision method for H.266/FVC intra coding based on spatial features and a convolutional neural network (CNN). The work is divided into two parts. The first part uses spatial features to analyze the texture of a CU, deciding whether to split it and thereby reducing the number of CNN invocations. The second part discusses the training of the prediction model and the selection of training data; the trained prediction model is then integrated into the H.266/FVC reference software to perform encoding.
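The first part of the method uses a spatial texture measure to settle easy cases without invoking the CNN. The following is a minimal sketch of that idea, assuming a simple variance-based texture feature and two hypothetical thresholds T_LOW and T_HIGH; the actual spatial features and thresholds used in the thesis are not reproduced here.

```python
import numpy as np

# Hypothetical thresholds; the thesis derives its own spatial-feature criteria.
T_LOW = 20.0    # below this the block is treated as smooth  -> do not split
T_HIGH = 400.0  # above this the block is treated as complex -> split

def texture_measure(cu: np.ndarray) -> float:
    """A simple spatial feature: luma sample variance of the CU."""
    return float(np.var(cu.astype(np.float64)))

def fast_split_decision(cu: np.ndarray, cnn_predict) -> bool:
    """Return True if the CU should be split.

    Easy cases are decided directly from the spatial feature, so the CNN
    (cnn_predict) is only called for ambiguous blocks, which reduces the
    number of CNN invocations during encoding.
    """
    t = texture_measure(cu)
    if t < T_LOW:           # smooth block: keep as one CU
        return False
    if t > T_HIGH:          # highly textured block: split further
        return True
    return cnn_predict(cu)  # ambiguous block: fall back to the trained CNN
```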
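For the second part, a binary classifier is trained offline on CU luma blocks labelled with the encoder's split decisions and then consulted at encoding time. The PyTorch sketch below shows one plausible network for 32x32 luma inputs; the input size, layer dimensions, and training settings are illustrative assumptions, not the exact model described in the thesis.

```python
import torch
import torch.nn as nn

class SplitCNN(nn.Module):
    """Illustrative split/no-split classifier for a 32x32 luma CU."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 32x32 -> 16x16
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 16x16 -> 8x8
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 8 * 8, 64), nn.ReLU(),
            nn.Linear(64, 1),                     # logit for the "split" class
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# Training data: CU blocks labelled by the reference encoder's actual decisions.
model = SplitCNN()
criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def train_step(blocks, labels):
    """blocks: (N, 1, 32, 32) float tensor; labels: (N, 1) in {0, 1}."""
    optimizer.zero_grad()
    loss = criterion(model(blocks), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Once trained, such a model would be queried from within the reference software's CU partition loop, replacing the exhaustive evaluation of split candidates for the blocks it decides.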