Non-uniform geometric distortions across different regions of the equirectangular projection (ERP) of 360-degree videos reduce the tracking accuracy of most existing trackers. In addition, the high frame rate and high spatial resolution of 360-degree videos lead to high computational complexity. Hence, this thesis proposes a two-flow convolutional neural network (CNN) that measures the similarity of two inputs for pedestrian tracking in 360-degree videos. High-speed tracking is achieved because no on-line re-training or updating of the network parameters is required. Hierarchical convolutional features are extracted from both the search window of the current frame and the target template, so that the features carry both spatial and multi-level information, improving tracking accuracy. To cope with the non-uniform geometric distortion that a target undergoes in different regions of the ERP image, the tracker updates the target template according to the similarity between the bounding box predicted by the network and the current target template. To improve the reliability of this similarity measurement, the computation uses only the robust features of the target template. At the training stage, the loss function applies either the L1 loss or the generalized intersection over union (GIoU) loss, depending on the predicted coordinates of the bounding box; the GIoU loss reduces the network's sensitivity to the size of the target. Experimental results show that the proposed scheme tracks more accurately than the SiamFC tracker when the target undergoes small scale changes.
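The template-update rule described above can be sketched as follows. This is a minimal illustration, not the thesis's implementation: the cosine-similarity measure, the pooled feature vectors, the `robust_mask` channel selection, and the `thresh=0.8` threshold are all assumptions introduced here for illustration.

```python
import numpy as np

def template_similarity(template_feat, pred_feat, robust_mask):
    """Cosine similarity restricted to the template's robust channels.

    template_feat, pred_feat: (C,) pooled feature vectors
    robust_mask: boolean (C,) array selecting channels deemed reliable
    (restricting to robust channels mirrors the idea of using only the
    template's robust features for the similarity measurement).
    """
    t = template_feat[robust_mask]
    p = pred_feat[robust_mask]
    denom = np.linalg.norm(t) * np.linalg.norm(p) + 1e-12
    return float(np.dot(t, p) / denom)

def maybe_update_template(template_feat, pred_feat, robust_mask, thresh=0.8):
    # Replace the template only when the predicted box's features are
    # similar enough to the current template; this guards against
    # drift caused by ERP geometric distortion. The threshold is a
    # hypothetical value, not taken from the thesis.
    if template_similarity(template_feat, pred_feat, robust_mask) >= thresh:
        return pred_feat.copy()
    return template_feat
```

A prediction whose robust-channel features diverge from the template (e.g. a heavily distorted detection near the ERP poles) is rejected, and the old template is kept.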
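The GIoU term used in the training loss can be made concrete with a short sketch. This follows the standard GIoU definition (IoU minus the fraction of the smallest enclosing box not covered by the union); the corner-coordinate box format and the `giou_loss` helper are assumptions for illustration, not the thesis's exact formulation.

```python
def giou(box_a, box_b):
    """Generalized IoU between two boxes given as (x1, y1, x2, y2).

    GIoU = IoU - |C \\ (A U B)| / |C|, where C is the smallest box
    enclosing A and B. Unlike plain IoU, GIoU remains informative
    (goes negative) even when the boxes do not overlap, which reduces
    sensitivity to the target's size and location during training.
    """
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b

    # Intersection area
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih

    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    iou = inter / union

    # Smallest enclosing box C
    area_c = (max(ax2, bx2) - min(ax1, bx1)) * (max(ay2, by2) - min(ay1, by1))
    return iou - (area_c - union) / area_c

def giou_loss(box_a, box_b):
    # Loss in [0, 2]: 0 for a perfect match, approaching 2 for
    # distant, non-overlapping boxes.
    return 1.0 - giou(box_a, box_b)
```

For identical boxes the loss is 0; for disjoint boxes the enclosing-box penalty keeps the gradient informative where plain IoU would be flat at zero.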