In deep learning for computer vision, object detection has long been a widely studied and important task, with countless real-world applications that make it an enduring research focus. New models appear constantly, whether based on convolutional neural networks or on Transformer architectures, and both lines continue to develop. However, graph neural networks have seen little use in this area, especially for 2D images, which prompts us to explore their potential for 2D image object detection. Graph neural networks have recently gained attention for their strong ability to represent graph-structured data, enabling them to explore relationships among irregular neighboring nodes. Some previous work has built graph neural networks on top of convolutional neural networks and examined the resulting performance gains, but those studies suffer from inadequate experimental baselines, and their graph neighbors and edges are constructed over a fixed spatial range, which may limit the receptive field and may even reduce the exploration ability of the graph neural network. To address these issues, we propose modular Dynamic Graph Attention Blocks, which introduce deformable convolution to enhance the exploration capability of graph neural networks.
This changes edge construction from fixed to dynamic, allowing the model to learn to find better features for convolution. We integrate the module into state-of-the-art object detectors for our experiments, which show that our method achieves comparable or slightly better performance.
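To make the difference between fixed and dynamic edge construction concrete, the following is a minimal sketch in plain Python. It contrasts a fixed 3×3 spatial neighborhood with a deformable-style scheme in which per-node learned offsets shift each base displacement, so edges can reach beyond the fixed window. All function names, the 3×3 base pattern, and the clamping behavior are illustrative assumptions, not the thesis implementation.

```python
def fixed_neighbors(y, x, h, w):
    """Edges to the fixed 3x3 spatial window around node (y, x),
    dropping positions that fall outside the h-by-w feature map."""
    out = []
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                out.append((ny, nx))
    return out


def dynamic_neighbors(y, x, h, w, offsets):
    """Deformable-style edges: each base displacement is shifted by a
    per-edge (dy, dx) offset (which the model would learn), so the
    receptive field is no longer bound to the fixed 3x3 window.
    Offsets here are integers for simplicity; a real deformable layer
    predicts fractional offsets and interpolates features."""
    base = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
            (0, 1), (1, -1), (1, 0), (1, 1)]
    out = []
    for (by, bx), (oy, ox) in zip(base, offsets):
        ny = min(max(y + by + oy, 0), h - 1)  # clamp to the feature map
        nx = min(max(x + bx + ox, 0), w - 1)
        out.append((ny, nx))
    return out
```

With zero offsets the dynamic scheme reduces to the fixed window; with non-zero offsets (e.g. `offsets = [(2, 0)] * 8` on a 5×5 map), the node at `(2, 2)` gains edges to rows it could never reach with a fixed 3×3 neighborhood, which is the enlarged exploration ability the blocks aim for.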