As the demand for audio-visual media continues to grow, the super-resolution field has become increasingly important. Transformer models in particular have attracted widespread attention in computer vision for their exceptional performance, leading to their growing application in this area. However, we observe that although Transformers can mitigate the problem of limited feature learning through various attention mechanisms, some textures and structures may still be lost during training. To preserve the initial features and structures as fully as possible, we propose a system, named Integrated Attention Transformer (IAT), that integrates a Residual Connection, an Attention Mechanism, and an Upscaling Technique. To confirm the efficacy of IAT, we conducted experiments on five different datasets and compared the results with current state-of-the-art (SOTA) super-resolution models. The results show that the proposed IAT surpasses the current SOTA models.
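The three named components can be illustrated with a minimal NumPy sketch. This is purely an assumption-laden toy, not IAT's actual architecture: the channel-attention gate and the pixel-shuffle upscaler are stand-ins for whichever attention mechanism and upscaling technique the full model uses, and serve only to show how a residual connection keeps the original features alongside the attention output.

```python
import numpy as np

def channel_attention(x):
    """Toy channel attention: sigmoid-gate each channel by its global average (illustrative only)."""
    # x: (C, H, W)
    w = 1.0 / (1.0 + np.exp(-x.mean(axis=(1, 2))))   # per-channel gate in (0, 1)
    return x * w[:, None, None]

def residual_attention_block(x):
    """Residual connection: the input features are added back, so textures the
    attention branch attenuates are not lost from the block's output."""
    return x + channel_attention(x)

def pixel_shuffle(x, r):
    """Standard sub-pixel upscaling: rearrange (C*r^2, H, W) -> (C, H*r, W*r)."""
    c2, h, w = x.shape
    c = c2 // (r * r)
    x = x.reshape(c, r, r, h, w)
    x = x.transpose(0, 3, 1, 4, 2)      # (C, H, r, W, r)
    return x.reshape(c, h * r, w * r)

# Toy pipeline: feature map -> residual attention -> x2 upscale
feat = np.random.rand(4, 8, 8)          # (C*r^2, H, W) with C=1, r=2
out = pixel_shuffle(residual_attention_block(feat), 2)
print(out.shape)                        # -> (1, 16, 16)
```

Because the gate is bounded in (0, 1) and the input is added back, the block's output always retains the full input signal, which is the property the abstract's residual-integration argument relies on.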