Master's/Doctoral Thesis 107522092: Complete Metadata Record

DC field: value (language)
DC.contributor: Department of Computer Science and Information Engineering (zh_TW)
DC.creator: 楊舜丞 (zh_TW)
DC.creator: Shun-Cheng Yang (en_US)
dc.date.accessioned: 2020-07-30T07:39:07Z
dc.date.available: 2020-07-30T07:39:07Z
dc.date.issued: 2020
dc.identifier.uri: http://ir.lib.ncu.edu.tw:444/thesis/view_etd.asp?URN=107522092
dc.contributor.department: Department of Computer Science and Information Engineering (zh_TW)
DC.description: National Central University (zh_TW)
DC.description: National Central University (en_US)
dc.description.abstract (zh_TW): In recent years, with the rise of deep learning, it has delivered excellent performance in fields such as image recognition, recommendation systems, visual applications, natural language processing, and autonomous driving. At the intersection of pattern recognition and visual applications, image editing and image generation techniques in particular have flourished, and image inpainting is an application that combines the two: it repairs or reconstructs the missing region of an image according to the background or surrounding content. Beyond restoring old images, inpainting can also be applied to blink repair, identity obfuscation, and filling in plausible content after removing unwanted or unneeded objects from an image. Taking object removal followed by repair as an example, the task can be split into several stages: first, decompose the distribution and composition of objects in the image; second, delete the specified object, which is the editing stage; finally, generate content for the missing (deleted) region. This thesis therefore proposes an image editing system: from the mouse cursor's coordinates, the pixel values of the clicked object in the label map and instance map are obtained, yielding the region the object covers in the image, which serves as the deletion range. In the generation stage, Deepfill is used as the base network architecture, combined with contextual attention, which extracts background patches adjacent to the hole to estimate the color values of the region to be repaired. However, feeding these estimated colors directly into the image-to-image translation of the pix2pixHD architecture tends to produce broken objects, or even no recognizable object at all, only noise. To solve this problem, we first apply a color-mapping procedure to the estimated color values and integrate the instance segmentation annotations, so that the generated image content is more complete and reasonable.
dc.description.abstract (en_US): In recent years, with the rise of deep learning technology, deep learning has demonstrated excellent performance in image recognition, recommendation systems, visual applications, natural language processing, and autonomous driving. At the intersection of pattern recognition and visual applications, image inpainting has developed as a combination of image editing and image generation: it repairs or reconstructs the missing part of an image according to the background or surrounding content. In practical applications, it not only restores old images but is also used to remove unwanted or unnecessary objects from an image, as well as for blink repair and identity obfuscation. To achieve this goal, image inpainting can be divided into three stages: first, the distribution and composition of objects in an image are disassembled; second, specific objects are deleted, which is called the editing stage; finally, content is generated for the missing or deleted part. In this thesis, we propose an image editing system. The pixel values of an object in the label map and instance map are obtained from the coordinates of the mouse cursor, and the region covered by the object is used as the basis of the deletion range. In the generation stage, Deepfill is used as the base network architecture, combined with contextual attention, which extracts patches from adjacent background areas to estimate the color values of the region to be repaired. However, if these estimated color values are fed directly into the image-to-image translation of the pix2pixHD architecture, broken objects, or even nothing but noise, are easily produced. Therefore, to solve this problem, we apply a color-mapping procedure to the estimated color values and integrate the instance segmentation labels, so that the generated image can be more complete and reasonable.
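The editing stage described in the abstract — mapping a mouse click to the region an object covers via the label map and instance map — can be sketched as follows. This is a minimal illustration with NumPy, not the thesis code; the map layout (H x W integer arrays) and the function name are assumptions:

```python
import numpy as np

def object_mask_from_click(label_map, instance_map, x, y):
    """Return a boolean mask covering the clicked object.

    A pixel belongs to the clicked object when both its semantic
    label and its instance id match the values under the cursor.
    """
    label_id = label_map[y, x]        # semantic class at the click
    instance_id = instance_map[y, x]  # instance id at the click
    return (label_map == label_id) & (instance_map == instance_id)

# Toy 4x4 example: two instances of class 1 on a background of 0.
label_map = np.array([[0, 0, 1, 1],
                      [0, 0, 1, 1],
                      [1, 1, 0, 0],
                      [1, 1, 0, 0]])
instance_map = np.array([[0, 0, 1, 1],
                         [0, 0, 1, 1],
                         [2, 2, 0, 0],
                         [2, 2, 0, 0]])
# Click at (x=2, y=0): selects only the 4 pixels of instance 1,
# even though instance 2 shares the same semantic class.
mask = object_mask_from_click(label_map, instance_map, x=2, y=0)
print(mask.sum())  # 4
```

The instance map is what keeps the deletion range to a single object; using the label map alone would select every object of the same class.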
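The color-mapping procedure mentioned in the abstract — snapping the inpainting network's estimated colors back onto the discrete palette of the semantic labels before feeding pix2pixHD — could look roughly like this. This is a hedged sketch: the palette values and the nearest-color rule are assumptions, not taken from the thesis:

```python
import numpy as np

# Hypothetical palette: one RGB color per semantic label.
PALETTE = np.array([[0, 0, 0],        # label 0: background
                    [128, 64, 128],   # label 1: road
                    [70, 70, 70]])    # label 2: building

def snap_to_palette(estimated_rgb):
    """Map each estimated pixel color to the nearest palette label.

    estimated_rgb: H x W x 3 float array of colors predicted by the
    inpainting network inside the deleted region. Returns an H x W
    integer label map whose entries index into PALETTE.
    """
    # Squared distance from every pixel to every palette color.
    diff = estimated_rgb[:, :, None, :] - PALETTE[None, None, :, :]
    dist = (diff ** 2).sum(axis=-1)  # shape: H x W x num_labels
    return dist.argmin(axis=-1)      # nearest label per pixel

# A noisy 1x2 patch: one near-road and one near-building color.
patch = np.array([[[130.0, 60.0, 120.0], [65.0, 75.0, 68.0]]])
labels = snap_to_palette(patch)
print(labels)  # [[1 2]]
```

Quantizing the estimated colors to valid label values is what turns the noisy Deepfill output into a clean segmentation input, which is the abstract's stated remedy for the broken objects produced by feeding raw colors into pix2pixHD.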
DC.subject: convolutional neural network (zh_TW)
DC.subject: image-to-image translation (zh_TW)
DC.subject: image inpainting (zh_TW)
DC.subject: convolutional neural network (en_US)
DC.subject: image-to-image translation (en_US)
DC.subject: image inpainting (en_US)
DC.title: Image-to-image translation based on contextual attention and semantic segmentation maps for image inpainting (zh_TW)
dc.language.iso: zh-TW
DC.title: Inpainting image with image to image translation based on contextual attention and semantic segmentation map (en_US)
DC.type: Master's/doctoral thesis (zh_TW)
DC.type: thesis (en_US)
DC.publisher: National Central University (en_US)
