

    Please use this permanent URL to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/84090


    Title: Inpainting image with image to image translation based on contextual attention and semantic segmentation map
    Author: Yang, Shun-Cheng (楊舜丞)
    Contributors: Department of Computer Science and Information Engineering
    Keywords: convolutional neural network; image-to-image translation; image inpainting
    Date: 2020-07-30
    Upload time: 2020-09-02 18:03:24 (UTC+8)
    Publisher: National Central University
    Abstract: In recent years, with the rise of deep learning, it has demonstrated excellent performance in image recognition, recommendation systems, visual applications, natural language processing, autonomous driving, and other fields. At the intersection of pattern recognition and visual applications, image editing and image generation techniques have both flourished, and image inpainting is an application that combines the two: it repairs or reconstructs the missing region of an image according to the background or surrounding content. In practice, besides restoring old images, inpainting is also used for tasks such as blink correction, identity obfuscation, and filling in plausible content after removing unwanted or unnecessary objects from an image. Taking object removal followed by repair as an example, the task can be divided into three stages: first, the distribution and composition of the objects in the image are decomposed; second, specific objects are deleted, which is the editing stage; finally, content is generated for the missing, i.e. deleted, region.
    In this thesis, we propose an image editing system. From the coordinates of the mouse cursor, the pixel values of the selected object on the label map and the instance map can be obtained, giving the region covered by the object, which serves as the basis for the deletion range. In the generation stage, Deepfill is used as the base network architecture, combined with a contextual attention module that extracts patches from the neighboring background region to estimate the color values of the region to be repaired. However, if the estimated color values are fed directly into the image-to-image translation of the pix2pixHD architecture, the generated objects tend to be broken or degenerate into mere noise. To solve this problem, we first apply a color mapping procedure to the estimated color values and integrate the instance segmentation labels, so that the generated result is more complete and reasonable.
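    To make the editing stage above concrete, the following minimal sketch (not the author's code) shows how a mouse click could be mapped to an object region through the instance map and how that region could be blanked out to form the input of the inpainting stage. The array names, the synthetic data, and the fill-value convention are assumptions for illustration only.

import numpy as np

def object_mask_from_click(instance_map, click_xy):
    """Return a boolean mask of the object under the clicked pixel.

    instance_map: (H, W) array in which each object instance has a unique id.
    click_xy: (x, y) mouse coordinate in image space.
    """
    x, y = click_xy
    instance_id = instance_map[y, x]      # pixel value identifies the instance
    return instance_map == instance_id    # every pixel belonging to that instance

def remove_object(image, mask, fill_value=255):
    """Blank out the selected object so the generation stage can refill it."""
    holed = image.copy()
    holed[mask] = fill_value              # missing region marked for reconstruction
    return holed

# Toy example with synthetic data (assumed shapes, for illustration only):
# a 4x4 image whose centre pixels belong to instance id 7.
image = np.random.randint(0, 256, size=(4, 4, 3), dtype=np.uint8)
instance_map = np.zeros((4, 4), dtype=np.int32)
instance_map[1:3, 1:3] = 7

mask = object_mask_from_click(instance_map, click_xy=(1, 2))
holed_image = remove_object(image, mask)
# In the proposed system, the holed image and mask would then go to the
# Deepfill-based network with contextual attention, and the color-mapped
# result would be fed to pix2pixHD for image-to-image translation.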
    Appears in Collections: [Graduate Institute of Computer Science and Information Engineering] Theses and Dissertations

    Files in This Item:

    File          Description    Size    Format    Views
    index.html                   0Kb     HTML      96


    All items in NCUIR are protected by copyright, with all rights reserved.

