dc.description.abstract | In recent years, deep learning has demonstrated excellent performance in image recognition, recommendation systems, visual applications, natural language processing, and autonomous driving. At the intersection of pattern recognition and visual applications, image inpainting has developed by combining image editing and image generation. Image inpainting refers to repairing or reconstructing the missing part of an image according to its background or surrounding content. In practical applications, it is used not only to restore old photographs but also to remove unwanted or unnecessary objects from an image, for example to repair a blink or to obscure an identity. To achieve this goal, image inpainting can be divided into three stages: first, the distribution and composition of objects in an image are analyzed; second, specific objects are deleted, which is called the editing stage; finally, content is generated for the missing or deleted regions.
In this paper, we propose an image editing system. The pixel position of an object on the label map and instance map is obtained from the coordinates of the mouse cursor, and the area covered by the object serves as the basis for the deletion range. In the generation stage, DeepFill is used as the basic network architecture and is combined with contextual attention, which extracts patches from adjacent background areas to estimate the color values of the repaired regions. However, when the estimated color values are fed directly into the image-to-image translation of the pix2pixHD architecture, broken objects or considerable noise can easily result. To solve this problem, we apply a color mapping procedure to the estimated color values and integrate instance segmentation and labeling, so that the generated image is more complete and plausible. | en_US |