dc.description.abstract | Deep learning has been applied across many domains and has superseded several traditional techniques in image processing. Virtual try-on is an important sub-domain of this field. In 2D applications, virtual try-on is often used for the online fitting of commercial garments, saving consumers the cost of visiting physical stores. In 3D applications, a 3D human body model is generated and used for fitting, which yields more stable results than 2D methods but requires complex preprocessing such as 3D scanning. In this paper, we propose an instrument-performance virtual try-on system that allows users to change their clothing in instrument performance videos, making it suitable for sharing short entertainment videos with other users without physically changing clothes. The system relies on deep learning-based human body segmentation, with SCHP and DensePose as the main segmentation models, and passes the segmented body and clothing regions to the virtual try-on model. Because body segmentation cannot represent occluded body parts, OpenPose is used as a body and hand skeleton estimator to supplement the missing body information. HR-VITON serves as the main virtual try-on model, combining the body segmentation and body skeleton information to generate plausible fitting results. To improve the model's generalization ability, HR-VITON applies a hole-digging technique to the input images to achieve smoother boundary handling; however, this degrades its ability to restore the instrument. In this study, we adjust the hole-digging algorithm to preserve good instrument restoration, enabling users to create varied renditions of existing performance videos. | en_US |