dc.description.abstract | In recent years, as the impact of deep learning on people's lives has grown, more and more attention has been paid to the development of this field. The task of estimating human hand pose and shape from an RGB image has been a long-standing problem in computer vision. Unlike common hand pose prediction, which only predicts the coordinates of the skeletal joints of the hand, this task recovers the full 3D shape of the hand. It has many applications, such as augmented reality and virtual reality, but it remains very challenging because the hand occupies a relatively small part of the image, and hand movements are flexible and prone to self-occlusion.
In this paper we propose a complete end-to-end network architecture that recovers 3D hand meshes from RGB hand images. Specifically, in the encoder part we use ResNet-50 to extract image features, and, to aid the later regression of model parameters, we produce several 2D feature maps, such as 2D heatmaps and mask images, through additional convolutional layers. In the model parameter regression part, we use fully connected layers to iteratively regress the model parameters. Because hand models generated by the model-based method have some defects, such as looking insufficiently natural, we finally add a hand mesh coordinate correction part: we treat the MANO hand model produced by the regressor as a coarse initial mesh, combine it with features from the earlier network, and feed the result into a graph convolutional network that regresses an offset for each mesh vertex. These offsets are added to the initial hand mesh to obtain the final hand model.
| en_US |
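The coarse-to-fine correction described in the abstract can be illustrated with a toy sketch: a coarse MANO-style mesh (the MANO hand mesh has 778 vertices) is refined by adding per-vertex offsets predicted by a small graph convolutional network over the mesh adjacency. All layer sizes, weights, and the random adjacency below are illustrative assumptions, not the thesis's actual architecture.

```python
import numpy as np

def graph_conv(features, adj, weight):
    """One toy graph-convolution layer: mean-aggregate neighbor
    features over the adjacency, then apply a linear projection
    and a tanh nonlinearity."""
    deg = adj.sum(axis=1, keepdims=True)           # vertex degrees
    agg = (adj @ features) / np.maximum(deg, 1.0)  # mean over neighbors
    return np.tanh(agg @ weight)

def refine_mesh(coarse_mesh, adj, w_hidden, w_out):
    """Predict per-vertex offsets from the coarse mesh and add them
    back, producing the refined mesh (the coarse-plus-offset idea)."""
    hidden = graph_conv(coarse_mesh, adj, w_hidden)   # (V, 16)
    offsets = graph_conv(hidden, adj, w_out)          # (V, 3) offsets
    return coarse_mesh + offsets

rng = np.random.default_rng(0)
V = 778                                   # MANO hand mesh vertex count
coarse = rng.standard_normal((V, 3))      # stand-in for a MANO mesh
adj = (rng.random((V, V)) < 0.01).astype(float)
adj = np.maximum(adj, adj.T)              # symmetric adjacency
np.fill_diagonal(adj, 1.0)                # self-loops keep degrees >= 1
w1 = rng.standard_normal((3, 16)) * 0.1   # hypothetical layer weights
w2 = rng.standard_normal((16, 3)) * 0.1

refined = refine_mesh(coarse, adj, w1, w2)
print(refined.shape)                      # (778, 3)
```

In the real system the offsets would come from trained GCN layers conditioned on image features, but the additive structure (final mesh = initial mesh + predicted offsets) is the same.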