Abstract
Knowledge about individual body shape has numerous applications in domains such as healthcare, fashion, and personalized entertainment. Most depth-based whole-body scanners require multiple cameras surrounding the user and require the user to hold a canonical pose strictly while the depth images are captured. These scanning devices are expensive and need professional knowledge to operate. To make 3D scanning as easy to use and as fast as possible, there is great demand to simplify the process and reduce the hardware requirements. In this paper, we propose a deep learning algorithm, dubbed 3DBodyNet, that rapidly reconstructs the 3D shape of a human body using a single commodity depth camera. As easy to use as taking a photo with a mobile phone, our algorithm needs only two depth images, one of the front-facing body and one of the back-facing body. The proposed algorithm is easy to operate since it is insensitive to the pose and to pose variations between the two depth images. It can also reconstruct an accurate body shape for users wearing either tight or loose clothing. A further advantage of our method is its ability to generate an animatable human body model. Extensive experimental results show that the proposed method enables robust and easy-to-use animatable human body reconstruction, and outperforms state-of-the-art methods in both running time and accuracy.
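The abstract describes a pipeline that takes two depth images from a single commodity depth camera and feeds them to a network operating on point clouds. The paper itself does not give code here; as a minimal sketch of the standard preprocessing step implied by "deep learning on point clouds", the snippet below back-projects one depth image into a 3D point cloud using a pinhole camera model. The function name and the intrinsic parameters (`fx`, `fy`, `cx`, `cy`, `depth_scale`) are illustrative assumptions, not values from the paper or from any specific depth camera.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy, depth_scale=0.001):
    """Back-project a depth image (H x W, raw sensor units) into an (N, 3) point cloud in meters."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))   # pixel grid coordinates
    z = depth.astype(np.float32) * depth_scale       # raw depth units -> meters
    valid = z > 0                                    # drop pixels with missing depth
    x = (u - cx) * z / fx                            # pinhole back-projection
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1)[valid]       # (N, 3) array of 3D points

# Usage with synthetic data and assumed intrinsics; a real system would use the
# camera's calibrated values and repeat this for the front and back depth images.
depth_front = np.random.randint(500, 2000, size=(480, 640)).astype(np.uint16)
cloud_front = depth_to_point_cloud(depth_front, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
print(cloud_front.shape)  # (N, 3)
```

In a method like the one described, the two resulting point clouds (front-facing and back-facing) would then be passed to the learned model, which is what allows the approach to tolerate pose variation between the two captures.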
Original language | English |
---|---|
Pages (from-to) | 2139-2149 |
Number of pages | 11 |
Journal | IEEE Transactions on Multimedia |
Volume | 24 |
DOIs | |
Publication status | Published - 28 Apr 2021 |
Externally published | Yes |
Keywords
- Human body shape
- Body shape under clothing
- Depth camera
- 3D scanning
- Deep learning on point clouds