Clothed Human Performance Capture with a Double-layer Neural Radiance Fields


Kangkan Wang*1,2, Guofeng Zhang3, Suxu Cong1, Jian Yang1,2
1Key Lab of Intelligent Perception and Systems for High-Dimensional Information of Ministry of Education,
2Jiangsu Key Lab of Image and Video Understanding for Social Security, School of Computer Science and Engineering,
Nanjing University of Science and Technology, China

3State Key Laboratory of CAD&CG, Zhejiang University, China

Paper | Code

Abstract


This paper addresses the challenge of capturing the performance of clothed humans from sparse-view or monocular videos. Previous methods either capture the performance of full humans with a personalized template or recover the garments from a single frame with static human poses. However, it is inconvenient to extract cloth semantics and capture clothing motion with a one-piece template, while single-frame-based methods may suffer from unstable tracking across videos. To address these problems, we propose a novel method for human performance capture that tracks clothing and human body motion separately with double-layer neural radiance fields (NeRFs). Specifically, we design double-layer NeRFs for the body and garments, and track the densely deforming templates of the clothing and body by jointly optimizing the deformation fields and the canonical double-layer NeRFs. During the optimization, we introduce a physics-aware cloth simulation network that helps generate physically plausible cloth dynamics and body-cloth interactions. Compared with existing methods, our method is fully differentiable and can robustly capture both body and clothing motion from dynamic videos. Moreover, representing the clothing with an independent NeRF allows us to model implicit fields of general clothes feasibly. Experimental evaluations validate the effectiveness of our method on real multi-view and monocular videos.
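To make the double-layer representation concrete, below is a minimal PyTorch sketch of the core idea: two independent radiance fields, one for the body and one for the garment, queried at the same ray samples and alpha-composited into a single rendering. The network sizes, the sum-of-densities union rule, and all names (LayerNeRF, render_double_layer) are illustrative assumptions, not the paper's implementation; in the full method the sample points would first be warped into canonical space by the tracked deformation fields.

import torch
import torch.nn as nn

class LayerNeRF(nn.Module):
    """One canonical radiance field (body or garment): point -> (density, color)."""
    def __init__(self, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),  # 1 density channel + 3 color channels
        )

    def forward(self, x):
        out = self.mlp(x)
        sigma = torch.relu(out[..., :1])    # non-negative volume density
        rgb = torch.sigmoid(out[..., 1:])   # color in [0, 1]
        return sigma, rgb

def render_double_layer(body_nerf, cloth_nerf, pts, deltas):
    """Alpha-composite the body and garment layers along each ray.

    pts:    (n_rays, n_samples, 3) canonical-space sample points
    deltas: (n_rays, n_samples) distances between consecutive samples
    returns (n_rays, 3) rendered pixel colors
    """
    sigma_b, rgb_b = body_nerf(pts)
    sigma_c, rgb_c = cloth_nerf(pts)
    sigma = sigma_b + sigma_c  # union of the two layers' densities
    # Density-weighted blend of the two colors where the layers overlap.
    rgb = (sigma_b * rgb_b + sigma_c * rgb_c) / (sigma + 1e-8)
    alpha = 1.0 - torch.exp(-sigma.squeeze(-1) * deltas)
    transmittance = torch.cumprod(
        torch.cat([torch.ones_like(alpha[:, :1]), 1.0 - alpha + 1e-10], dim=-1),
        dim=-1)[:, :-1]
    weights = alpha * transmittance
    return (weights.unsqueeze(-1) * rgb).sum(dim=1)

# Usage: render 1024 rays with 64 samples each.
body, cloth = LayerNeRF(), LayerNeRF()
pts = torch.randn(1024, 64, 3)
deltas = torch.full((1024, 64), 0.01)
pixels = render_double_layer(body, cloth, pts, deltas)  # (1024, 3)

Because both layers are queried and composited in one differentiable pass, gradients from the image loss can flow back to the body and garment fields (and, in the full method, to their deformation fields) jointly.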

Results on Different Datasets


S4 from the DeepCap dataset (left); "FranziRed" from the DynaCap dataset (middle); S1 from the DeepCap dataset (right)

Supplementary Video


Application of Cloth Retargeting


Retargeting the clothing between two people

Citation


If you find this code useful for your research, please use the following BibTeX entry.

          
@inproceedings{Wang2023ClothedHumanCap,
  author    = {Wang, Kangkan and Zhang, Guofeng and Cong, Suxu and Yang, Jian},
  title     = {Clothed Human Performance Capture with a Double-layer Neural Radiance Fields},
  booktitle = {Computer Vision and Pattern Recognition (CVPR)},
  year      = {2023},
}