What is PeopleCap?
Accurately tracking, reconstructing, capturing and animating the human body in 3D is critical for human-computer interaction, games, special effects and virtual reality. In the past, this has required extensive manual animation.
Today, research in this area allows us to capture and learn realistic models of people and hands from real measurements coming from scans, depth cameras, color cameras and inertial sensors. Such a model is, ultimately, a compact parameterization of surface geometry that can be deformed to generalize to novel poses and shapes. The model can then be used to track bodies and hands from noisy sensors by optimizing its parameters so as to best fit noisy and incomplete image observations.
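To make the idea concrete, here is a minimal toy sketch of this fitting paradigm. It assumes a purely hypothetical "model": a mean shape plus a linear deformation basis (in the spirit of statistical body models), and recovers the shape coefficients that best explain noisy 3D observations by least squares. All names and dimensions are illustrative, not from any particular system.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear shape model: template geometry plus deformation directions
n_vertices, n_coeffs = 50, 3
mean_shape = rng.normal(size=(n_vertices, 3))             # template vertices
shape_basis = rng.normal(size=(n_coeffs, n_vertices, 3))  # blend-shape basis

def deform(coeffs):
    """Deform the template by a linear combination of basis shapes."""
    return mean_shape + np.tensordot(coeffs, shape_basis, axes=1)

# Simulate noisy observations of a ground-truth shape
true_coeffs = np.array([0.5, -1.2, 0.8])
observed = deform(true_coeffs) + 0.01 * rng.normal(size=(n_vertices, 3))

# Fit: solve min_c || B c - (observed - mean) ||^2 in closed form
A = shape_basis.reshape(n_coeffs, -1).T   # (3 * n_vertices, n_coeffs)
b = (observed - mean_shape).ravel()
estimated, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.round(estimated, 2))
```

Real systems replace this linear, fully observed setup with articulated pose parameters, occlusions, and robust nonlinear optimization, but the principle is the same: search model-parameter space for the configuration that best explains the sensor data.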
The workshop is intended to offer a meeting and discussion platform for researchers with diverse backgrounds, such as computer graphics, computer vision and optimization, and machine learning. This will hopefully push the state-of-the-art in “Capturing and modeling humans” in terms of models, methods and datasets.
The call for papers covers the following areas:
- 3D Human pose and shape estimation from images, depth cameras or inertial sensors
- 3D Hand pose estimation and tracking
- Human body, hand and face modeling
- 3D/4D Performance capture of bodies, faces and hands
- Capture of people and clothing
- Human body and hand models
- Models of human soft-tissue
- Registration of bodies, hands and faces
While the computer vision community has produced a great deal of work on detecting and tracking people in 2D, much less work has focused on reasoning directly in 3D. Hence, PeopleCap places special emphasis on methods that work in 3D and on methods that use a generative model.
Submission
| Event | Date |
| --- | --- |
| Submission Deadline | August 8 |
| Reviews Due | August 16 |
| Notification of Acceptance | August 18 |
| Camera-Ready Submission | August 23 |
| Workshop | October 23 |
- All deadlines are at 5 PM Pacific Time.
- Paper submissions should follow the same guidelines as ICCV: 6-8 pages plus references; the LaTeX template can be downloaded from here.
- Submissions can be uploaded to the CMT: https://cmt3.research.microsoft.com/PEOPLECAP2017
- If you do not have one already, create a cmt3 account and log in.
- If you are not directed to the PeopleCap submission page, type PeopleCap in the search box to find it.
- Create a new submission and upload the main paper (and supplementary material if any).
Invited Speakers
Program
| Time | Session |
| --- | --- |
| 13:30 | Welcome and introduction |
| 13:40 | Learning Digital Humans by Capturing Real Ones (Michael Black) |
| 14:20 | 4D Modeling at INRIA (Edmond Boyer) |
| 15:00 | VarPro, Lifting, and all that (Andrew Fitzgibbon) |
| 15:40 | Poster session and coffee break |
| 16:40 | Capturing Reality Using Real-Time Optimization (Michael Zollhöfer) |
| 17:10 | Sphere-Meshes for Real-Time Hand Modeling and Tracking (Anastasia Tkach and Andrea Tagliasacchi) |
| 17:40 | Closing remarks and best paper announcement |
Organizers
Gerard Pons-Moll, Research Group Leader, MPI for Informatics
Jonathan Taylor, Senior Scientist, perceptiveIO
Papers
Realtime Dynamic 3D Facial Reconstruction for Monocular Video In-the-Wild
Shuang Liu*; Zhao Wang; Xiaosong Yang; Jian.J Zhang
Symmetry-factored Statistical Modelling of Craniofacial Shape
Hang Dai*; William Smith; Nick Pears; Christian Duncan
4D Model-based Spatiotemporal Alignment of Scripted Taiji Quan Sequences
Jesse Scott*; Robert Collins; Christopher Funk; Yanxi Liu
Generating Multiple Diverse Hypotheses for Human 3D Pose Consistent with 2D Joint Detections
Ehsan Jahangiri*; Alan Yuille
Efficient Separation between Projected Patterns for Multiple Projector 3D People Scanning
Tomislav Petkovic*; Tomislav Pribanic; Matea Donlic; Peter Sturm
A Biophysical 3D Morphable Model of Face Appearance
Sarah Alotaibi; William Smith*
Towards Implicit Correspondence in Signed Distance Field Evolution
Miroslava Slavcheva*; Maximilian Baust; Slobodan Ilic (PeopleCap Best Paper Award)