PeopleCap 2017

ICCV 2017 Workshop
October 23, 13:30

What is PeopleCap?


Accurately tracking, reconstructing, capturing and animating the human body in 3D is critical for human-computer interaction, games, special effects and virtual reality. In the past, this has required extensive manual animation.
 
Nowadays, research in this area allows us to capture and learn realistic models of people and hands from real measurements coming from scans, depth cameras, color cameras and inertial sensors. Such a model is, ultimately, a compact parameterization of surface geometry that can be deformed to generalize to novel poses and shapes. It can then be used to track bodies and hands by optimizing the model parameters so that they best fit noisy and incomplete sensor observations.
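To make this fitting idea concrete, here is a minimal illustrative sketch (not taken from any workshop paper): it fits the parameters of a toy parametric "model" to noisy, incomplete 3D point observations with a robust least-squares objective. The toy_model function and its three parameters are hypothetical stand-ins for a real learned body or hand model.

```python
# Illustrative sketch only: fitting a parametric surface model to noisy,
# incomplete 3D observations by optimizing its parameters, in the spirit of
# the model-based tracking described above. "toy_model" is a hypothetical
# stand-in for a real learned body/hand model (parameters -> surface points).
import numpy as np
from scipy.optimize import least_squares

def toy_model(params):
    # Toy "shape/pose" parameterization: a circle with a radius and a 2D center.
    radius, cx, cy = params
    angles = np.linspace(0.0, 2.0 * np.pi, 50, endpoint=False)
    return np.stack([cx + radius * np.cos(angles),
                     cy + radius * np.sin(angles),
                     np.zeros_like(angles)], axis=1)

def residuals(params, observations):
    # Distance from every observed point to its nearest point on the model.
    verts = toy_model(params)
    dists = np.linalg.norm(observations[:, None, :] - verts[None, :, :], axis=2)
    return dists.min(axis=1)

# Synthetic, partial and noisy observations of a circle (radius 1.2, center (0.3, -0.1)).
rng = np.random.default_rng(0)
t = rng.uniform(0.0, np.pi, 30)  # only half of the shape is observed
observations = np.stack([0.3 + 1.2 * np.cos(t),
                         -0.1 + 1.2 * np.sin(t),
                         np.zeros_like(t)], axis=1)
observations += 0.02 * rng.standard_normal(observations.shape)

# Robust non-linear least squares: optimize the parameters to best explain the data.
fit = least_squares(residuals, x0=[1.0, 0.0, 0.0], args=(observations,), loss="soft_l1")
print("estimated (radius, cx, cy):", fit.x)
```

In practice the model would be a learned statistical body or hand model, the residuals would combine several data terms and priors, and the optimization would exploit analytic derivatives, but the structure of the fit is the same.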

The workshop is intended to offer a meeting and discussion platform for researchers with diverse backgrounds, such as computer graphics, computer vision, optimization, and machine learning. This will hopefully push the state of the art in capturing and modeling humans in terms of models, methods and datasets.
 
The call for papers covers the following areas:
  • 3D Human pose and shape estimation from images, depth cameras or inertial sensors
  • 3D Hand pose estimation and tracking
  • Human body, hand and face modeling
  • 3D/4D Performance capture of bodies, faces and hands
  • Capture of people and clothing
  • Human body and hand models
  • Models of human soft-tissue
  • Registration of bodies, hands and faces
 
While the computer vision community has seen a lot of work on methods for detecting and tracking people in 2D, much less work has focused on reasoning directly in 3D. Hence, in PeopleCap, special emphasis will be given to methods that work in 3D and to methods that use a generative model.

 

Submission

Submission Deadline: August 8
Reviews Due: August 16
Notification of Acceptance: August 18
Camera-Ready Submission: August 23
Workshop: October 23
- All deadlines are at 5 PM Pacific Time.
- Paper submissions should follow the same guidelines as ICCV: 6-8 pages plus references. The LaTeX template can be downloaded from here.
- Submissions can be uploaded to CMT: https://cmt3.research.microsoft.com/PEOPLECAP2017
  1. If you do not have a CMT3 account already, create one and log in.
  2. If you are not directed to the PeopleCap submission page, type PeopleCap in the search box to find it.
  3. Create a new submission and upload the main paper (and supplementary material, if any).
- Accepted papers will be published in the proceedings of ICCV workshops. 
 

Invited Speakers

Michael Black, Max Planck Institute for Intelligent Systems, Tübingen
Shahram Izadi, perceptiveIO (had to cancel trip to ICCV)
Anastasia Tkach, EPFL
Edmond Boyer, INRIA
Andrea Tagliasacchi, University of Victoria
Michael Zollhöfer, MPI for Informatics
Andrew Fitzgibbon, Microsoft
Christian Theobalt, MPI for Informatics, Saarbrücken (had to cancel trip to ICCV)
 

Program

13:30  Welcome and introduction
13:40  Learning Digital Humans by Capturing Real Ones (Michael Black)
14:20  4D Modeling at INRIA (Edmond Boyer)
15:00  VarPro, Lifting, and all that (Andrew Fitzgibbon)
15:40  Poster session and coffee break
16:40  Capturing Reality Using Real-Time Optimization (Michael Zollhöfer)
17:10  Sphere-Meshes for Real-Time Hand Modeling and Tracking (Anastasia Tkach and Andrea Tagliasacchi)
17:40  Closing remarks and best paper announcement
 

Organizers

Gerard Pons-Moll
Research Group Leader, MPI for Informatics

Jonathan Taylor
Senior Scientist, perceptiveIO

 

Papers


Realtime Dynamic 3D Facial Reconstruction for Monocular Video In-the-Wild
Shuang Liu*; Zhao Wang; Xiaosong Yang; Jian J. Zhang
Symmetry-factored Statistical Modelling of Craniofacial Shape
Hang Dai*; William Smith; Nick Pears; Christian Duncan
4D Model-based Spatiotemporal Alignment of Scripted Taiji Quan Sequences
Jesse Scott*; Robert Collins; Christopher Funk; Yanxi Liu
Generating Multiple Diverse Hypotheses for Human 3D Pose Consistent with 2D Joint Detections
Ehsan Jahangiri*; Alan Yuille
Efficient Separation between Projected Patterns for Multiple Projector 3D People Scanning
Tomislav Petkovic*; Tomislav Pribanic; Matea Donlic; Peter Sturm
A Biophysical 3D Morphable Model of Face Appearance
Sarah Alotaibi; William Smith*
Towards Implicit Correspondence in Signed Distance Field Evolution
Miroslava Slavcheva*; Maximilian Baust; Slobodan Ilic (PeopleCap Best Paper Award)

Gallery

Mira Slavcheva receiving the best paper award for the paper "Towards Implicit Correspondence in Signed Distance Field Evolution".
Left to right: Jonathan Taylor, Mira Slavcheva, Gerard Pons-Moll