Human4d

Workshop2021

Program - "Human4D" Scientific Day

1st day: 5 May 2021

Recorded presentations: bbb

Time Speaker Affiliation Title & short abstract
9h00-9h40 Edmond Boyer INRIA Grenoble Data Driven 3D Modeling from Images. In this presentation I will discuss recent trends in the field of 3D modeling from images with data-driven strategies. In particular, neural implicit representations have emerged over the past few years as a popular tool to model 3D shapes. I will discuss the strengths and weaknesses of such representations through recent works that follow this research direction.
9h45-10h25 Emery Pierson CRIStAL - Université de Lille Projection-based Classification of Surfaces for 3D Human Mesh Sequence Retrieval. We analyze human poses and motion by introducing three sequences of easily calculated surface descriptors that are invariant under reparametrizations and Euclidean transformations. These descriptors are obtained by associating to each finitely-triangulated surface two functions on the unit sphere: for each unit vector $u$ we compute the weighted area of the projection of the surface onto the plane orthogonal to $u$ and the length of its projection onto the line spanned by $u$. The $L_2$ norms and inner products of the projections of these functions onto the space of spherical harmonics of order $k$ provide us with three sequences of Euclidean and reparametrization invariants of the surface. The use of these invariants reduces the comparison of 4D surfaces to the comparison of polygonal curves in $\mathbb{R}^n$. The experimental results on the artificial datasets are promising. Moreover, a slight modification of our method yields good results on the noisy CVSSP3D real dataset.
10h30-11h10 Hyewon Seo ICube - Université de Strasbourg Dynamic skin deformation prediction by recurrent neural network. We present a learning-based method for dynamic skin deformation. At the core of our work is a recurrent neural network that learns to predict the nonlinear, dynamics-dependent shape change over time from pre-existing mesh deformation sequence data. Our network also learns to predict the variation of skin dynamics across different individuals with varying body shapes. After training, the network delivers realistic, high-quality skin dynamics that are specific to a person, at real-time rates. Our results significantly reduce computational time while maintaining prediction quality comparable to the state of the art.
11h10-11h30 All Discussion
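As a rough illustration of the two per-direction functions in Emery Pierson's abstract above, here is a minimal numpy sketch. It is an assumption-laden reading, not the authors' code: the "weighted area" is interpreted as an area-weighted sum of $|n \cdot u|$ over triangles, and the subsequent spherical-harmonic projection step is omitted.

```python
import numpy as np

def projection_descriptors(vertices, faces, directions):
    """For each unit vector u (rows of `directions`), compute:
    - the area of the surface's projection onto the plane orthogonal to u,
      weighted per triangle by |n . u| (n = unit triangle normal), and
    - the length of the surface's projection onto the line spanned by u
      (the extent of the vertices along u).
    NOTE: this is one plausible reading of the abstract, not reference code."""
    v0 = vertices[faces[:, 0]]
    v1 = vertices[faces[:, 1]]
    v2 = vertices[faces[:, 2]]
    cross = np.cross(v1 - v0, v2 - v0)            # per-face normal, length 2*area
    areas = 0.5 * np.linalg.norm(cross, axis=1)   # triangle areas
    normals = cross / (2.0 * areas)[:, None]      # unit normals
    # area-weighted |n . u| summed over triangles, one value per direction
    proj_area = areas @ np.abs(normals @ directions.T)
    # extent of the vertex cloud along each direction
    coords = vertices @ directions.T
    proj_length = coords.max(axis=0) - coords.min(axis=0)
    return proj_area, proj_length
```

Sampling these two functions over many unit directions and projecting onto spherical harmonics (e.g. with SciPy) would then yield the invariant sequences described in the abstract.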
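The recurrent architecture in Hyewon Seo's abstract can be pictured with a toy numpy GRU that rolls over a motion sequence and maps its hidden state to per-vertex offsets. All dimensions, the choice of GRU, and the linear readout are illustrative assumptions here (untrained random weights), not the presented model.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

class GRUCell:
    """Minimal GRU cell in numpy (untrained, random weights)."""
    def __init__(self, in_dim, hid_dim):
        s = 1.0 / np.sqrt(hid_dim)
        shape = (hid_dim, in_dim + hid_dim)
        self.Wz, self.Wr, self.Wh = (rng.uniform(-s, s, shape) for _ in range(3))

    def step(self, x, h):
        z = sigmoid(self.Wz @ np.concatenate([x, h]))        # update gate
        r = sigmoid(self.Wr @ np.concatenate([x, h]))        # reset gate
        h_cand = np.tanh(self.Wh @ np.concatenate([x, r * h]))
        return (1.0 - z) * h + z * h_cand

# Roll the cell over a motion sequence; a linear readout maps the hidden
# state to per-vertex 3D offsets added on top of static skinning.
n_verts, in_dim, hid_dim, n_frames = 100, 32, 64, 25
cell = GRUCell(in_dim, hid_dim)
readout = rng.normal(0.0, 0.01, (n_verts * 3, hid_dim))

h = np.zeros(hid_dim)
offsets = []
for _ in range(n_frames):
    pose_features = rng.normal(size=in_dim)  # stand-in for per-frame pose/shape input
    h = cell.step(pose_features, h)
    offsets.append((readout @ h).reshape(n_verts, 3))
offsets = np.stack(offsets)                  # (n_frames, n_verts, 3)
```

Because each frame costs only a few small matrix products, such a recurrent predictor can run at interactive rates once trained, which is the speed argument made in the abstract.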

2nd day: 10 May 2021

Recorded presentations: zoom cloud (Passcode: jB2.yp0M)

Time Speaker Affiliation Title & short abstract
14h00-14h40 Clément Lemeunier LIRIS - Université de Lyon 3D to 4D spectral mesh generation via latent space representation. We aim to generate surfaces unseen during training by constructing a latent space that preserves spatial metrics. Following a VAE-like architecture, the goal is to feed the neural network with the spectra of the training shapes, thus working only in the spectral domain. The challenge is to teach this latent space to preserve intrinsic information while interpolating. This latent space should also encode extrinsic and intrinsic information separately, making it possible to interpolate either the shape or the pose between two meshes. While the first step is making this VAE work in 3D, the final objective is to work on mesh sequences, in order to generate new motions unseen during training.
14h45-15h25 Pierre Galmiche ICube - Université de Strasbourg Functional Maps for 3D shape analysis and modeling. We will give an introduction to the functional map framework and present our ongoing work on 3D shape analysis and modeling.
15h25-16h10 Naima Otberdout CRIStAL - Université de Lille 3D to 4D Facial Expressions Generation Guided by Landmarks. We propose a novel solution to the following question: given one input 3D neutral face, can we generate dynamic 3D (4D) facial expressions from it? To tackle this problem, we first propose a mesh encoder-decoder architecture (Expr-ED) that exploits a set of 3D landmarks to generate an expressive 3D face from its neutral counterpart. Then, we extend it to 4D by modeling the temporal dynamics of facial expressions using a manifold-valued GAN capable of generating a sequence of 3D landmarks from an expression label (Motion3DGAN). The generated landmarks are fed into the mesh encoder-decoder, ultimately producing a sequence of 3D expressive faces. By decoupling the two steps, we separately address the non-linearity induced by the mesh deformation and the motion dynamics. The experimental results on the CoMA dataset show that our landmark-guided mesh encoder-decoder brings a significant improvement over other landmark-based 3D fitting approaches, and that we can generate high-quality dynamic facial expressions.
16h10-16h30 All Discussion
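The "spectrum" that Clément Lemeunier's abstract proposes to feed the network can be illustrated with a dependency-free sketch: eigenpairs of a mesh Laplacian, plus the spectral coefficients of a vertex function. The combinatorial graph Laplacian and the toy 4-cycle "mesh" are simplifying assumptions (real pipelines typically use the cotangent Laplacian).

```python
import numpy as np

def laplacian_spectrum(n_vertices, edges, k):
    """k smallest eigenpairs of the combinatorial graph Laplacian.
    (A stand-in for the cotangent Laplacian used in practice.)"""
    L = np.zeros((n_vertices, n_vertices))
    for i, j in edges:
        L[i, j] -= 1.0
        L[j, i] -= 1.0
        L[i, i] += 1.0
        L[j, j] += 1.0
    evals, evecs = np.linalg.eigh(L)   # symmetric => real, sorted eigenvalues
    return evals[:k], evecs[:, :k]

# Spectral coefficients of a vertex function f: the kind of input a
# spectral-domain VAE would consume instead of raw vertex coordinates.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]   # a 4-cycle as a toy "mesh"
evals, basis = laplacian_spectrum(4, edges, k=3)
f = np.array([0.0, 1.0, 0.0, -1.0])        # e.g. one coordinate channel
coeffs = basis.T @ f                       # truncated spectral representation
```

Working with `coeffs` in place of vertex positions is what "working only in the spectral domain" amounts to: a compact, smoothness-ordered encoding of functions on the mesh.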
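For Pierre Galmiche's introduction to functional maps, the core computation can be sketched in a few lines of numpy: given truncated Laplacian eigenbases on two shapes and corresponding descriptor functions, the functional map is the least-squares matrix aligning their spectral coefficients. The toy identity-basis demo below is an illustrative assumption, not the speaker's code.

```python
import numpy as np

def functional_map(Phi_src, Phi_tgt, F_src, F_tgt):
    """Least-squares functional map C such that C @ A ~= B, where A and B
    are spectral coefficients of corresponding descriptor functions on the
    source and target shapes (columns of F_src / F_tgt)."""
    A = np.linalg.pinv(Phi_src) @ F_src   # (k, d) source coefficients
    B = np.linalg.pinv(Phi_tgt) @ F_tgt   # (k, d) target coefficients
    # min_C ||C A - B||_F  <=>  solve A^T C^T = B^T by least squares
    Ct, *_ = np.linalg.lstsq(A.T, B.T, rcond=None)
    return Ct.T

# Sanity check: identical shapes and descriptors should give C ~= identity.
rng = np.random.default_rng(0)
Phi = np.eye(6)[:, :3]                    # toy orthonormal basis, k = 3
F = rng.normal(size=(6, 5))               # 5 descriptor functions
C = functional_map(Phi, Phi, F, F)
```

In practice the basis would be cotangent-Laplacian eigenfunctions and the descriptors HKS/WKS-style features, with extra regularizers on C; the least-squares core stays the same.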
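The two-stage decoupling in Naima Otberdout's abstract (landmark motion generation, then per-frame mesh decoding) can be sketched with linear stand-ins. Every component here is a placeholder: single random linear layers stand in for Motion3DGAN and Expr-ED, and all dimensions are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

L, T, V = 68, 10, 500   # landmarks, frames, mesh vertices (all illustrative)

# Stage 1 (stand-in for Motion3DGAN): map an expression label + noise to a
# sequence of 3D landmark configurations. One linear layer replaces the
# manifold-valued generator purely for shape bookkeeping.
W_gen = rng.normal(0.0, 0.01, (T * L * 3, 8 + 16))

def generate_landmark_sequence(label_onehot, noise):
    z = np.concatenate([label_onehot, noise])
    return (W_gen @ z).reshape(T, L, 3)

# Stage 2 (stand-in for Expr-ED): per frame, decode the landmarks into
# per-vertex offsets added to the neutral face.
W_dec = rng.normal(0.0, 0.01, (V * 3, L * 3))

def decode_frame(neutral_verts, landmarks):
    offsets = (W_dec @ landmarks.ravel()).reshape(V, 3)
    return neutral_verts + offsets

label = np.eye(8)[2]                      # one of 8 expression labels
noise = rng.normal(size=16)
landmark_seq = generate_landmark_sequence(label, noise)
neutral = rng.normal(size=(V, 3))
meshes = np.stack([decode_frame(neutral, lm) for lm in landmark_seq])
```

The point of the sketch is the interface, not the models: motion lives entirely in the low-dimensional landmark sequence, and the mesh decoder is applied independently per frame, which is exactly the decoupling the abstract argues for.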