Learning Generative Models of Shape Handles

Matheus Gadelha, Giorgio Gori, Duygu Ceylan, Radomir Mech, Nathan Carr, Tamy Boubekeur, Rui Wang and Subhransu Maji
IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2020

Figure: Example of latent space interpolation using our model.

Abstract

We present a generative model to synthesize 3D shapes as sets of handles — lightweight proxies that approximate the original 3D shape — for applications in interactive editing, shape parsing, and building compact 3D representations. Our model can generate handle sets with varying cardinality and different types of handles. Key to our approach is a deep architecture that predicts both the parameters and the existence of shape handles, and a novel similarity measure that can easily accommodate different types of handles, such as cuboids or sphere-meshes. We leverage recent advances in semantic 3D annotation as well as automatic shape summarization techniques to supervise our approach. We show that the resulting shape representations are intuitive and achieve quality superior to previous state-of-the-art methods. Finally, we demonstrate how our method can be used in applications such as interactive shape editing, completion, and interpolation, leveraging the latent space learned by our model to guide these tasks.
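The abstract mentions a decoder that predicts, for each handle, both its parameters and whether it exists, which is how the model produces handle sets of varying cardinality. The snippet below is a minimal, hypothetical PyTorch sketch of such a prediction head; the layer sizes, the maximum handle count, the 10-dimensional cuboid parameterization (center, scale, rotation quaternion), and all names are assumptions for illustration, not the authors' released architecture.

import torch
import torch.nn as nn

class HandleDecoder(nn.Module):
    # Maps a latent code to a fixed-size set of candidate handles, each with
    # a parameter vector and an existence probability. Dimensions are
    # hypothetical: 10 parameters per cuboid handle (center 3, scale 3,
    # rotation quaternion 4).
    def __init__(self, latent_dim=128, num_handles=16, handle_dim=10):
        super().__init__()
        self.num_handles = num_handles
        self.handle_dim = handle_dim
        self.mlp = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            # one extra channel per handle for the existence logit
            nn.Linear(256, num_handles * (handle_dim + 1)),
        )

    def forward(self, z):
        out = self.mlp(z).view(-1, self.num_handles, self.handle_dim + 1)
        params = out[..., :self.handle_dim]       # per-handle parameters
        existence = torch.sigmoid(out[..., -1])   # probability each handle is used
        return params, existence

# Usage sketch: decode a random latent code and keep only predicted handles.
decoder = HandleDecoder()
z = torch.randn(1, 128)
params, existence = decoder(z)
handles = params[existence > 0.5]  # variable-cardinality handle set

Thresholding the existence probabilities is what lets a fixed-size output represent shapes with different numbers of handles, as described in the abstract.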

Bibtex

@inproceedings{SMS:2020:LLTJBO,
  title     = "Learning Generative Models of Shape Handles",
  author    = "Matheus Gadelha and Giorgio Gori and Duygu Ceylan and Radomir Mech and Nathan Carr and Tamy Boubekeur and Rui Wang and Subhransu Maji",
  year      = "2020",
  booktitle = "IEEE Conference on Computer Vision and Pattern Recognition (CVPR)",
  pages     = "399--408",
}