Interactive Shape and Appearance Editing of Large 3D Models

Methods for modeling and texturing very large 3D models.

Introduction


Large 3D models with complex geometry are mandatory for realistic image synthesis, scientific studies and engineering. While 3D acquisition technologies make it possible to quickly and accurately capture complex geometries from real-world objects, they produce such a large amount of data that conventional interactive editing tools, such as Maya or 3D Studio Max, fail to handle them. In fact, most of the time these models cannot be fully loaded in memory and rely on out-of-core and streaming computation techniques, even for the very simple purpose of visualization. Obviously, offering the possibility not only to observe but also to modify the data is a non-trivial challenge.

We have developed several research projects to address this problem, focusing on the two main interactions usually provided by interactive 3D editing tools: appearance (colorization) and shape (deformation) editing.

A Sampling Reconstruction Framework

The main idea behind our approach to size-insensitive interactive editing can be stated as follows:
Editing a subsampling provides a subsampling of the editing.
In other words, when modifying a subset of the data, we can derive from the two versions (original and modified) of this subset a global function capturing the "essence" of the modification, in our case a colorization or a deformation. This function can then be used in a streaming process to transmit a modification undergone by a simplified model (fitting in memory and small enough to preserve interactivity) to its original, large version. This global framework is called the Sampling-Reconstruction Framework (SRF) and is illustrated in the following figure:

Sampling-Reconstruction Framework

We use a point-sampled representation internally so as not to depend on the quality of the input mesh.
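
To make this concrete, here is a minimal sketch in Python of how an edit captured on a small in-core sample can be replayed over the full out-of-core model. The function names and the nearest-sample transfer rule are illustrative assumptions, not the actual SRF implementation.

    # Illustrative sketch only: the edit performed on the in-core sample is
    # captured as per-sample displacements and replayed, chunk by chunk, on
    # the out-of-core model (a deliberately naive reconstruction rule).
    import numpy as np
    from scipy.spatial import cKDTree

    def build_edit_function(sample_before, sample_after):
        """Capture the edit undergone by the in-core sample."""
        tree = cKDTree(sample_before)
        deltas = sample_after - sample_before          # what the user changed
        def apply_edit(chunk):
            # transfer to each full-resolution point the displacement of its
            # nearest sample in the original (pre-edit) sampling
            _, idx = tree.query(chunk)
            return chunk + deltas[idx]
        return apply_edit

    def stream_edit(read_chunks, write_chunk, apply_edit):
        """Streaming pass: the full model never resides in memory at once."""
        for chunk in read_chunks():                    # e.g. blocks of vertices
            write_chunk(apply_edit(chunk))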

Out-of-Core Sampling

We combine two techniques for performing the initial simplification.

This combined algorithm is, in some sense, fully adaptive to the data stream, the topology and the geometry:

Out-Of-Core Sampling

As a result, an adaptive sampling is produced using various error metrics, such as L2 or L2,1, and is then used for interactive editing. Since we seek a generic solution to the problem of size-insensitive editing, the SRF is not aware of any particular feature of the actual editing tool; in our tests, we used Blender and PointShop3D.
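
For illustration, the following sketch gives the flavor of a streaming, grid-based adaptive sampler; the cell size, the per-cell accumulation and the L2,1-style flatness test are stand-ins chosen for readability, not the exact error-driven procedure used by the SRF.

    # Illustrative sketch: points are streamed once, accumulated per grid
    # cell, and each cell yields one sample plus a flatness flag that an
    # adaptive scheme would use to decide whether to refine the cell.
    import numpy as np

    def stream_adaptive_sample(read_chunks, cell=0.01, normal_dev=0.1):
        acc = {}                                       # cell index -> [sum_p, sum_n, count]
        for pts, nrm in read_chunks():                 # positions and unit normals
            keys = np.floor(pts / cell).astype(np.int64)
            for k, p, n in zip(map(tuple, keys), pts, nrm):
                s = acc.setdefault(k, [np.zeros(3), np.zeros(3), 0])
                s[0] += p; s[1] += n; s[2] += 1
        samples = []
        for sum_p, sum_n, cnt in acc.values():
            mean_n = sum_n / cnt
            # L2,1-like test: a short averaged normal means strong normal
            # variation inside the cell, which would trigger refinement
            flat = np.linalg.norm(mean_n) > 1.0 - normal_dev
            samples.append((sum_p / cnt, mean_n, flat))
        return samples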


Interactive Texturing

In the general SRF setting, texturing a large model is cast as a streaming colorization: the simplified model, once textured, is itself used as a 3D texture, a Point-Sampled Texture, which is applied to the large version during the post-streaming reconstruction.

interactive out-of-core texturing

The Point-Sampled Texture defined by the low-resolution in-core model can be used with several rendering techniques, including ray tracing (Vase-Lion picture on the right).
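
As an illustration, the sketch below shows one possible way to reconstruct a Point-Sampled Texture on the full-resolution points: a normal-weighted blend of the k nearest colored samples. The function name, the kernel and its parameters (k, sigma) are assumptions, not the exact reconstruction operator.

    # Illustrative sketch: colors painted on the low-resolution in-core
    # samples are transferred to full-resolution points by blending the
    # k nearest samples, weighted by distance and by normal agreement.
    import numpy as np
    from scipy.spatial import cKDTree

    def make_pst_colorizer(sample_pos, sample_nrm, sample_rgb, k=4, sigma=0.02):
        tree = cKDTree(sample_pos)
        def colorize(chunk_pos, chunk_nrm):
            d, idx = tree.query(chunk_pos, k=k)
            w = np.exp(-(d / sigma) ** 2)                              # spatial falloff
            agree = np.einsum('ij,ikj->ik', chunk_nrm, sample_nrm[idx])
            w *= agree.clip(0.0)                                       # reject back-facing samples
            w /= w.sum(axis=1, keepdims=True) + 1e-12
            return np.einsum('ik,ikc->ic', w, sample_rgb[idx])         # blended colors
        return colorize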

Interactive Freeform Deformation

We derive an aggressive local encoding of deformations for quickly streaming them from the simplified model to the original one:

local encoding

This encoding provides a rotation-invariant reconstruction of the deformation on the original large model and is fast enough to process hundreds of millions of polygons in a few minutes.
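
The sketch below conveys the principle of such a rotation-invariant encoding with a single nearest-sample local frame; the actual encoding is more elaborate, so the frame construction and the one-sample binding here are simplifying assumptions.

    # Illustrative sketch: each full-resolution vertex is stored in the local
    # frame of its nearest in-core sample, then re-expressed in the deformed
    # frame once the simplified model has been edited.
    import numpy as np
    from scipy.spatial import cKDTree

    def frame(n):
        """Orthonormal frame (t, b, n) built from a unit normal."""
        a = np.array([1.0, 0.0, 0.0]) if abs(n[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
        t = np.cross(n, a); t /= np.linalg.norm(t)
        return np.stack([t, np.cross(n, t), n])        # 3x3 matrix, rows are axes

    def encode(vertices, sample_pos, sample_nrm):
        tree = cKDTree(sample_pos)
        _, idx = tree.query(vertices)
        local = [frame(sample_nrm[i]) @ (v - sample_pos[i]) for i, v in zip(idx, vertices)]
        return idx, np.array(local)

    def decode(idx, local, deformed_pos, deformed_nrm):
        # replay the stored local coordinates in the deformed frames
        return np.array([deformed_pos[i] + frame(deformed_nrm[i]).T @ c
                         for i, c in zip(idx, local)])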

interactive out-of-core deformation

Here is another example with a model made of 28 million polygons.

interactive out-of-core deformation

Multi-scale Out-of-Core Editing

The SRF can also be used for multi-scale editing: the user starts with a very small sampling, edits it, and then uses the SRF out-of-core sampling machinery to refine selected parts, applying to the new samples the modifications performed so far at lower resolution. Finer editing (of either appearance or shape) can then be performed. In some sense, this reproduces the usual workflow of subdivision modeling packages.

interactive multiscale out-of-core deformation
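
The sketch below outlines this loop with illustrative callbacks (select_region, refine_region, interactive_edit and build_edit_function are assumptions standing in for the actual SRF machinery): the edit functions captured at coarser scales are replayed on newly refined samples before finer editing resumes.

    # Illustrative sketch of the multi-scale workflow: edit the coarse
    # sampling, capture the edit, refine a selected region out-of-core,
    # bring the new samples up to date, then continue at the finer scale.
    import numpy as np

    def multiscale_session(sampling, n_passes, select_region, refine_region,
                           interactive_edit, build_edit_function):
        edits = []                                         # edit functions captured so far
        for _ in range(n_passes):
            before = sampling.copy()
            sampling = interactive_edit(sampling)          # in-core editing session
            edits.append(build_edit_function(before, sampling))
            new_samples = refine_region(select_region())   # out-of-core adaptive refinement
            for apply_edit in edits:                       # replay past edits on new samples
                new_samples = apply_edit(new_samples)
            sampling = np.vstack([sampling, new_samples])  # proceed at the finer scale
        return sampling, edits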

Rapid Visualization

The edited large models can be used with offline rendering packages such as Mental Ray or RenderMan, or pre-processed and rendered interactively with static out-of-core rendering systems (see the recent tutorial by Dietrich et al.). However, when a quick preview is needed, we have developed a fast appearance-preserving conversion that provides high-quality normal-mapped rendering in real time after only a few seconds to a few minutes of preprocessing.

interactive multiscale out-of-core deformation

Here, the Atlas model (500 000 000 polygons, Digital Michelangelo Project). Our algorithm performs a mesh-less out-of-core conversion, resulting in a reduced set of polygons equipped with high-resolution normal maps. This conversion is done in streaming, without any full-resolution surface reconstruction or global parameterization. The mesh and normal-map resolutions are specified by the user: in this example, generated in about 20 minutes, the final mesh has roughly 40K triangles, while the normal map takes 32 MB of graphics-board memory (no compression). The algorithm works for both meshes and point clouds.
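
To give an idea of the baking step, here is a hedged sketch that rasterizes one coarse triangle in texture space and, for each covered texel, stores the normal of the nearest full-resolution point. Chart packing, the streaming traversal of the point set and the usual [0,1] encoding of normals are omitted, and all names are illustrative.

    # Illustrative sketch: bake the normals of a high-resolution point set
    # into the normal map of one coarse triangle (world-space normals are
    # stored directly, without the usual [0,1] remapping).
    import numpy as np
    from scipy.spatial import cKDTree

    def barycentric(uv_tri, uv):
        a, b, c = uv_tri
        m = np.column_stack([b - a, c - a])
        s, t = np.linalg.solve(m, uv - a)
        return np.array([1.0 - s - t, s, t])

    def bake_triangle(tri_pos, tri_uv, hires_pts, hires_nrm, nmap):
        tree = cKDTree(hires_pts)
        h, w, _ = nmap.shape
        # texel bounding box of the triangle in the normal map
        umin, vmin = np.floor(tri_uv.min(0) * [w, h]).astype(int)
        umax, vmax = np.ceil(tri_uv.max(0) * [w, h]).astype(int)
        for v in range(vmin, vmax):
            for u in range(umin, umax):
                uv = np.array([(u + 0.5) / w, (v + 0.5) / h])
                bary = barycentric(tri_uv, uv)
                if (bary >= 0.0).all():                    # texel falls inside the triangle
                    p = bary @ tri_pos                     # corresponding 3D point on the triangle
                    _, i = tree.query(p)
                    nmap[v, u] = hires_nrm[i]              # normal of the nearest full-res point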

Publications

SIMOD: Making Freeform Deformation Size-Insensitive
Tamy Boubekeur, Olga Sorkine and Christophe Schlick.
IEEE/Eurographics Symposium on Point-Based Graphics 2007

Scalable Freeform Deformation
Tamy Boubekeur, Olga Sorkine and Christophe Schlick.
ACM SIGGRAPH 2007 Sketch Program

Interactive Out-Of-Core Texturing Using Point-Sampled Textures
Tamy Boubekeur and Christophe Schlick.
IEEE/Eurographics Symposium on Point-Based Graphics 2006

Interactive Out-Of-Core Texturing
Tamy Boubekeur and Christophe Schlick.
ACM SIGGRAPH 2006 Sketch Program

Rapid Visualization of Large Point-Based Surfaces
Tamy Boubekeur, Florent Duguet and Christophe Schlick.
Eurographics VAST 2005