3D Reconstruction Pipeline

Description

This work presents a new 3D reconstruction pipeline built for the digital preservation of natural and cultural assets [Vrubel et al. 2009]. This application requires high-quality results, making time and space constraints less important than the achievable precision. In addition to generating high-quality models, our work provides an overview of the entire reconstruction process, from range image acquisition to texture generation. A complete 3D reconstruction pipeline comprises several steps, illustrated in the figure below and described in the following sections.

3D Reconstruction Pipeline


Acquisition

The initial step in the 3D digitization of an object is the study of its characteristics, such as its shape and texture. To generate the 3D model, it is necessary to obtain images of different views (the regions visible from a given perspective) of the object, so that all visible regions are registered by the image acquisition system. This acquisition step uses digital cameras and range scanners that capture 3D information about the object's visible surfaces.

Several views of the object are obtained during the acquisition process, as the following image shows. It is important to keep an overlapping area between neighboring views to help the later alignment process.

Views of an object.

The image below shows an example of the view acquisition process. The system is composed of a laser range scanner on a tripod. The scanner is connected to a computer that runs the scanner control software. The acquired images are stored and used in the 3D reconstruction of the object.

Example of the acquisition process for a single view.


The acquisition system used at IMAGO is basically composed of a laser sensor and a CCD camera. The equipment obtains the three-dimensional position of each point on the object's visible surface through a scanning system, generating range images.


Registration

The objective of this stage is to find a 4 × 4 transformation matrix for each captured view that aligns it into a common object coordinate system. In our pipeline, we use pairwise ICP alignment [Rusinkiewicz and Levoy 2001], followed by a global registration step using the algorithm of Pulli [Pulli 1999]. For each pair of neighboring views with sufficient overlap, we find the transformation matrix that aligns the second view with the first, using a modified version of the ICP algorithm. Currently, we pre-align the views manually; however, automatic pre-alignment techniques could be used to improve this task.

Our contribution in this stage is a new two-phase ICP algorithm. We needed an algorithm with good convergence properties (to reach the correct alignment) and with maximum precision. To achieve this, the first phase uses an ICP variant with the point-to-plane error metric, a closest-compatible approach for pair generation, normal-space sampling with random selection of points on both views, and rejection of the farthest pairs. This promotes excellent convergence, but with limited precision.

When the first phase converges, we move on to the second phase of the ICP algorithm. Now we use all points on both views, adding a maximum pair distance to the pair compatibility test. This distance is related to the scanner error and is usually very small (e.g., around 0.7 mm). We still use the point-to-plane error metric during error minimization. This version of ICP has limited convergence but excellent precision. As the first phase has already reached an almost optimal alignment, the second phase just improves the precision of the result. Because the compatibility threshold is very restrictive, we achieve good outlier elimination, which is essential for a precise alignment.
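
To make the structure concrete, the sketch below outlines the two-phase scheme in Python with NumPy and SciPy. It is a simplification, not our implementation: random sampling stands in for normal-space sampling, a plain nearest-neighbor search stands in for the closest-compatible pairing, and all function and parameter names are hypothetical.

    import numpy as np
    from scipy.spatial import cKDTree

    def point_to_plane_step(src, dst, nrm):
        # One linearized point-to-plane step: minimize sum(((R*s + t - d) . n)^2)
        # under a small-angle approximation of the rotation (in practice the
        # resulting rotation should be re-orthonormalized).
        A = np.hstack([np.cross(src, nrm), nrm])      # (N, 6) Jacobian
        b = np.einsum('ij,ij->i', dst - src, nrm)     # residuals along normals
        rx, ry, rz, tx, ty, tz = np.linalg.lstsq(A, b, rcond=None)[0]
        T = np.eye(4)
        T[:3, :3] = [[1, -rz, ry], [rz, 1, -rx], [-ry, rx, 1]]
        T[:3, 3] = [tx, ty, tz]
        return T

    def icp_two_phase(src, dst, dst_nrm, T, max_pair_dist=0.7,
                      coarse_iters=30, fine_iters=10, sample=2000):
        # src, dst: (N, 3) point arrays; dst_nrm: normals of dst; T: pre-alignment.
        # max_pair_dist is the tight phase-2 threshold (here assumed in mm).
        tree = cKDTree(dst)
        for fine in (False, True):
            for _ in range(fine_iters if fine else coarse_iters):
                pts = src if fine else src[np.random.choice(
                    len(src), min(sample, len(src)), replace=False)]
                moved = pts @ T[:3, :3].T + T[:3, 3]
                d, idx = tree.query(moved)
                # Phase 1 rejects the farthest pairs; phase 2 keeps only pairs
                # within the scanner-error distance, eliminating outliers.
                keep = d < (max_pair_dist if fine else np.percentile(d, 90))
                dT = point_to_plane_step(moved[keep], dst[idx[keep]],
                                         dst_nrm[idx[keep]])
                T = dT @ T
        return T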


Mesh Integration

After the registration, we have several overlapping partial meshes, one for each captured view. The next stage of the reconstruction pipeline must integrate them to build a single triangle mesh for the object. There are several approaches for mesh integration, and we chose the volumetric one.

We developed a new algorithm that combines elements from both VRIP [Curless and Levoy 1996] and Consensus Surfaces [Wheeler et al. 1998]. Our algorithm is based on two phases. In the first one, we use a slightly modified version of VRIP, together with a space-carving method, to generate an initial volumetric representation. Our modification to VRIP is a new weight curve that gives more weight to voxels outside the object than to those inside it.

This attenuates the artifacts of VRIP at corners and thin surfaces, at the cost of a small misplacement of the surfaces in this first phase. Our space carving considers only the object data and, optionally, the support planes detected in the acquisition stage; its main goal is outlier elimination. The volumetric result of this first phase serves as a consensual basis for the second phase of the algorithm.

The second phase builds the definitive volumetric representation, integrating only measurements in consensus with the result obtained in the first phase. The consensus is tested at each candidate voxel, between the normal at the closest surface point of each view and the gradient of the first phase's volumetric result. The space carving performed in the first phase is also used to eliminate outliers, here meaning the incorrect data outside the object. Note that we use line-of-sight signed distances in the first phase (VRIP) for performance, and Euclidean distances in the second phase (Consensus) for precision and for the correctness of the hole filling performed later. With our algorithm, we eliminate the artifacts of VRIP near corners and thin surfaces, and generate good results near occluded regions.
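
The sketch below illustrates the first-phase accumulation only, in the spirit of VRIP: each voxel near the surface receives a weighted running average of line-of-sight signed distances. The depth map, intrinsics matrix K, camera pose, and the linear weight ramp are all assumptions; the consensus test of the second phase is omitted.

    import numpy as np

    def integrate_view(D, W, depth, K, cam_from_world, origin, voxel, trunc=2.0):
        # Fuse one range image into running signed-distance (D) and weight (W)
        # grids (assumed C-contiguous); distances are measured along the
        # scanner line of sight, as in VRIP.
        idx = np.indices(D.shape).reshape(3, -1).T
        pw = origin + voxel * (idx + 0.5)                   # voxel centers, world
        pc = pw @ cam_from_world[:3, :3].T + cam_from_world[:3, 3]
        z = pc[:, 2]
        ok = z > 1e-6                                       # in front of the camera
        proj = pc[ok] @ K.T
        u = np.round(proj[:, 0] / proj[:, 2]).astype(int)
        v = np.round(proj[:, 1] / proj[:, 2]).astype(int)
        h, w = depth.shape
        inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
        flat = np.flatnonzero(ok)[inside]                   # linear voxel indices
        sdf = depth[v[inside], u[inside]] - z[flat]         # + outside, - inside
        # New weight curve: full weight outside the surface, decreasing inside
        # it (a linear ramp is an assumption; our pipeline defines its own).
        wgt = np.where(sdf >= 0.0, 1.0, 1.0 + sdf / trunc)
        keep = (sdf < trunc) & (wgt > 0.0)
        flat, wgt = flat[keep], wgt[keep]
        sdf = np.clip(sdf[keep], -trunc, trunc)
        Df, Wf = D.ravel(), W.ravel()
        Df[flat] = (Wf[flat] * Df[flat] + wgt * sdf) / (Wf[flat] + wgt)
        Wf[flat] += wgt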


Hole Filling

The acquisition process is usually incomplete. Deep recesses and occlusions prevent the capture of some parts of the objects. This requires methods to complete the captured data and allow the generation of a watertight model, which is necessary for several applications such as visualization and the creation of replicas.

In our pipeline we chose the volumetric diffusion algorithm of Davis et al. [Davis et al. 2002], because it handles complex topologies satisfactorily. Moreover, it is a volumetric technique that works well with our mesh integration stage. The idea of the algorithm is to diffuse the values of observed voxels into voxels without data, similar to a 3D blurring operation. Space-carving information, although not necessary, usually helps the algorithm produce a more faithful reconstruction.
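
A minimal sketch of the diffusion idea, assuming SciPy and hypothetical boolean masks marking observed and space-carved voxels:

    import numpy as np
    from scipy.ndimage import uniform_filter

    def diffuse_holes(D0, observed, carved, outside=1.0, n_iters=100):
        # Volumetric diffusion in the spirit of Davis et al. 2002: repeatedly
        # blur the signed-distance grid, re-clamping observed voxels to their
        # measured values. Space-carved voxels are pinned to a positive
        # "outside" value, steering the fill away from known-empty space.
        D = np.where(observed, D0, 0.0)
        for _ in range(n_iters):
            D = uniform_filter(D, size=3)   # 3x3x3 box blur = one diffusion step
            D[observed] = D0[observed]      # keep the data as boundary condition
            D[carved] = outside             # optional space-carving constraint
        return D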


Mesh Generation

We use the well-established Marching Cubes algorithm [Lorensen and Cline 1987] to generate a triangle mesh from the volumetric representation of the previous stages. We use the disambiguation method of Chernyaev, as implemented by [Lewiner et al. 2003], to ensure the generation of manifold topologies.
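
For illustration, the extraction can be reproduced with scikit-image's marching cubes, whose default method follows the same Lewiner et al. variant (a stand-in for our own implementation):

    import numpy as np
    from skimage import measure   # pip install scikit-image

    def extract_mesh(D, voxel_size):
        # Triangulate the zero level set of the signed-distance volume D.
        # The default method ('lewiner') applies the topologically consistent
        # case disambiguation of Lewiner et al. 2003.
        verts, faces, normals, _ = measure.marching_cubes(
            D, level=0.0, spacing=(voxel_size,) * 3)
        return verts, faces, normals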

The only drawback of this approach is the generation of very thin triangles (a.k.a. slivers) in some parts of the generated model. A mesh simplification technique like [Hoppe 1999] can eliminate these triangles, resulting in a more homogeneous mesh, useful for the next stages of the pipeline.


Texture Parametrization

Mesh generation concludes the geometric part of the reconstruction problem. However, we still need to calculate the surface properties (i.e., color and specularity). These properties are usually represented by textures. Therefore, we need to be able to apply textures to the generated model.

We use a simple texture atlas approach. We segment the model into almost planar regions, starting from a seed triangle and growing the region while the normals of its faces stay within a threshold (usually 30 degrees, to prevent the generation of too many small regions) of the region's average normal. Each region is then planified, i.e., flattened onto a plane; this is done by calculating the principal axes of the vertices in question [Wu 1992] (a sketch of this step appears after the figure below). The axis closest to the average normal of the region is used as the normal of the plane, and the other two axes define the u and v directions in texture space. The result is a 2D projection (in mm) of each region. After all regions are planified, a texture atlas is generated, packing all regions into a single parametrization space. As we know the size of each region in millimeters, it is easy to define the image size in pixels necessary to achieve a desired resolution in pixels per millimeter.

 

Example of texture parametrization.

Example of an automatically generated texture for the object.
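
A minimal sketch of the planification of one region, with an eigendecomposition standing in for the linear-time method of [Wu 1992] (function and argument names are hypothetical):

    import numpy as np

    def planify_region(verts, avg_normal):
        # Project a near-planar region onto 2D texture space using its
        # principal axes. verts: (N, 3) region vertices in mm; avg_normal:
        # unit average normal of the region.
        centered = verts - verts.mean(axis=0)
        _, vecs = np.linalg.eigh(np.cov(centered.T))
        axes = vecs.T                                  # rows: principal axes
        # The axis closest to the region's average normal becomes the plane
        # normal; the remaining two define the u and v texture directions.
        k = int(np.argmax(np.abs(axes @ avg_normal)))
        u_axis, v_axis = axes[[i for i in range(3) if i != k]]
        return np.stack([centered @ u_axis, centered @ v_axis], axis=1)  # (N, 2) mm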


Surface Properties Calculation

Our current implementation uses a vertex color approach, generating color and illumination per vertex. To compute the vertex colors, we calculate a weighted average of the colors from all views that observe each vertex. The weight we adopted is based on the angle between the scanner line of sight and the vertex normal, because data observed at a grazing angle are less reliable than data facing the scanner directly. Although simple, our method generates good results.
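
In essence (a sketch with hypothetical names; using the cosine of the viewing angle as the concrete weight is an assumption):

    import numpy as np

    def vertex_color(view_colors, view_dirs, normal):
        # Weighted average of the RGB samples from all views observing one
        # vertex. view_dirs: unit vectors from the vertex toward each scanner
        # position; normal: unit vertex normal. The smaller the angle between
        # line of sight and normal, the higher the weight.
        w = np.clip(view_dirs @ normal, 0.0, None)
        return (w / max(w.sum(), 1e-12)) @ view_colors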


Texture Generation

Texture generation combines the results of the two previous stages of the pipeline: texture parametrization and surface properties calculation. Our objective is to encode the surface properties into one or more images (i.e., textures), which are used when rendering the reconstructed model.

As we explicitly generate the parametrization and the vertex colors, texture generation is straightforward. We render the model using the texture coordinates (u, v, 0.0) as the (x, y, z) coordinates of the vertices, together with the calculated vertex colors. The graphics card interpolates the color across each face using Gouraud shading [Foley et al. 1996]. We use an orthogonal projection matrix and a render target of the size of the desired texture. The same technique can generate other textures, such as a normal map (encoding each vertex normal as an RGB triplet) or a specular map (encoding each vertex's specular color and exponent as an RGBA tuple).
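
The sketch below is a CPU stand-in for this GPU render: it interpolates vertex colors barycentrically across each face in parametrization space, which is exactly what Gouraud shading does in hardware. All names are hypothetical.

    import numpy as np

    def bake_vertex_colors(uv, faces, vcol, tex_size):
        # uv: (N, 2) texture coordinates in [0, 1]; faces: (F, 3) vertex
        # indices; vcol: (N, 3) RGB per vertex. Returns the baked texture.
        tex = np.zeros((tex_size, tex_size, 3))
        p = uv * (tex_size - 1)                   # parametrization in pixels
        for f in faces:
            a, b, c = p[f]
            m = np.array([[b[0]-a[0], c[0]-a[0]], [b[1]-a[1], c[1]-a[1]]])
            if abs(np.linalg.det(m)) < 1e-12:
                continue                          # degenerate (sliver) triangle
            minv = np.linalg.inv(m)
            lo = np.floor(np.min(p[f], axis=0)).astype(int)
            hi = np.ceil(np.max(p[f], axis=0)).astype(int)
            for y in range(max(lo[1], 0), min(hi[1] + 1, tex_size)):
                for x in range(max(lo[0], 0), min(hi[0] + 1, tex_size)):
                    # Barycentric coordinates of the pixel in triangle abc.
                    w1, w2 = minv @ (np.array([x, y]) - a)
                    w0 = 1.0 - w1 - w2
                    if min(w0, w1, w2) >= -1e-9:  # pixel inside the face
                        tex[y, x] = (w0 * vcol[f[0]] + w1 * vcol[f[1]]
                                     + w2 * vcol[f[2]])
        return tex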


Mesh Simplification

After the textures are generated, an optional stage reduces the triangle count of the model to improve rendering performance and storage costs while preserving quality and precision. For digital preservation this step is not essential, since we want precise results. However, the Marching Cubes algorithm used in the pipeline can generate many more triangles than necessary to accurately represent the model, mainly in almost planar regions, so a mesh simplification procedure can improve performance while keeping high accuracy. Moreover, we are able to generate a normal map for the model, which helps preserve the visual accuracy even when low-poly models are used.

There are several approaches to mesh simplification. The technique of Garland and Heckbert [Garland and Heckbert 1997], improved by Hoppe [Hoppe 1999], is fast and generates accurate results when moderately reducing the polygon count, which suits the goal of digital preservation. Using a progressive mesh representation [Hoppe 1996] is also useful, as it allows the generation of different levels of detail for each object. We prefer to perform the mesh simplification after the texture generation, because we are then able to generate maximum-quality normal maps, and the textures can guide the simplification, minimizing texture distortion.
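
A sketch of the core of the quadric error metric; geometry only, whereas [Hoppe 1999] extends the quadric with appearance attributes:

    import numpy as np

    def face_quadric(p0, p1, p2):
        # Fundamental quadric of a triangle's supporting plane
        # [Garland and Heckbert 1997]: Q = p p^T with p = (a, b, c, d),
        # where ax + by + cz + d = 0 and (a, b, c) is the unit normal.
        n = np.cross(p1 - p0, p2 - p0)
        n = n / np.linalg.norm(n)
        plane = np.append(n, -n @ p0)
        return np.outer(plane, plane)

    def collapse_error(Q, v):
        # Squared distance of position v to the planes accumulated in Q.
        # Each vertex sums the quadrics of its incident faces; an edge
        # collapse is scored by the summed quadric Q = Q1 + Q2, and the
        # cheapest edges are collapsed first.
        vh = np.append(v, 1.0)
        return float(vh @ Q @ vh)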


Results

We used our pipeline to reconstruct several objects, ranging from artworks to fossils. The objects were selected to stress-test the pipeline, with complex topologies and optically uncooperative materials. Our results are shown in our 3D Virtual Museum.


References

• A. Vrubel, O. R. P. Bellon, and L. Silva. A 3D reconstruction pipeline for digital preservation. To appear in Proc. IEEE Conference on Computer Vision and Pattern Recognition, 2009.

• S. Rusinkiewicz and M. Levoy. Efficient variants of the ICP algorithm. In Proc. 3DIM, pages 145–152, 2001.

• K. Pulli. Multiview registration for large data sets. In Proc. 3DIM, pages 160–168, 1999.

• B. Curless and M. Levoy. A volumetric method for building complex models from range images. In Proc. SIGGRAPH, pages 303–312, 1996.

• M. D. Wheeler, Y. Sato, and K. Ikeuchi. Consensus surfaces for modeling 3D objects from multiple range images. In Proc. ICCV, pages 917–924, 1998.

• J. Davis, S. R. Marschner, M. Garr, and M. Levoy. Filling holes in complex surfaces using volumetric diffusion. In Proc. 3DPVT, pages 428–438, 2002.

• W. E. Lorensen and H. E. Cline. Marching cubes: A high resolution 3D surface construction algorithm. In Proc. SIGGRAPH, pages 163–169, 1987.

• T. Lewiner, H. Lopes, A. W. Vieira, and G. Tavares. Efficient implementation of marching cubes' cases with topological guarantees. Journal of Graphics Tools, 8(2):1–15, 2003.

• H. Hoppe. Progressive meshes. In Proc. SIGGRAPH, pages 99–108, 1996.

• H. Hoppe. New quadric metric for simplifying meshes with appearance attributes. In Proc. IEEE Visualization, pages 59–66, 1999.

• X. Wu. A linear-time simple bounding volume algorithm. Graphics Gems III, pages 301–306, 1992.

• J. D. Foley, A. van Dam, S. K. Feiner, and J. F. Hughes. Computer Graphics (2nd ed. in C): Principles and Practice. Addison-Wesley, 1996.

• M. Garland and P. Heckbert. Simplification using quadric error metrics. In Proc. SIGGRAPH, volume 31, pages 209–216, 1997.