Reconstructing Animated Meshes from Time-Varying Point Clouds

dc.contributor.author: Suessmuth, Jochen
dc.contributor.author: Winter, Marco
dc.contributor.author: Greiner, Guenther
dc.date.accessioned: 2015-02-21T17:32:32Z
dc.date.available: 2015-02-21T17:32:32Z
dc.date.issued: 2008
dc.description.abstract: In this paper, we describe a novel approach for the reconstruction of animated meshes from a series of time-deforming point clouds. Given a set of unordered point clouds that have been captured by a fast 3-D scanner, our algorithm is able to compute coherent meshes which approximate the input data at arbitrary time instances. Our method is based on the computation of an implicit function in R^4 that approximates the time-space surface of the time-varying point cloud. We then use the four-dimensional implicit function to reconstruct a polygonal model for the first time-step. By sliding this template mesh along the time-space surface in an as-rigid-as-possible manner, we obtain reconstructions for further time-steps which have the same connectivity as the previously extracted mesh while recovering rigid motion exactly. The resulting animated meshes allow accurate motion tracking of arbitrary points and are well suited for animation compression. We demonstrate the qualities of the proposed method by applying it to several data sets acquired by real-time 3-D scanners.
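The abstract's core step, approximating the time-space surface of the point-cloud sequence by an implicit function in R^4, can be illustrated with a standard on/off-surface RBF fit. This is a minimal sketch under assumed choices (triharmonic `r^3` kernel, normal-offset constraints), not the paper's actual formulation; all function names are illustrative.

```python
import numpy as np

def fit_rbf_implicit(points, normals, eps=0.05):
    """Fit an implicit function f: R^4 -> R whose zero set approximates the
    time-space surface sampled by `points` (N x 4 array of (x, y, z, t)).
    Classic on/off-surface RBF trick: constrain f = 0 at the surface samples
    and f = eps at samples offset along the (space-time) normals."""
    centers = np.vstack([points, points + eps * normals])
    values = np.concatenate([np.zeros(len(points)),
                             np.full(len(points), eps)])
    # Pairwise distances between all constraint centers.
    d = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
    # Triharmonic kernel phi(r) = r^3; tiny ridge term for numerical safety.
    weights = np.linalg.solve(d**3 + 1e-9 * np.eye(len(centers)), values)
    return centers, weights

def eval_implicit(queries, centers, weights):
    """Evaluate the fitted implicit function at query points (M x 4)."""
    d = np.linalg.norm(queries[:, None, :] - centers[None, :, :], axis=-1)
    return (d**3) @ weights
```

A polygonal model for one time-step could then be extracted from the zero set of this function at a fixed `t`, e.g. with a marching-cubes pass over a spatial grid.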
dc.description.number: 5
dc.description.seriesinformation: Computer Graphics Forum
dc.description.volume: 27
dc.identifier.doi: 10.1111/j.1467-8659.2008.01287.x
dc.identifier.issn: 1467-8659
dc.identifier.pages: 1469-1476
dc.identifier.uri: https://doi.org/10.1111/j.1467-8659.2008.01287.x
dc.publisher: The Eurographics Association and Blackwell Publishing Ltd
dc.title: Reconstructing Animated Meshes from Time-Varying Point Clouds