Browsing by Author "Park, Jaesik"
Global Texture Mapping for Dynamic Objects
(The Eurographics Association and John Wiley & Sons Ltd., 2019)
Kim, Jungeon; Kim, Hyomin; Park, Jaesik; Lee, Seungyong
Editors: Lee, Jehee; Theobalt, Christian; Wetzstein, Gordon

We propose a novel framework to generate a global texture atlas for a deforming geometry. Our approach differs from prior work in two aspects. First, instead of generating a texture map for each timestamp to color a dynamic scene, our framework reconstructs a global texture atlas that can be consistently mapped to a deforming object. Second, our approach is based on a single RGB-D camera, without the need for a multi-camera setup surrounding the scene. In our framework, the input is a 3D template model with an RGB-D image sequence, and geometric warping fields are found using a state-of-the-art non-rigid registration method [GXW*15] to align the template mesh to noisy and incomplete input depth images. With these warping fields, our multi-scale approach for texture coordinate optimization generates a sharp and clear texture atlas that is consistent with multiple color observations over time. Our approach is accelerated by graphics hardware and provides a convenient setup for capturing a dynamic geometry along with a clean texture atlas. We demonstrate our approach with practical scenarios, particularly human performance capture. We also show that our approach is resilient to misalignment caused by imperfect estimation of warping fields and inaccurate camera parameters.

Spatiotemporal Texture Reconstruction for Dynamic Objects Using a Single RGB-D Camera
(The Eurographics Association and John Wiley & Sons Ltd., 2021)
Kim, Hyomin; Kim, Jungeon; Nam, Hyeonseo; Park, Jaesik; Lee, Seungyong
Editors: Mitra, Niloy; Viola, Ivan

This paper presents an effective method for generating a spatiotemporal (time-varying) texture map for a dynamic object using a single RGB-D camera. The input of our framework is a 3D template model and an RGB-D image sequence. Since parts of the object are invisible in any given frame of a single-camera setup, textures for such areas must be borrowed from other frames. We formulate the problem as an MRF optimization and define cost functions to reconstruct a plausible spatiotemporal texture for a dynamic object. Experimental results demonstrate that our spatiotemporal textures can reproduce the dynamic appearance of captured objects better than approaches using a single texture map.
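The second abstract casts texture borrowing as an MRF labeling problem: each mesh face picks a source frame, trading off a per-face data cost (how well that frame observes the face) against a smoothness cost (neighboring faces should prefer the same frame to avoid seams). The abstract does not give the actual cost functions or solver, so the sketch below is purely illustrative: the face graph, the cost values, the smoothness weight, and the use of iterated conditional modes (ICM) are all assumptions, not the paper's method.

```python
# Illustrative sketch (NOT the paper's implementation): texture-source
# selection as an MRF over mesh faces, minimized with iterated
# conditional modes (ICM). All graphs, costs, and weights are made up.

faces = [0, 1, 2, 3]                              # hypothetical mesh faces
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}  # face adjacency
frames = [0, 1, 2]                                # candidate RGB-D frames

# data[f][t]: cost of texturing face f from frame t (in practice this
# would encode visibility, viewing angle, blur, etc.); arbitrary here.
data = {0: [0.1, 0.9, 0.8],
        1: [0.2, 0.3, 0.9],
        2: [0.9, 0.2, 0.3],
        3: [0.8, 0.7, 0.1]}
LAMBDA = 0.25  # smoothness weight (assumed) penalizing seams

def energy(labels):
    """Total MRF energy: data terms plus Potts smoothness on edges."""
    e = sum(data[f][labels[f]] for f in faces)
    # each undirected edge is visited twice, hence the / 2
    e += LAMBDA * sum(labels[f] != labels[g]
                      for f in faces for g in neighbors[f]) / 2
    return e

def icm(labels, sweeps=10):
    """Greedy coordinate descent: re-label one face at a time."""
    for _ in range(sweeps):
        changed = False
        for f in faces:
            best = min(frames, key=lambda t: data[f][t] + LAMBDA *
                       sum(t != labels[g] for g in neighbors[f]))
            if best != labels[f]:
                labels[f], changed = best, True
        if not changed:
            break
    return labels

labels = icm([0] * len(faces))
print(labels, round(energy(labels), 2))  # → [0, 0, 2, 2] 0.95
```

ICM is the simplest MRF solver and only finds a local minimum; production systems for this kind of labeling typically use graph-cut alpha-expansion or belief propagation instead.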