EGWR02: 13th Eurographics Workshop on Rendering
Browsing EGWR02: 13th Eurographics Workshop on Rendering by Issue Date
Now showing 1 - 20 of 29
Item Approximate Soft Shadows on Arbitrary Surfaces using Penumbra Wedges (The Eurographics Association, 2002) Akenine-Möller, Tomas; Assarsson, Ulf; P. Debevec and S. Gibson
Shadow generation has been subject to serious investigation in computer graphics, and many clever algorithms have been suggested. However, previous algorithms cannot render high-quality soft shadows onto arbitrary, animated objects in real time. Pursuing this goal, we present a new soft shadow algorithm that extends the standard shadow volume algorithm by replacing each shadow quadrilateral with a new primitive, called the penumbra wedge. For each silhouette edge as seen from the light source, a penumbra wedge is created that approximately models the penumbra volume that this edge gives rise to. Together, the penumbra wedges can render images that are often remarkably close to more precisely rendered soft shadows. Furthermore, our new primitive is designed so that it can be rasterized efficiently. Many real-time algorithms can only use planes as shadow receivers, while ours can handle arbitrary shadow receivers. The proposed algorithm can be of great value to, e.g., 3D computer games, especially since it is highly likely that this algorithm can be implemented on programmable graphics hardware coming out within the next year, and because games often prefer perceptually convincing shadows.

Item Spatio-Temporal View Interpolation (The Eurographics Association, 2002) Vedula, Sundar; Baker, Simon; Kanade, Takeo; P. Debevec and S. Gibson
We propose a fully automatic algorithm for view interpolation of a completely non-rigid dynamic event across both space and time. The algorithm operates by combining images captured across space to compute voxel models of the scene shape at each time instant, and images captured across time to compute the "scene flow" between the voxel models. The scene flow is the non-rigid 3D motion of every point in the scene.
To interpolate in time, the voxel models are "flowed" using an appropriate multiple of the scene flow, and a smooth surface is fit to the result. The novel image is then computed by ray-casting to the surface at the intermediate time instant, following the scene flow to the neighboring time instants, projecting into the input images at those times, and finally blending the results. We use our algorithm to create re-timed slow-motion fly-by movies of dynamic real-world events.

Item A Tone Mapping Algorithm for High Contrast Images (The Eurographics Association, 2002) Ashikhmin, Michael; P. Debevec and S. Gibson
A new method is presented that takes as input a high dynamic range image and maps it into a limited range of luminance values reproducible by a display device. There is significant evidence that a similar operation is performed by the early stages of the human visual system (HVS). Our approach follows the functionality of the HVS without attempting to construct a sophisticated model of it. The operation is performed in three steps. First, we estimate the local adaptation luminance at each point in the image. Then, a simple function is applied to these values to compress them into the required display range. Since important image details can be lost during this process, we then re-introduce details in a final pass over the image.

Item Microfacet Billboarding (The Eurographics Association, 2002) Yamazaki, Shuntaro; Sagawa, Ryusuke; Kawasaki, Hiroshi; Ikeuchi, Katsushi; Sakauchi, Masao; P. Debevec and S. Gibson
Rendering of intricately shaped objects that are soft or cluttered is difficult because we cannot accurately acquire their complete geometry. Since their geometry varies drastically, modeling them using fixed facets can lead to severe artifacts when viewed from singular directions. In this paper, we propose a novel modeling method, "microfacet billboarding", which uses view-dependent "microfacets" with view-dependent textures.
The facets discretely approximate the geometry of the object and are aligned perpendicular to the viewing direction. The texture of each facet is selected from the most suitable texture images according to the viewpoint. Microfacet billboarding can render intricate geometry from various viewpoints. We first describe the basic algorithm of microfacet billboarding. We then predict the artifacts generated by the use of discrete facets, and analyze the sampling interval of geometry and texture necessary to make these artifacts negligible. In addition to the modeling method, we have implemented a real-time renderer using a hardware-accelerated technique. To evaluate the efficiency of our method, we compared it with traditional texture mapping onto a mesh model, and showed that our method has a great advantage over the latter in rendering intricately shaped objects.

Item Signal-Specialized Parametrization (The Eurographics Association, 2002) Sander, Pedro V.; Gortler, Steven J.; Snyder, John; Hoppe, Hugues; P. Debevec and S. Gibson
To reduce memory requirements for texture mapping a model, we build a surface parametrization specialized to its signal (such as color or normal). Intuitively, we want to allocate more texture samples in regions with greater signal detail. Our approach is to minimize signal approximation error - the difference between the original surface signal and its reconstruction from the sampled texture. Specifically, our signal-stretch parametrization metric is derived from a Taylor expansion of the signal error. For fast evaluation, this metric is pre-integrated over the surface as a metric tensor. We minimize this nonlinear metric using a novel coarse-to-fine hierarchical solver, further accelerated with a fine-to-coarse propagation of the integrated metric tensor. Use of metric tensors permits anisotropic squashing of the parametrization along directions of low signal gradient.
Texture area can often be reduced by a factor of 4 for a desired signal accuracy, compared to non-specialized parametrizations.

Item Time Dependent Photon Mapping (The Eurographics Association, 2002) Cammarano, Mike; Jensen, Henrik Wann; P. Debevec and S. Gibson
The photon map technique for global illumination does not specifically address animated scenes. In particular, prior work has not considered the problem of temporal sampling (motion blur) while using the photon map. In this paper we examine several approaches for simulating motion blur with the photon map. In particular, we show that a distribution of photons in time combined with the standard photon map radiance estimate is incorrect, and we introduce a simple generalization that correctly handles photons distributed in both time and space. Our results demonstrate that this time-dependent photon map extension allows fast and correct estimates of motion-blurred illumination, including motion-blurred caustics.

Item Towards Real-Time Texture Synthesis with the Jump Map (The Eurographics Association, 2002) Zelinka, Steve; Garland, Michael; P. Debevec and S. Gibson
While texture synthesis has been well-studied in recent years, real-time techniques remain elusive. To help facilitate real-time texture synthesis, we divide the task into two phases: a relatively slow analysis phase and a real-time synthesis phase. Any particular texture need only be analyzed once, and then an unlimited amount of texture may be synthesized in real time. Our analysis phase generates a jump map, which stores for each input pixel a set of matching input pixels (jumps). Texture synthesis proceeds in real time as a random walk through the jump map. Each new pixel is synthesized by extending the patch of input texture from which one of its neighbours was copied. Occasionally, a jump is taken through the jump map to begin a new patch.
Despite the method's extreme simplicity, its speed and output quality compare favourably with recent patch-based algorithms.

Item Real-Time Halftoning: A Primitive For Non-Photorealistic Shading (The Eurographics Association, 2002) Freudenberg, Bert; Masuch, Maic; Strothotte, Thomas; P. Debevec and S. Gibson
We introduce halftoning as a general primitive for real-time non-photorealistic shading. It is capable of producing a variety of rendering styles, ranging from engraving with lighting-dependent line width to pen-and-ink style drawings using prioritized stroke textures. Since monitor resolution is limited, we employ a smooth threshold function that provides stroke antialiasing. By applying the halftone screen in texture space and evaluating the threshold function for each pixel, we can influence the shading on a pixel-by-pixel basis. This enables many effects, including indication mapping and individual stroke lighting. Our real-time halftoning method is a drop-in replacement for conventional multitexturing and runs on commodity hardware. Thus, it is easy to integrate into existing applications, as we demonstrate with an artistically rendered level in a game engine.

Item Fast Primitive Distribution for Illustration (The Eurographics Association, 2002) Secord, Adrian; Heidrich, Wolfgang; Streit, Lisa; P. Debevec and S. Gibson
In this paper we present a high-quality, image-space approach to illustration that preserves continuous tone by probabilistically distributing primitives while maintaining interactive rates. Our method allows for frame-to-frame coherence by matching movements of primitives with changes in the input image. It can be used to create a variety of drawing styles by varying the primitive type or direction. We show that our approach is able to preserve both tone and (depending on the drawing style) high-frequency detail.
Finally, while our algorithm requires only an image as input, additional 3D information enables the creation of a larger variety of drawing styles.

Item GigaWalk: Interactive Walkthrough of Complex Environments (The Eurographics Association, 2002) Baxter III, William V.; Sud, Avneesh; Govindaraju, Naga K.; Manocha, Dinesh; P. Debevec and S. Gibson
We present a new parallel algorithm and a system, GigaWalk, for interactive walkthrough of complex, gigabyte-sized environments. Our approach combines occlusion culling and levels of detail, and uses two graphics pipelines with one or more processors. GigaWalk uses a unified scene graph representation for multiple acceleration techniques, and performs spatial clustering of geometry, conservative occlusion culling, and load balancing between graphics pipelines and processors. GigaWalk has been used to render CAD environments composed of tens of millions of polygons at interactive rates on systems consisting of two graphics pipelines. Overall, our system's combination of level-of-detail and occlusion culling techniques results in significant improvements in frame rate over view-frustum culling or either technique alone.

Item Hardware-Accelerated Point-Based Rendering of Complex Scenes (The Eurographics Association, 2002) Coconu, Liviu; Hege, Hans-Christian; P. Debevec and S. Gibson
High-quality point rendering methods have been developed in recent years. A common drawback of these approaches is the lack of hardware support. We propose a novel point rendering technique that yields good image quality while fully making use of hardware acceleration. Previous research revealed various advantages and drawbacks of point rendering over traditional rendering. Thus, a guideline in our algorithm design has been to allow both primitive types simultaneously and to dynamically choose the one best suited for rendering.
An octree-based spatial representation, containing both triangles and sampled points, is used for level-of-detail and visibility calculations. Points in each block are stored in a generalized layered depth image. McMillan's algorithm is extended and hierarchically applied in the octree to warp overlapping Gaussian fuzzy splats in occlusion-compatible order, so that z-buffer tests are avoided. We show how to use off-the-shelf hardware to draw elliptical Gaussian splats oriented according to normals and to perform texture filtering. The result is a hybrid polygon-point system with increased efficiency compared to previous approaches.

Item Efficient High Quality Rendering of Point Sampled Geometry (The Eurographics Association, 2002) Botsch, Mario; Wiratanaya, Andreas; Kobbelt, Leif; P. Debevec and S. Gibson
We propose a highly efficient hierarchical representation for point sampled geometry that automatically balances sampling density and point coordinate quantization. The representation is very compact, with a memory consumption of far less than 2 bits per point position that does not depend on the quantization precision. We present an efficient rendering algorithm that exploits the hierarchical structure of the representation to perform fast 3D transformations and shading. The algorithm is extended to surface splatting, which yields high-quality, anti-aliased, and watertight surface renderings. Our pure software implementation renders up to 14 million Phong-shaded and textured samples per second and about 4 million anti-aliased surface splats per second on a commodity PC. This is more than a factor of 10 faster than previous algorithms.

Item Appearance based object modeling using texture database: Acquisition, compression and rendering (The Eurographics Association, 2002) Furukawa, R.; Kawasaki, H.; Ikeuchi, K.; Sakauchi, M.; P. Debevec and S. Gibson
Image-based object modeling can be used to compose photorealistic images of modeled objects under various rendering conditions, such as viewpoint, light directions, etc. However, it is challenging to acquire the large number of object images required for all combinations of capturing parameters, and to then handle the resulting huge data sets for the model. This paper presents a novel modeling method for acquiring and preserving the appearance of objects. Using a specialized capturing platform, we first acquire the objects' geometric information and their complete 4D indexed texture sets, or bi-directional texture functions (BTFs), in a highly automated manner. Then we compress the acquired texture database using tensor product expansion. The compressed texture database facilitates rendering objects with arbitrary viewpoints, illumination, and deformation.

Item Acquisition and Rendering of Transparent and Refractive Objects (The Eurographics Association, 2002) Matusik, Wojciech; Pfister, Hanspeter; Ziegler, Remo; Ngan, Addy; McMillan, Leonard; P. Debevec and S. Gibson
This paper introduces a new image-based approach to capturing and modeling highly specular, transparent, or translucent objects. We have built a system for automatically acquiring high quality graphical models of objects that are extremely difficult to scan with traditional 3D scanners. The system consists of turntables, a set of cameras and lights, and monitors to project colored backdrops. We use multi-background matting techniques to acquire alpha and environment mattes of the object from multiple viewpoints. Using the alpha mattes, we reconstruct an approximate 3D shape of the object. We use the environment mattes to compute a high-resolution surface reflectance field. We also acquire a low-resolution surface reflectance field using the overhead array of lights. Both surface reflectance fields are used to relight the objects and to place them into arbitrary environments.
Our system is the first to acquire and render transparent and translucent 3D objects, such as a glass of beer, from arbitrary viewpoints under novel illumination.

Item Synthesizing Bark (The Eurographics Association, 2002) Lefebvre, Sylvain; Neyret, Fabrice; P. Debevec and S. Gibson
Despite the high quality reached by today's CG tree generators, there exists no realistic model for generating the appearance of bark: simple texture maps are generally used, showing obvious flaws if the tree is not entirely painted by an artist. Beyond modeling the appearance of bark, the difficulties lie in adapting the bark features to the age of each branch, ensuring continuity between adjacent parts of the tree, and possibly ensuring continuity through time. We propose a model of bark generation which produces either geometry or texture, and is dedicated to the widespread family of fracture-based barks. Given that tree growth occurs mostly along the circumference, we consider circular strips of bark on which fractures can appear, propagate these fractures to the other strips, and enlarge them with time. Our semi-empirical model runs in interactive time, and allows automatic or user-influenced bark generation with parameters that are intuitive for the artist. Moreover, we can simulate many different instances of the same bark family. In the paper, our generated bark is compared (favourably) to real bark.

Item Video Flashlights - Real Time Rendering of Multiple Videos for Immersive Model Visualization (The Eurographics Association, 2002) Sawhney, H. S.; Arpa, A.; Kumar, R.; Samarasekera, S.; Aggarwal, M.; Hsu, S.; Nister, D.; Hanna, K.; P. Debevec and S. Gibson
Videos and 3D models have traditionally existed in separate worlds and as distinct representations. Although texture maps for 3D models have traditionally been derived from multiple still images, real-time mapping of live videos as textures on 3D models has not been attempted.
This paper presents a system for rendering multiple live videos in real time over a 3D model, as a novel and demonstrative application of the power of commodity graphics hardware. The system, metaphorically called the Video Flashlight system, "illuminates" a static 3D model with live video textures from static and moving cameras in the same way as a flashlight (torch) illuminates an environment. The Video Flashlight system is also an augmented reality solution for security and monitoring systems that deploy numerous cameras to monitor a large-scale campus or an urban site. Current video monitoring systems are highly limited in providing global awareness, since they typically display numerous camera videos on a grid of 2D displays. In contrast, the Video Flashlight system exploits the real-time rendering capabilities of current graphics hardware and renders live videos from various parts of an environment co-registered with the model. The user gets a global view of the model and is also able to visualize the dynamic videos simultaneously in the context of the model. In particular, the locations of pixels and objects seen in the videos are precisely overlaid on the model while the user navigates through the model. The paper presents an overview of the system, details the real-time rendering, and demonstrates the efficacy of the augmented reality application.

Item Picture Perfect RGB Rendering Using Spectral Prefiltering and Sharp Color Primaries (The Eurographics Association, 2002) Ward, Greg; Eydelberg-Vileshin, Elena; P. Debevec and S. Gibson
Accurate color rendering requires the consideration of many samples over the visible spectrum, and advanced rendering tools developed by the research community offer multispectral sampling towards this goal. However, for practical reasons including efficiency, white balance, and data demands, most commercial rendering packages still employ a naive RGB model in their lighting calculations.
This results in colors that are often qualitatively different from the correct ones. In this paper, we demonstrate two independent and complementary techniques for improving RGB rendering accuracy without impacting calculation time: spectral prefiltering and color space selection. Spectral prefiltering is an obvious but overlooked method of preparing input colors for a conventional RGB rendering calculation; it achieves exact results for the direct component, and very accurate results for the interreflected component, when compared with full-spectral rendering. In an empirical error analysis of our method, we show how the choice of rendering color space also affects final image accuracy, independent of prefiltering. Specifically, we demonstrate the merits of a particular transform that has emerged from the color research community as the best performer in computing white point adaptation under changing illuminants: the Sharp RGB space.

Item Exact From-Region Visibility Culling (The Eurographics Association, 2002) Nirenstein, S.; Blake, E.; Gain, J.; P. Debevec and S. Gibson
To pre-process a scene for the purpose of visibility culling during walkthroughs, it is necessary to solve visibility from all the elements of a finite partition of viewpoint space. Many conservative and approximate solutions have been developed that solve for visibility rapidly. The idealised exact solution for general 3D scenes has often been regarded as computationally intractable. Our exact algorithm for finding the visible polygons in a scene from a region is a computationally tractable pre-process that can handle scenes of the order of millions of polygons.
The essence of our idea is to represent 3-D polygons, and the stabbing lines connecting them, in a 5-D Euclidean space derived from Plücker space, and then to perform geometric subtractions of occluded lines from the set of potential stabbing lines. We have built a query architecture around this algorithm that allows its practical application to large scenes. We have tested the algorithm on two different types of scene: despite a large constant computational overhead, it is highly scalable, with a time dependency close to linear in the output produced.

Item Fast, Arbitrary BRDF Shading for Low-Frequency Lighting Using Spherical Harmonics (The Eurographics Association, 2002) Kautz, Jan; Sloan, Peter-Pike; Snyder, John; P. Debevec and S. Gibson
Real-time shading using general (e.g., anisotropic) BRDFs has so far been limited to a few point or directional light sources. We extend such shading to smooth, area lighting using a low-order spherical harmonic basis for the lighting environment. We represent the 4D product function of the BRDF times the cosine factor (the dot product of the incident lighting and surface normal vectors) as a 2D table of spherical harmonic coefficients. Each table entry represents, for a single view direction, the integral of this product function times the lighting on the hemisphere, expressed in spherical harmonics. This reduces the shading integral to a simple dot product of 25-component vectors, easily evaluated on PC graphics hardware. Non-trivial BRDF models require rotating the lighting coefficients to a local frame at each point on an object, which currently forms the computational bottleneck. Real-time results can be achieved by fixing the view to allow dynamic lighting, or vice versa. We also generalize a previous method for precomputed radiance transfer to handle general BRDF shading.
This provides shadows and interreflections that respond in real time to lighting changes on a preprocessed object of arbitrary material (BRDF) type.

Item Image-based Environment Matting (The Eurographics Association, 2002) Wexler, Yonatan; Fitzgibbon, Andrew W.; Zisserman, Andrew; P. Debevec and S. Gibson
Environment matting is a powerful technique for modeling the complex light-transport properties of real-world optically active elements: transparent, refractive, and reflective objects. Recent research has shown how environment mattes can be computed for real objects under carefully controlled laboratory conditions. However, many objects for which environment mattes are necessary for accurate rendering cannot be placed into a calibrated lighting environment. We show in this paper that analysis of the way in which optical elements distort the appearance of their backgrounds allows the construction of environment mattes in situ, without the need for specialized calibration. Specifically, given multiple images of the same element over the same background, where the element and background have relative motion, it is shown that both the background and the optical element's light-transport paths can be computed. We demonstrate the technique on two different examples. In the first case, the optical element's geometry is simple, and evaluation of the realism of the output is easy. In the second, previous techniques would be difficult to apply. We show that image-based environment matting yields a realistic solution. We discuss how the stability of the solution depends on the number of images used, and how to regularize the solution when only a small number of images are available.
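As a rough illustration of what an environment matte buys at render time (a minimal sketch under simplifying assumptions; the hypothetical per-pixel data layout below is much cruder than the representations used in the matting papers above), each pixel of the optical element can store a foreground colour, a coverage alpha, and the background region its light-transport path samples. Re-rendering the element over a new background then needs only the matte, not the capture setup:

```python
# Hypothetical sketch of compositing with a per-pixel environment matte.
# Assumed layout: each foreground pixel stores a foreground colour F,
# a coverage alpha, and an axis-aligned background box its light-transport
# path samples. Grayscale values for brevity.

def box_average(background, region):
    """Mean value of background[y][x] over the half-open box region."""
    x0, y0, x1, y1 = region
    values = [background[y][x] for y in range(y0, y1) for x in range(x0, x1)]
    return sum(values) / len(values)

def composite_pixel(fg, alpha, region, background):
    """C = F + (1 - alpha) * (average of the refracted background region)."""
    return fg + (1.0 - alpha) * box_average(background, region)

# Placing the element over a novel background:
new_bg = [[0.0, 1.0],
          [2.0, 3.0]]
pixel = composite_pixel(0.5, 0.5, (0, 0, 2, 2), new_bg)  # 0.5 + 0.5 * 1.5
```

Real environment mattes replace the single box with richer, possibly multi-modal mappings from each pixel to the background, which is what the in-situ estimation problem in the last paper recovers from images alone.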