EG UK Theory and Practice of Computer Graphics 2009
Browsing EG UK Theory and Practice of Computer Graphics 2009 by Title
Now showing 1 - 20 of 33
Item: Accelerating Raycasting Utilizing Volume Segmentation of Industrial CT Data
(The Eurographics Association, 2009) Frey, Steffen; Ertl, Thomas; Wen Tang and John Collomosse
We propose a flexible acceleration technique for raycasting targeted at industrial CT data and the context of material deficiency checking. Utilizing volume segmentation that is typically employed for object analysis, GPU raycasting can be accelerated significantly using a novel data structure that is integrated into the volume to improve the responsiveness for the interactive, visual inspection of high-resolution, high-precision data. Our acceleration approach is designed to cause no extra texture lookups and to produce only marginal computational and storage overhead. Despite the fact that the data structure is integrated into the volume, the graphics card's hardware can still be used for trilinear interpolation of density values without producing incorrect results. The presented method can furthermore easily be used in combination with out-of-core approaches and distributed volume rendering schemes.

Item: An Adaptive Sampling Approach to Incompressible Particle-Based Fluid
(The Eurographics Association, 2009) Hong, Woosuck; House, Donald H.; Keyser, John; Wen Tang and John Collomosse
We describe an adaptive particle-based technique for simulating incompressible fluid that uses an octree structure to compute inter-particle interactions and to compute the pressure field. Our method extends the hybrid FLIP technique by supporting adaptive splitting and merging of fluid particles, and adaptive spatial sampling for the reconstruction of the velocity and pressure fields. Particle splitting allows a detailed sampling of fluid momentum in regions of complex flow. Particle merging, in regions of smooth flow, reduces memory and computational overhead. The octree supporting field-based calculations is adapted to provide a fine spatial reconstruction where particles are small and a coarse reconstruction where particles are large. This scheme places computational resources where they are most needed, to handle both flow and surface complexity. Thus, incompressibility can be enforced even in very small, but highly turbulent areas. Simultaneously, the level of detail is very high in these areas, allowing the direct support of tiny splashes and small-scale surface tension effects. This produces a finely detailed and realistic representation of surface motion.
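Editorial note: the abstract above does not specify how splitting and merging are triggered, so the Python sketch below is only a plausible illustration of the general idea. The thresholds, the use of the velocity-gradient magnitude as the flow-complexity measure, and all names are assumptions, not the authors' implementation.

```python
import numpy as np

# Hypothetical thresholds -- purely illustrative, not from the paper.
SPLIT_GRAD, MERGE_GRAD = 5.0, 0.5     # local velocity-gradient magnitude
MIN_RADIUS, MAX_RADIUS = 0.01, 0.08   # allowed particle radii

def should_split(radius, grad_norm):
    """Split where the flow is complex and the particle is still large enough."""
    return grad_norm > SPLIT_GRAD and radius > MIN_RADIUS

def should_merge(radius, grad_norm):
    """Merge where the flow is smooth and the particle has not grown too large."""
    return grad_norm < MERGE_GRAD and radius < MAX_RADIUS

def split(pos, vel, radius):
    """Replace one particle (numpy position/velocity) by two of half the volume
    (r' = r / 2^(1/3)).  The children keep the parent velocity so momentum is
    conserved, and are offset along a random direction so the pressure solve
    can push them apart on the next step."""
    child_r = radius / 2.0 ** (1.0 / 3.0)
    d = np.random.randn(3)
    d *= 0.5 * radius / np.linalg.norm(d)
    return [(pos + d, vel.copy(), child_r), (pos - d, vel.copy(), child_r)]

def merge(p_a, p_b):
    """Replace two nearby particles by one, summing volume and taking the
    volume-weighted average of position and velocity."""
    (pos_a, vel_a, r_a), (pos_b, vel_b, r_b) = p_a, p_b
    v_a, v_b = r_a ** 3, r_b ** 3          # volumes up to a constant factor
    w = v_a / (v_a + v_b)
    return (w * pos_a + (1 - w) * pos_b,
            w * vel_a + (1 - w) * vel_b,
            (v_a + v_b) ** (1.0 / 3.0))
```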
Item: Aesthetic-Interaction: Exploring the Importance of the Visual Aesthetic in the Creation of Engaging Photorealistic VR Environments
(The Eurographics Association, 2009) Carroll, Fiona; Wen Tang and John Collomosse
Nearly forty years since its conception, the medium of VR is still an enigma. In many ways, it is a medium that still lacks its own uniform language. VR, and particularly photorealistic VR, is a medium so occupied with developing its technological capabilities that its other hidden strengths have been neglected. The research presented in this paper is therefore interested in building a more holistic understanding of the "language" of VR, and aims to look beyond the technological in order to explore the creative and experiential side of VR. The goal of the paper is to cross-fertilise the fields of HCI, photorealistic virtual reality and visual aesthetics. In it, the author focuses on the design of an aesthetic-interaction and, in doing so, implements a comparative study to explore how the strategic patterning of the aesthetic elements (particularly colour) within the photorealistic VR environment can ensure a more engaging VR experience. In conclusion, the author claims that the next-generation design of photorealistic VR experiences should consider a balanced combination of both science and art. The paper highlights that aesthetics can play as important a role as the development of new and more efficient technologies in getting to the heart of the "engaging" photorealistic VR experience.

Item: An Aliasing Theory of Shadow Mapping
(The Eurographics Association, 2009) Zhang, Fan; Zhao, Chong; Sun, Hanqiu; Wen Tang and John Collomosse
Shadow mapping is a popular image-based technique for real-time shadow rendering. Although numerous improvements have been made to help anti-aliasing in shadow mapping, there is a lack of mathematical tools that allow us to quantitatively analyze aliasing errors in its variants. In this paper, we establish an aliasing theory to achieve this goal. A generalized representation of aliasing errors is derived from a pure mathematical point of view. The major highlight of this representation is the ability to quantify the aliasing error at any position for general view-light configurations. In contrast, due to the geometric assumptions used in the computational model, previous work analyzes the aliasing only along the view direction, in the simplest case where the light and view directions are orthogonal. Subsequently, as a direct application of our theory, we present a comparison of aliasing distributions in a few representative variants of perspective shadow maps. We believe that these theoretical results are useful for better understanding shadow mapping, and will thus inspire people to develop novel techniques in this area.

Item: Automatically Generating Virtual Humans using Evolutionary Algorithms
(The Eurographics Association, 2009) Albin-Clark, Adrian; Howard, Toby; Wen Tang and John Collomosse
Virtual Humans are used in many applications either as an embodiment of a real person (an "avatar"), or under the control of a computer program (an "agent" or "non-player character"). The automatic generation of Virtual Humans is a challenging problem if they are to look both plausible and unique within a population. We present an approach which exploits the power of Evolutionary Algorithms (EAs), and provide illustrative examples of how our methods may be realised within the context of surface-based model geometry.
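Editorial note: as a generic illustration of the evolutionary loop such an approach builds on, the sketch below evolves a flat list of shape parameters. The genome layout, the fitness placeholder, the selection scheme and all constants are assumptions for illustration; the paper's own genome encoding and fitness criteria for surface-based geometry are not given in the abstract.

```python
import random

GENOME_LEN, POP_SIZE, GENERATIONS = 16, 40, 100   # illustrative constants
MUTATION_RATE, MUTATION_STD = 0.1, 0.05

def random_genome():
    return [random.uniform(0.0, 1.0) for _ in range(GENOME_LEN)]

def fitness(genome):
    # Placeholder: a real system would build the virtual human from the
    # genome and score plausibility / uniqueness within the population.
    return -sum((g - 0.5) ** 2 for g in genome)

def crossover(a, b):
    cut = random.randrange(1, GENOME_LEN)          # one-point crossover
    return a[:cut] + b[cut:]

def mutate(genome):
    return [g + random.gauss(0.0, MUTATION_STD)
            if random.random() < MUTATION_RATE else g
            for g in genome]

def evolve():
    population = [random_genome() for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        population.sort(key=fitness, reverse=True)
        parents = population[:POP_SIZE // 2]       # truncation selection
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(POP_SIZE - len(parents))]
        population = parents + children
    return max(population, key=fitness)            # best genome found
```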
Item: Calibrating a COTS Monitor to DICOM Standard
(The Eurographics Association, 2009) Grimstead, Ian J.; Avis, Nick J.; Wen Tang and John Collomosse
We present a method for calibrating a commodity, off-the-shelf (COTS) monitor (costing in the region of £200) to produce a greyscale image approximately calibrated to the DICOM standard, rather than requiring a 10-bit radiology monitor (costing in the region of £10,000). We use the concept of PseudoGrey to extend the available shades of grey from 256 to 5,800, which is in excess of a 12-bit greyscale. The chromaticity of the resulting greyscale is analysed to verify that the colour introduced does not unduly detract from a pure greyscale image. The behaviour of low-intensity levels in the COTS monitor is also analysed, showing that a naive approach to estimating luminance from individual passes through the red, green and blue components is insufficient to produce an accurate intensity range. The results show that we can achieve a basic DICOM calibration (with FIT and LUM tests), but we have yet to test for further variability (such as off-axis deterioration in brightness or inconsistent luminance across a display). As well as displaying medical images, this approach may be of use in other areas requiring a high dynamic range, such as thermal imagery or images taken through multiple alternative exposures.
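Editorial note: the sketch below illustrates the general PseudoGrey idea referred to above, in which near-neutral RGB triplets provide luminance steps finer than the 256 pure grey levels of an 8-bit display. The Rec.709 weights and the brute-force search are assumptions for illustration; the paper derives its own mapping (yielding 5,800 shades) and validates it against measured luminance.

```python
# Nudging red, green and blue independently by one count around a neutral
# grey produces luminance steps much finer than 1/255.
W_R, W_G, W_B = 0.2126, 0.7152, 0.0722   # Rec.709 luma weights (assumed)

def pseudogrey(target):
    """Return the near-neutral (r, g, b) triplet, each channel within one
    count of the base grey level, whose weighted luminance is closest to
    'target' (a float in [0, 255])."""
    base = int(min(max(target, 0.0), 254.0))
    best, best_err = None, float("inf")
    for dr in (0, 1):
        for dg in (0, 1):
            for db in (0, 1):
                r, g, b = base + dr, base + dg, base + db
                lum = W_R * r + W_G * g + W_B * b
                err = abs(lum - target)
                if err < best_err:
                    best, best_err = (r, g, b), err
    return best
```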
Item: Coastal Shelf Visualization using VTK and OpenDX of Hydro-Informatic Numerical Models
(The Eurographics Association, 2009) George, Richard L. S. F.; Roberts, Jonathan C.; Wen Tang and John Collomosse
Scientific visualization has an important role in climate change analysis, especially by creating dynamic interactive environments. The earth sciences, such as oceanography, utilize many numerical models; experts wish to quickly try out different scenarios and explore various possible outcomes. However, many experts still rely upon two-dimensional slices and non-interactive computer graphics to perform their analysis. Subsequently, there is a strong argument to understand current practices, learn from other disciplines and develop appropriate interactive methods to visualize and explore complex hydro-informatics. First, this paper presents a discussion of two prototype visualization tools that were developed to represent coastal shelf tidal flow data, where the data was simulated using TELEMAC-2D numerical model datasets. Prototype 1 was developed using OpenDX and Prototype 2 with VTK. Second, various strengths and weaknesses of each system are discussed, especially in their use for exploratory oceanographic visualization. Finally, practical solutions of how both were implemented are described. Consequently, this paper provides practical and scientific guidelines that other oceanographic developers can utilize for future work.

Item: Design and Evaluation of a Hardware Accelerated Ray Tracing Data Structure
(The Eurographics Association, 2009) Steffen, Michael; Zambreno, Joseph; Wen Tang and John Collomosse
The increase in graphics card performance and processor core count has allowed significant performance acceleration for ray tracing applications. Future graphics architectures are expected to continue increasing the number of processor cores, further improving performance by exploiting data parallelism. However, current ray tracing implementations are based on recursive searches which involve multiple memory reads. Consequently, software implementations are used without any dedicated hardware acceleration. In this paper, we introduce a ray tracing method designed around hierarchical space subdivision schemes that reduces memory operations. In addition, parts of this traversal method can be performed in fixed hardware running in parallel with programmable graphics processors. We used a custom performance simulator that uses our traversal method, based on a kd-tree, to compare against a conventional kd-tree. The system memory requirements and system memory reads are analyzed in detail for both acceleration structures. We simulated six benchmark scenes and show a reduction in the number of memory reads of up to 70 percent compared to current recursive methods for scenes with over 100,000 polygons.
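Editorial note: the sketch below shows a conventional stack-based kd-tree ray traversal of the kind the paper uses as its software baseline; every node visited is a memory read that the proposed fixed-function traversal aims to reduce. The node layout and function names are illustrative assumptions, not the hardware design described in the paper.

```python
class KDNode:
    """Interior nodes store a split plane; leaves store a triangle list."""
    def __init__(self, axis=None, split=None, left=None, right=None, tris=None):
        self.axis, self.split = axis, split
        self.left, self.right = left, right
        self.tris = tris

def traverse(root, origin, direction, t_min, t_max, intersect_tris):
    """Front-to-back traversal.  'intersect_tris(tris, origin, direction, t0, t1)'
    must return the nearest hit within [t0, t1], or None."""
    stack = [(root, t_min, t_max)]
    while stack:
        node, t0, t1 = stack.pop()
        while node.tris is None:                 # descend until a leaf is reached
            a = node.axis
            if origin[a] < node.split or (origin[a] == node.split
                                          and direction[a] <= 0.0):
                near, far = node.left, node.right
            else:
                near, far = node.right, node.left
            t_split = ((node.split - origin[a]) / direction[a]
                       if direction[a] != 0.0 else float("inf"))
            if t_split > t1 or t_split <= 0.0:
                node = near                      # ray only reaches the near child
            elif t_split < t0:
                node = far                       # ray only reaches the far child
            else:
                stack.append((far, t_split, t1)) # visit the far child later
                node, t1 = near, t_split
        hit = intersect_tris(node.tris, origin, direction, t0, t1)
        if hit is not None:
            return hit                           # nearest hit; traversal can stop
    return None
```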
Item: Developing an Application to Provide Interactive Three-dimensional Visualisation of Bone Fractures
(The Eurographics Association, 2009) Wilson, Arline F.; Musgrove, Peter B.; Buckley, Kevan A.; Pearce, Gill; Geoghegan, John; Wen Tang and John Collomosse
The main research question for this work is: what are the factors involved in developing a three-dimensional (3D) interactive model of a bone fracture, and how best may these be addressed? This paper presents work in progress on the development of an Interactive Bone Fragment Manipulation tool which could be used as an aid for pre-operative planning of surgery on complex fractures.

Item: Diffusion and Fractional Diffusion Based Image Processing
(The Eurographics Association, 2009) Blackledge, Jonathan Michael; Wen Tang and John Collomosse
We consider the background to describing strong scattering in terms of diffusive processes based on the diffusion equation. Intermediate strength scattering is then considered in terms of a fractional diffusion equation which is studied using results from fractional calculus. This approach is justified in terms of the generalization of a random walk model with no statistical bias in the phase to a random walk that has a phase bias and is thus only 'partially' or 'fractionally' diffusive. A Green's function solution to the fractional diffusion equation is studied and a result derived that provides a model for an incoherent image generated by light scattering from a tenuous random medium. Applications include image enhancement of star fields and other cosmological bodies imaged through interstellar dust clouds. An example of this application is given.

Item: Discrete Element Modelling Using a Parallelised Physics Engine
(The Eurographics Association, 2009) Longshaw, Stephen M.; Turner, Martin J.; Finch, Emma; Gawthorpe, Robert; Wen Tang and John Collomosse
Discrete Element Modelling (DEM) is a technique used widely throughout science and engineering. It offers a convenient method with which to numerically simulate a system prone to developing discontinuities within its structure. Often the technique gets overlooked, as designing and implementing a model on a scale large enough to be worthwhile can be both time-consuming and require specialist programming skills. Currently there are a few notable efforts to produce homogenised software to allow researchers to quickly design and run DEMs with in excess of 1 million elements. However, these applications, while open source, are still complex in nature and require significant input from their original publishers in order for them to include new features as a researcher needs them. Recently, software libraries known as physics engines have emerged, notably from the computer gaming and graphics industries. These are designed specifically to calculate the physical movement and interaction of a system of independent rigid bodies. They provide conceptual equivalents of real-world constructions with which an approximation of a realistic scenario can be quickly built. This paper presents a method to utilise the most notable of these engines, NVIDIA's PhysX, to produce a parallelised geological DEM capable of supporting in excess of a million elements.

Item: Distance Based Feature Detection on 3D Point Sets
(The Eurographics Association, 2009) Ramli, Ahmad; Ivrissimtzis, Ioannis; Wen Tang and John Collomosse
We propose a distance based algorithm for implicit feature detection on 3D point sets. Instead of directly determining whether a point belongs to a feature of the 3D point set or not, we first compute the distance between the point and its nearest feature. The obtained distance function is filtered, removing noise and outliers, and the features of the point set are computed as the zero set of the filtered function. Initial tests show that the proposed method is robust and can deal with the amount of noise usually expected in a point set.
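Editorial note: the sketch below follows the pipeline stated in the abstract above (per-point distance to the nearest feature, filtering, zero-set extraction). How the raw distance-to-feature estimate is computed is the core of the paper and is left as a stub here; the k-nearest-neighbour median filter and the zero-set threshold are assumptions for illustration.

```python
import numpy as np
from scipy.spatial import cKDTree

def detect_features(points, raw_feature_distance, k=16, eps=1e-3):
    """points: (N, 3) array.  raw_feature_distance: callable returning an
    (N,) array of estimated distances from each point to its nearest feature
    (stubbed -- this estimate is the paper's contribution).
    Returns the indices of points classified as feature points."""
    d = raw_feature_distance(points)            # noisy distance estimates
    tree = cKDTree(points)
    _, nbr_idx = tree.query(points, k=k)        # k nearest neighbours per point
    d_filtered = np.median(d[nbr_idx], axis=1)  # robust to outliers in d
    return np.nonzero(d_filtered < eps)[0]      # approximate zero set
```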
Item: An Edge-based Approach to Adaptively Refining a Mesh for Cloth Deformation
(The Eurographics Association, 2009) Simnett, Timothy J. R.; Laycock, Stephen D.; Day, Andy M.; Wen Tang and John Collomosse
Simulating cloth in real time is a challenging endeavour due to the number of triangles necessary to depict the potentially frequent changes in curvature, in combination with the physics calculations which model the deformations. To alleviate the costs, adaptive methods are often employed to refine the mesh in areas of high curvature; however, they often do not consider a decimation or coarsening of areas which were refined previously. In addition to this, the triangulation and consistency checks required to maintain a continuous mesh can be prohibitively time-consuming when attempting to simulate larger pieces of cloth. In this paper we present an efficient edge-based approach to adaptively refine and coarsen a dynamic mesh, with the aim of exploiting the varied nature of cloth by trading the level of detail in flat parts for increased detail in the curved regions of the cloth. An edge-based approach enables fast incremental refinement and coarsening, whereby only two triangles need updating on each split or join of an edge. The criteria for refinement include curvature, edge length and edge collisions. Simple collision detection is performed, allowing interactions between the cloth and the other objects in the environment.

Item: Facial Expression Transferring with a Deformable Model
(The Eurographics Association, 2009) Xiang, Guofu; Ju, Xiangyang; Holt, Patrik O'B.; Shang, Lin; Wen Tang and John Collomosse
This paper presents an automated approach to transferring facial expressions from a generic facial model onto various individual facial models without requiring any prior correspondences or manual interventions during the transferring process. This approach automatically detects the corresponding feature landmarks between models, and establishes the dense correspondences by means of an elastic energy-based deformable modelling approach. The deformed model, obtained through the deformation process, maintains the same topology as the generic model and the same shape as the individual one. After establishing the dense correspondences, we first transfer the facial expressions onto the deformed model by a deformation transfer technique, and then obtain the final expression models of individual models by interpolating the expression displacements on the deformed model. The results show that our approach is able to produce convincing results on landmark detection, correspondence establishment and expression transferring.

Item: Fast and Accurate Finite Element Method for Deformation Animations
(The Eurographics Association, 2009) Tang, Wen; Wan, Tao Ruan; Niquin, Cédric; Schildknecht, Alexandre; Wen Tang and John Collomosse
We present a matrix clustering method for speeding up finite element computations for non-rigid object animation. The method increases the efficiency of computing deformation dynamics through a compression scheme that decomposes the large force-displacement matrix into clusters of smaller matrices in order to facilitate the run-time computation of linear finite element based deformations. The deformation results are compared with the results produced by the modal analysis method and the standard linear finite element algorithm. We demonstrate that the proposed method is stable, with computational speed comparable to the modal analysis method. A hierarchical skeleton-based system is also implemented to add constraints to material nodes. Thus, real-time deformations can be directed by motion capture data sets or key-framed animations.

Item: A Framework for Physically Based Forest Fire Animation
(The Eurographics Association, 2009) Gundersen, Odd Erik; Skjermo, Jo; Wen Tang and John Collomosse
In this paper, we propose a conceptual framework for animating physically based forest fires. Animating forest fires is a computationally demanding task, as trees are intricate structures and fire is a highly complex process. The framework is divided into three conceptual levels: a large-scale forest fire simulation, a small-scale tree fire simulation, and an intermediate level connecting the two. Problems with, and possible solutions to, all three levels are discussed. Based on this discussion, a complete framework is proposed.

Item: A Haptic System for Drilling into Volume Data with Polygonal Tools
(The Eurographics Association, 2009) Liu, Yu; Laycock, Stephen D.; Wen Tang and John Collomosse
With the development of volume visualization technology for complex data sets come new challenges in terms of user interaction and information extraction. Volume haptics has proven itself to be an effective way of extracting valuable information by providing an extra sense through which to perceive three-dimensional data. This paper presents a haptic system for using arbitrary polygonal tools for drilling into volume data. By using this system, users can select from a variety of virtual tools to gain continuous and smooth force feedback during the drilling of volumetric data. As the user manipulates the haptic device, the tool typically only moves a small amount. By considering the locations of the data points that are modified when drilling, a relatively small number of voxels is determined each frame which must be recomputed by a Marching Cubes algorithm.
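Editorial note: the sketch below illustrates the incremental update implied by the abstract above: when drilling modifies only a few voxels, only the Marching Cubes cells touching those voxels need to be re-polygonised. The grid layout, the per-cell polygonisation stub and all names are assumptions, not the authors' implementation.

```python
import itertools
import numpy as np

def drill(density, modified_voxels, new_values, polygonise_cell, iso=0.5):
    """density: 3D numpy array of scalar values.  modified_voxels: list of
    (i, j, k) indices changed this frame, with matching new_values.
    polygonise_cell(density, cell, iso) is a stub for the standard per-cell
    Marching Cubes table lookup.  Returns the dirty cells and their new
    triangles."""
    dirty_cells = set()
    for (i, j, k), value in zip(modified_voxels, new_values):
        density[i, j, k] = value
        # every cell whose 8 corners include this voxel must be rebuilt
        for di, dj, dk in itertools.product((-1, 0), repeat=3):
            ci, cj, ck = i + di, j + dj, k + dk
            if (0 <= ci < density.shape[0] - 1 and
                    0 <= cj < density.shape[1] - 1 and
                    0 <= ck < density.shape[2] - 1):
                dirty_cells.add((ci, cj, ck))
    triangles = {cell: polygonise_cell(density, cell, iso)
                 for cell in dirty_cells}
    return dirty_cells, triangles
```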
Item: Hardware Accelerated Shaders Using FPGAs
(The Eurographics Association, 2009) Goddard, Luke; Stephenson, Ian; Wen Tang and John Collomosse
We demonstrate that Field Programmable Gate Arrays (FPGAs) can be used to accelerate shading of surfaces for production quality rendering (a task standard interactive graphics hardware is generally ill-suited to) by allowing circuits to be dynamically created at run time on standard commercial logic boards. By compiling shaders to hardware descriptions, they can be executed on an FPGA with the performance of hardware, without sacrificing the flexibility of software implementations. The resulting circuits are fully pipelined and, for circuits within the capacity of the FPGA, can shade MicroPolygons at a fixed rate independent of shader complexity.

Item: Higher Dimensional Vector Field Visualization: A Survey
(The Eurographics Association, 2009) Peng, Zhenmin; Laramee, Robert S.; Wen Tang and John Collomosse
Vector field visualization research has evolved very rapidly over the last two decades. There is growing consensus amongst the research community that the challenge of two-dimensional vector field visualization is virtually solved as a result of the tremendous amount of effort put into this problem. Two-dimensional flow, both steady and unsteady, can be visualized in real time, with complete coverage of the flow, without much difficulty. However, the same cannot be said of flow in higher spatial dimensions, e.g. surfaces in 3D (2.5D) or volumetric flow (3D). We present a survey of higher-spatial-dimensional flow visualization techniques based on the presumption that little work remains for the case of two-dimensional flow, whereas many challenges still remain for the cases of 2.5D and 3D domains. This survey provides the most up-to-date review of the state of the art of flow visualization in higher dimensions. The reader is provided with a high-level overview of research in the field, highlighting both solved and unsolved problems in this rapidly evolving direction of research.

Item: An Improved Precise Multi-contact Haptic Visualization
(The Eurographics Association, 2009) Flasar, Jan; Kovalcík, Vít; Sochor, Jirí; Wen Tang and John Collomosse
We present an improved multi-contact haptic visualization method based on the Spatialized Normal Cone Hierarchies (SNCH). Though this approach is not entirely new, we have implemented several improvements in order to significantly increase precision and robustness over the previous method. As a consequence, we are able to simulate much harder surfaces and give users the chance to feel smaller features on the surface compared to the original approach. This was achieved mostly by using more precise triangle-to-triangle distance calculations and a different triangle visibility algorithm. As these computations are expensive, we have also developed a new technique to reduce the number of calculations required. Currently, our algorithm is capable of visualizing haptic interactions between two 3D models consisting of tens of thousands of triangles. The simulation is performed in real time and is seamlessly integrated into a virtual-reality component-based system named VRECKO.
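Editorial note: the improvement above is attributed partly to more precise triangle-to-triangle distance calculations. As background only, the sketch below gives a standard closest-point-on-triangle routine of the kind such distance queries are commonly built from; it is a textbook construction, not the authors' improved algorithm.

```python
import numpy as np

def closest_point_on_triangle(p, a, b, c):
    """Return the point on triangle (a, b, c) closest to p, using the standard
    barycentric-region tests.  All inputs are 3-component numpy arrays."""
    ab, ac, ap = b - a, c - a, p - a
    d1, d2 = ab.dot(ap), ac.dot(ap)
    if d1 <= 0.0 and d2 <= 0.0:
        return a                                           # vertex region A
    bp = p - b
    d3, d4 = ab.dot(bp), ac.dot(bp)
    if d3 >= 0.0 and d4 <= d3:
        return b                                           # vertex region B
    vc = d1 * d4 - d3 * d2
    if vc <= 0.0 and d1 >= 0.0 and d3 <= 0.0:
        return a + (d1 / (d1 - d3)) * ab                   # edge region AB
    cp = p - c
    d5, d6 = ab.dot(cp), ac.dot(cp)
    if d6 >= 0.0 and d5 <= d6:
        return c                                           # vertex region C
    vb = d5 * d2 - d1 * d6
    if vb <= 0.0 and d2 >= 0.0 and d6 <= 0.0:
        return a + (d2 / (d2 - d6)) * ac                   # edge region AC
    va = d3 * d6 - d5 * d4
    if va <= 0.0 and (d4 - d3) >= 0.0 and (d5 - d6) >= 0.0:
        return b + ((d4 - d3) / ((d4 - d3) + (d5 - d6))) * (c - b)  # edge BC
    denom = 1.0 / (va + vb + vc)
    return a + ab * (vb * denom) + ac * (vc * denom)       # interior (face)

# example: distance from a point to a triangle
# q = closest_point_on_triangle(np.array([0.2, 0.9, 1.0]),
#                               np.array([0.0, 0.0, 0.0]),
#                               np.array([1.0, 0.0, 0.0]),
#                               np.array([0.0, 1.0, 0.0]))
# dist = np.linalg.norm(np.array([0.2, 0.9, 1.0]) - q)
```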