Eurographics Digital Library

This is the DSpace 7 platform of the Eurographics Digital Library.
  • The contents of the Eurographics Digital Library Archive are freely accessible. Only the full-text documents of the journal Computer Graphics Forum (joint property of Wiley and Eurographics) are restricted to Eurographics members, members of institutions holding an Institutional Membership with Eurographics, and users of the TIB Hannover. Item pages provide purchase links to the TIB Hannover.
  • As a Eurographics member, you can log in with your email address and password from https://services.eg.org. If you belong to an institutional member and are using a computer within an IP range registered with Eurographics, you can access the documents directly.
  • From 2022 onward, all new publications by Eurographics are licensed under Creative Commons. Publishing with Eurographics is Plan-S compliant. Please see the Eurographics Licensing and Open Access Policy for more details.
 

Recent Submissions

Item
Constrained Spectral Uplifting for HDR Environment Maps
(Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2025) Tódová, L.; Wilkie, A.
Spectral representation of assets is an important precondition for achieving physical realism in rendering. However, defining assets by their spectral distribution is complicated and tedious. Therefore, it has become general practice to create RGB assets and convert them into their spectral counterparts prior to rendering. This process is called spectral uplifting. While a multitude of techniques focusing on reflectance uplifting exist, the current state of the art of uplifting emission for image‐based lighting consists of simply scaling reflectance uplifts. Although this is usable insofar as the obtained overall scene appearance is not unrealistic, the generated emission spectra are only metamers of the original illumination. This, in turn, can cause deviations from the expected appearance even if the rest of the scene corresponds to real‐world data. In a recent publication, we proposed a method capable of uplifting HDR environment maps based on spectral measurements of light sources similar to those present in the maps. To identify the illuminants, we employ an extensive set of emission measurements, and we combine the results with an existing reflectance uplifting method. In addition, we address the problem of environment map capture for the purposes of a spectral rendering pipeline, for which we propose a novel solution. We further extend this work with a detailed evaluation of the method, both in terms of improved colour error and performance.
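Why metamers matter here can be shown with a few lines of linear algebra. The sketch below is a toy illustration, not the paper's constrained method: a 3×N matrix stands in for the CIE colour matching functions, and any null‐space component can be added to an uplifted spectrum without changing the RGB it reproduces, which is exactly the freedom that lets naive emission uplifts drift from the true illuminant.

```python
# Toy sketch of the metamer constraint behind spectral uplifting.
# The 3xN matrix M is a stand-in for real colour matching functions;
# the paper's constrained uplifting method is NOT reproduced here.
import numpy as np

wavelengths = np.linspace(400, 700, 31)  # nm, coarse grid

def gaussian(mu, sigma):
    return np.exp(-0.5 * ((wavelengths - mu) / sigma) ** 2)

# Toy "colour matching functions": three broad sensitivity curves.
M = np.stack([gaussian(600, 40), gaussian(550, 40), gaussian(450, 40)])

def to_rgb(spectrum):
    # Project a spectral distribution to tristimulus values.
    return M @ spectrum

target_rgb = np.array([0.6, 0.4, 0.2])

# Least-norm spectrum reproducing target_rgb: one metamer among infinitely many.
uplift = np.linalg.pinv(M) @ target_rgb
print(np.allclose(to_rgb(uplift), target_rgb))  # True: a valid metamer

# Any null-space vector of M can be added without changing the RGB, which is
# why a naively uplifted emission spectrum is "only a metamer" of the true
# illuminant and can shift reflected colours in a spectral renderer.
null_dir = np.linalg.svd(M)[2][-1]
print(np.allclose(to_rgb(uplift + 0.1 * null_dir), target_rgb))  # still True
```

Practical uplifting methods resolve this ambiguity with additional constraints (e.g. non‐negativity and smoothness, or, as in the work above, measured emission spectra of similar light sources).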
Item
Erratum to “Rational Bézier Guarding”
(Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2025)
Item
THGS: Lifelike Talking Human Avatar Synthesis From Monocular Video Via 3D Gaussian Splatting
(Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2025) Chen, Chuang; Yu, Lingyun; Yang, Quanwei; Zheng, Aihua; Xie, Hongtao
Despite the remarkable progress in 3D talking head generation, directly generating 3D talking human avatars still suffers from rigid facial expressions, distorted hand textures and out‐of‐sync lip movements. In this paper, we extend the speaker‐specific talking head generation task to talking human avatar synthesis and propose a novel pipeline, THGS, that animates lifelike talking human avatars using 3D Gaussian Splatting (3DGS). Given speech audio, expression and body poses as input, THGS effectively overcomes the limitations of 3DGS human reconstruction methods in capturing expressive dynamics from a short monocular video. Firstly, we introduce a simple yet effective blendshape‐based scheme for facial dynamics reconstruction, where subtle facial dynamics can be generated by linearly combining the static head model and expression blendshapes. Secondly, a speech‐driven mouth movement module is proposed for lip‐synced mouth movement animation, building connections between speech audio and mouth Gaussian movements. Thirdly, we employ a refinement strategy to optimize these parameters on the fly, which aligns hand movements and expressions better with the video input. Experimental results demonstrate that THGS can achieve high‐fidelity 3D talking human avatar animation at 150+ fps on a web‐based rendering system, meeting the requirements of real‐time applications. Our project page is at .
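The facial dynamics step described above is, at its core, a linear blendshape model: a neutral head plus weighted expression offsets. Below is a minimal sketch of that idea under assumed shapes and names (NUM_VERTS, deltas and animate are all hypothetical, not the THGS API); in a 3DGS pipeline the animated points would drive Gaussian parameters rather than plain mesh vertices.

```python
# Minimal linear blendshape sketch: neutral geometry + weighted offsets.
# All sizes and names are illustrative, not taken from the paper.
import numpy as np

NUM_VERTS, NUM_BLENDSHAPES = 5023, 52  # e.g. FLAME-like head, ARKit-like count

rng = np.random.default_rng(0)
static_head = rng.standard_normal((NUM_VERTS, 3))                     # neutral mesh
deltas = rng.standard_normal((NUM_BLENDSHAPES, NUM_VERTS, 3)) * 0.01  # expression offsets

def animate(weights):
    """Linear blendshape model: static head + weighted expression offsets."""
    assert weights.shape == (NUM_BLENDSHAPES,)
    return static_head + np.tensordot(weights, deltas, axes=1)

# Per-frame expression weights (random here; in practice from tracking/audio).
frame_weights = rng.uniform(0.0, 1.0, NUM_BLENDSHAPES)
print(animate(frame_weights).shape)  # (5023, 3): deformed positions for one frame
```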
Item
The State of the Art in User‐Adaptive Visualizations
(Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2024) Yanez, Fernando; Conati, Cristina; Ottley, Alvitta; Nobre, Carolina
Research shows that user traits can modulate the use of visualization systems and have a measurable influence on users' accuracy, speed, and attention when performing visual analysis. This highlights the importance of user‐adaptive visualizations that can adapt themselves to the characteristics and preferences of the user. However, there are very few such visualization systems, as creating them requires broad knowledge from various sub‐domains of the visualization community. A user‐adaptive system must consider which user traits it adapts to, its adaptation logic and the types of interventions it supports. In this STAR, we survey a broad space of existing literature and consolidate it to structure the process of creating user‐adaptive visualizations into five components: capture Ⓐ input from the user and any relevant peripheral information; perform computational Ⓑ analysis with this input to construct a Ⓒ user model; and employ Ⓓ adaptation logic to identify when and how to introduce Ⓔ interventions. Our novel taxonomy provides a road map for work in this area, describing the rich space of current approaches and highlighting open areas for future work.
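Read as a processing loop, the five components chain naturally: captured input is analysed to update a user model, which drives adaptation logic that triggers interventions. The sketch below is a hypothetical illustration of that loop; all names and the toy heuristic are invented for this example, not taken from the STAR.

```python
# Hypothetical five-component loop of a user-adaptive visualization.
from dataclasses import dataclass, field

@dataclass
class UserModel:
    # (C) user model: inferred traits, updated from analysed input
    traits: dict = field(default_factory=dict)

def capture_input(event_log):
    # (A) capture: raw interactions (clicks, hovers, gaze, dwell time, ...)
    return [e for e in event_log if e.get("type") in {"click", "hover"}]

def analyse(events, model):
    # (B) computational analysis: a trivial heuristic stand-in for the
    # learned inference methods the STAR surveys
    model.traits["deliberate"] = sum(e["type"] == "hover" for e in events) > 5
    return model

def adaptation_logic(model):
    # (D) adaptation logic: decide when and how to intervene
    return ["add_annotations"] if model.traits.get("deliberate") else []

def apply_interventions(actions, view):
    # (E) interventions: modify the visualization state
    view.setdefault("active", []).extend(actions)
    return view

log = [{"type": "hover"}] * 6 + [{"type": "click"}]
model = analyse(capture_input(log), UserModel())
print(apply_interventions(adaptation_logic(model), {}))
# {'active': ['add_annotations']}
```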
Item
Natural Language Generation for Visualizations: State of the Art, Challenges and Future Directions
(Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2024) Hoque, E.; Islam, M. Saidul
Natural language and visualization are two complementary modalities of human communication that play a crucial role in conveying information effectively. While visualizations help people discover trends, patterns and anomalies in data, natural language descriptions help explain these insights. Thus, combining text with visualizations is a prevalent technique for effectively delivering the core message of the data. Given the rise of natural language generation (NLG), there is a growing interest in automatically creating natural language descriptions for visualizations, which can be used for chart captions, for answering questions about charts, or for telling data‐driven stories. In this survey, we systematically review the state of the art on NLG for visualizations and introduce a taxonomy of the problem. The NLG tasks fall within the domain of natural language interfaces (NLIs) for visualization, an area that has garnered significant attention from both the research community and industry. To narrow down the scope of the survey, we primarily concentrate on research works that focus on text generation for visualizations. To characterize the NLG problem and the design space of proposed solutions, we pose five Wh‐questions: why and how NLG tasks are performed for visualizations, what the task inputs and outputs are, and where and when the generated texts are integrated with visualizations. We categorize the solutions used in the surveyed papers based on these ‘five Wh‐questions'. Finally, we discuss the key challenges and potential avenues for future research in this domain.
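To make the task concrete, the minimal template‐based generator below maps chart data (the input) to a caption (the output). It sits at the simplest end of the design space the survey maps, and every name and template in it is invented for illustration.

```python
# Toy template-based caption generator for a time series chart.
# Real NLG-for-visualization systems use far richer generation methods.
def caption(series_name, points):
    """Generate a one-sentence caption for (x, y) time-series data."""
    (x0, y0), (x1, y1) = points[0], points[-1]
    trend = "rose" if y1 > y0 else "fell" if y1 < y0 else "stayed flat"
    peak_x, peak_y = max(points, key=lambda p: p[1])
    return (f"{series_name} {trend} from {y0} in {x0} to {y1} in {x1}, "
            f"peaking at {peak_y} in {peak_x}.")

data = [(2019, 12), (2020, 18), (2021, 25), (2022, 21)]
print(caption("Annual sales", data))
# Annual sales rose from 12 in 2019 to 21 in 2022, peaking at 25 in 2021.
```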