Volume 44 (2025)
Browsing Volume 44 (2025) by Title
Now showing 1 - 20 of 36
Item: Automatic Inbetweening for Stroke‐Based Painterly Animation (Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2024)
Barroso, Nicolas; Fondevilla, Amélie; Vanderhaeghe, David
Painterly 2D animation, like the paint‐on‐glass technique, is a tedious task performed by skilled artists, primarily using traditional manual methods. Although CG tools can simplify the creation process, previous works often focus on temporal coherence, which typically results in the loss of the handmade look and feel. In contrast to cartoon animation, where regions are typically filled with smooth gradients, stroke‐based stylized 2D animation requires careful consideration of how shapes are filled, as each stroke may be perceived individually. We propose a method to generate intermediate frames using example keyframes and a motion description. This method allows artists to create only one image for every five to ten output images in the animation, with the remaining intermediate frames generated automatically as plausible inbetweens.
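A toy sketch (not the authors' algorithm) of what generating inbetweens from two keyframes can look like: per‐stroke linear interpolation of geometry and appearance, assuming an index‐wise stroke correspondence. The paper additionally drives the result with a motion description and preserves the handmade look, neither of which is modelled here.

```python
import numpy as np

def inbetween_strokes(key_a, key_b, t):
    """Blend two corresponding stroke sets at parameter t in [0, 1].

    key_a, key_b: lists of dicts with 'points' (N x 2 array), 'width'
    and 'colour' (3-vector). Index-wise correspondence is a simplifying
    assumption; the paper's method also uses a motion description.
    """
    frames = []
    for a, b in zip(key_a, key_b):
        frames.append({
            "points": (1.0 - t) * a["points"] + t * b["points"],
            "width":  (1.0 - t) * a["width"]  + t * b["width"],
            "colour": (1.0 - t) * a["colour"] + t * b["colour"],
        })
    return frames

# One keyframe for every five outputs: generate the four inbetweens.
key_a = [{"points": np.array([[0.0, 0.0], [1.0, 0.5]]),
          "width": 2.0, "colour": np.array([0.8, 0.2, 0.1])}]
key_b = [{"points": np.array([[0.2, 0.1], [1.3, 0.9]]),
          "width": 3.0, "colour": np.array([0.9, 0.3, 0.1])}]
inbetweens = [inbetween_strokes(key_a, key_b, t) for t in (0.2, 0.4, 0.6, 0.8)]
```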
Item: BI‐LAVA: Biocuration With Hierarchical Image Labelling Through Active Learning and Visual Analytics (Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2024)
Trelles, Juan; Wentzel, Andrew; Berrios, William; Shatkay, Hagit; Marai, G. Elisabeta
In the biomedical domain, taxonomies organize the acquisition modalities of scientific images in hierarchical structures. Such taxonomies leverage large sets of correct image labels and provide essential information about the importance of a scientific publication, which could then be used in biocuration tasks. However, the hierarchical nature of the labels, the overhead of processing images, the absence or incompleteness of labelled data and the expertise required to label this type of data impede the creation of useful datasets for biocuration. From a multi‐year collaboration with biocurators and text‐mining researchers, we derive an iterative visual analytics and active learning (AL) strategy to address these challenges. We implement this strategy in a system called BI‐LAVA: Biocuration with Hierarchical Image Labelling through Active Learning and Visual Analytics. BI‐LAVA leverages a small set of image labels, a hierarchical set of image classifiers and AL to help model builders deal with incomplete ground‐truth labels, target a hierarchical taxonomy of image modalities and classify a large pool of unlabelled images. BI‐LAVA's front end uses custom encodings to represent data distributions, taxonomies, image projections and neighbourhoods of image thumbnails, which help model builders explore an unfamiliar image dataset and taxonomy and correct and generate labels. An evaluation with machine learning practitioners shows that our mixed human–machine approach successfully supports domain experts in understanding the characteristics of classes within the taxonomy, as well as validating and improving data quality in labelled and unlabelled collections.

Item: ConAn: Measuring and Evaluating User Confidence in Visual Data Analysis Under Uncertainty (Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2024)
Musleh, M.; Ceneda, D.; Ehlers, H.; Raidou, R. G.
User confidence plays an important role in guided visual data analysis scenarios, especially when uncertainty is involved in the analytical process. However, measuring confidence in practical scenarios remains an open challenge, as previous work relies primarily on self‐reporting methods. In this work, we propose a quantitative approach to measure user confidence, as opposed to trust, in an analytical scenario. We do so by exploiting the respective user interaction provenance graph and examining the impact of guidance using a set of network metrics. We assess the usefulness of our proposed metrics through a user study that correlates results obtained from self‐reported confidence assessments and our metrics, both with and without guidance. The results suggest that our metrics improve the evaluation of user confidence compared to available approaches. In particular, we found a correlation between self‐reported confidence and some of the proposed provenance network metrics. The quantitative results, though, do not show a statistically significant impact of the guidance on user confidence. An additional descriptive analysis suggests that guidance could impact users' confidence and that the qualitative analysis of the provenance network topology can provide a comprehensive view of changes in user confidence. Our results indicate that our proposed metrics and the provenance network graph representation support the evaluation of user confidence and, subsequently, the effective development of guidance in visual analytics.
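The quantitative core, computing network metrics over an interaction provenance graph, can be sketched with networkx. The graph and the metric choices below are illustrative assumptions rather than the paper's published metric set.

```python
import networkx as nx

# Hypothetical interaction provenance graph: nodes are analysis states,
# directed edges are user interactions. The metrics below are plausible
# confidence proxies, not the paper's exact set.
G = nx.DiGraph([
    ("start", "filter"), ("filter", "zoom"), ("zoom", "select"),
    ("select", "filter"),          # looping back suggests re-examination
    ("select", "annotate"), ("annotate", "end"),
])

metrics = {
    "n_states": G.number_of_nodes(),
    "n_interactions": G.number_of_edges(),
    "density": nx.density(G),                    # branching vs. linear exploration
    "n_cycles": len(list(nx.simple_cycles(G))),  # revisits of earlier states
    "avg_clustering": nx.average_clustering(G.to_undirected()),
}
print(metrics)
```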
Item: Conditional Font Generation With Content Pre‐Train and Style Filter (Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2024)
Hong, Yang; Li, Yinfei; Qiao, Xiaojun; Zhang, Junsong
Automatic font generation aims to streamline the design process by creating new fonts with minimal style references. This technology significantly reduces the manual labour and costs associated with traditional font design. Image‐to‐image translation has been the dominant approach, transforming font images from a source style to a target style using a few reference images. However, this framework struggles to fully decouple content from style, particularly when dealing with significant style shifts. Despite these limitations, image‐to‐image translation remains prevalent due to two main challenges faced by conditional generative models: (1) the inability to handle unseen characters and (2) the difficulty of providing precise content representations equivalent to the source font. Our approach tackles these issues by leveraging recent advancements in Chinese character representation research to pre‐train a robust content representation model. This model not only handles unseen characters but also generalizes to non‐existent ones, a capability absent in traditional image‐to‐image translation. We further propose a Transformer‐based Style Filter that not only accurately captures stylistic features from reference images but also handles any combination of them, making practical automated font generation more convenient. Additionally, we combine a content loss with commonly used pixel‐ and perceptual‐level losses to refine the generated results from a comprehensive perspective. Extensive experiments validate the effectiveness of our method, particularly its ability to handle unseen characters, demonstrating significant performance gains over existing state‐of‐the‐art methods.

Item: Constrained Spectral Uplifting for HDR Environment Maps (Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2025)
Tódová, L.; Wilkie, A.
Spectral representation of assets is an important precondition for achieving physical realism in rendering. However, defining assets by their spectral distribution is complicated and tedious. Therefore, it has become general practice to create RGB assets and convert them into their spectral counterparts prior to rendering. This process is called spectral uplifting. While a multitude of techniques focusing on reflectance uplifting exist, the current state of the art for uplifting emission in image‐based lighting consists of simply scaling reflectance uplifts. Although this is usable insofar as the obtained overall scene appearance is not unrealistic, the generated emission spectra are only metamers of the original illumination. This, in turn, can cause deviations from the expected appearance even if the rest of the scene corresponds to real‐world data. In a recent publication, we proposed a method capable of uplifting HDR environment maps based on spectral measurements of light sources similar to those present in the maps. To identify the illuminants, we employ an extensive set of emission measurements, and we combine the results with an existing reflectance uplifting method. In addition, we address the problem of environment map capture for the purposes of a spectral rendering pipeline, for which we propose a novel solution. We further extend this work with a detailed evaluation of the method, both in terms of improved colour error and performance.
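To make the illuminant‐identification idea concrete, here is a heavily simplified sketch: it matches an environment‐map pixel against a small bank of measured emission spectra by chromaticity and rescales the winner to the pixel's brightness. The bank, the precomputed RGB values and the matching criterion are all assumptions for illustration; a real pipeline would integrate spectra against colour‐matching functions.

```python
import numpy as np

# Toy illuminant matching. The spectra and RGB values below are made up;
# each "measured" spectrum is assumed to come with a precomputed RGB
# rendering over some fixed wavelength grid.
bank = {
    "incandescent": (np.linspace(0.2, 1.0, 16), np.array([1.00, 0.75, 0.45])),
    "fluorescent":  (np.ones(16),               np.array([0.90, 0.95, 1.00])),
    "led_warm":     (np.linspace(0.4, 0.9, 16), np.array([1.00, 0.85, 0.60])),
}

def chromaticity(rgb):
    s = rgb.sum()
    return rgb[:2] / s if s > 0 else np.zeros(2)

def match_illuminant(pixel_rgb):
    cp = chromaticity(pixel_rgb)
    name = min(bank, key=lambda k: np.linalg.norm(chromaticity(bank[k][1]) - cp))
    spectrum, rgb = bank[name]
    # Scale the matched spectrum so it reproduces the pixel's brightness.
    scale = pixel_rgb.sum() / rgb.sum()
    return name, scale * spectrum

name, spec = match_illuminant(np.array([2.1, 1.6, 0.9]))  # a bright, warm pixel
print(name, spec[:4])
```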
Item: Continuous Toolpath Optimization for Simultaneous Four‐Axis Subtractive Manufacturing (Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2024)
Zhang, Zhenmin; Shi, Zihan; Zhong, Fanchao; Zhang, Kun; Zhang, Wenjing; Guo, Jianwei; Tu, Changhe; Zhao, Haisen
Simultaneous four‐axis machining involves a cutter that moves in all degrees of freedom during carving. This strategy provides higher‐quality surface finishing compared to positional machining, but it has not been well studied. In this study, we propose the first end‐to‐end computational framework to optimize the toolpath for fabricating complex models using simultaneous four‐axis subtractive manufacturing. In our technique, we first slice the input 3D model into uniformly distributed 2D layers. For each slicing layer, we perform an accessibility analysis for each intersected contour within the layer. We then proceed with over‐segmentation and a bottom‐up connecting process to generate a minimal number of fabricable segments. Finally, we propose post‐processing techniques to further optimize the tool direction and the transfer path between segments. Physical experiments on nine models demonstrate significant improvements in both fabrication quality and efficiency compared to the positional strategy and to two simultaneous toolpaths generated by industry‐standard CAM systems.
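The first step of the pipeline above, slicing the model into uniformly distributed 2D layers, can be illustrated in a few lines of NumPy; everything downstream (accessibility analysis, segmentation, toolpath optimization) is the paper's actual contribution and is not sketched here.

```python
import numpy as np

def slice_mesh(vertices, triangles, z):
    """Intersect a triangle mesh with the horizontal plane at height z.

    Returns a list of ((x0, y0), (x1, y1)) contour segments. Contour
    assembly and degenerate cases (vertices exactly on the plane) are
    ignored; this is only the uniform-slicing step.
    """
    segments = []
    for tri in triangles:
        pts = vertices[list(tri)]            # 3 x 3 array of corners
        d = pts[:, 2] - z                    # signed distance to the plane
        cut = []
        for i in range(3):
            a, b = pts[i], pts[(i + 1) % 3]
            da, db = d[i], d[(i + 1) % 3]
            if (da < 0) != (db < 0):         # edge crosses the plane
                t = da / (da - db)
                cut.append((a + t * (b - a))[:2])
        if len(cut) == 2:
            segments.append((tuple(cut[0]), tuple(cut[1])))
    return segments

# A unit tetrahedron sliced into uniformly spaced layers.
V = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
T = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]
layers = [slice_mesh(V, T, z) for z in np.linspace(0.1, 0.9, 5)]
```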
Item: DeepFracture: A Generative Approach for Predicting Brittle Fractures with Neural Discrete Representation Learning (Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2025)
Huang, Yuhang; Kanai, Takashi
In the field of brittle fracture animation, generating realistic destruction animations using physics‐based simulation methods is computationally expensive. While techniques based on Voronoi diagrams or pre‐fractured patterns are effective for real‐time applications, they fail to incorporate collision conditions when determining fractured shapes during runtime. This paper introduces a novel learning‐based approach for predicting fractured shapes based on collision dynamics at runtime. Our approach seamlessly integrates realistic brittle fracture animations with rigid body simulations, utilising boundary element method (BEM) brittle fracture simulations to generate training data. To integrate collision scenarios and fractured shapes into a deep learning framework, we introduce generative geometric segmentation, distinct from both instance and semantic segmentation, to represent 3D fragment shapes. We propose an eight‐dimensional latent code to address the challenge of optimising multiple discrete fracture pattern targets that share similar continuous collision latent codes. This code follows a discrete normal distribution corresponding to a specific fracture pattern within our latent impulse representation design. This adaptation enables the prediction of fractured shapes using neural discrete representation learning. Our experimental results show that our approach generates considerably more detailed brittle fractures than existing techniques, while the computational time is typically reduced compared to traditional simulation methods at comparable resolutions.

Item: Deep‐Learning‐Based Facial Retargeting Using Local Patches (Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2024)
Choi, Yeonsoo; Lee, Inyup; Cha, Sihun; Kim, Seonghyeon; Jung, Sunjin; Noh, Junyong
In the era of digital animation, the quest to produce lifelike facial animations for virtual characters has led to the development of various retargeting methods. While retargeting facial motion between models of similar shapes has been very successful, challenges arise when the retargeting is performed on stylized or exaggerated 3D characters that deviate significantly from human facial structures. In this scenario, it is important to consider the target character's facial structure and possible range of motion to preserve the semantics assumed by the original facial motions after retargeting. To achieve this, we propose a local patch‐based retargeting method that transfers facial animations captured in a source performance video to a target stylized 3D character. Our method consists of three modules. The Automatic Patch Extraction Module extracts local patches from the source video frame. These patches are processed through the Reenactment Module to generate correspondingly re‐enacted target local patches. The Weight Estimation Module calculates the animation parameters for the target character at every frame for the creation of a complete facial animation sequence. Extensive experiments demonstrate that our method can successfully transfer the semantic meaning of source facial expressions to stylized characters with considerable variations in facial feature proportion.

Item: Detecting, Interpreting and Modifying the Heterogeneous Causal Network in Multi‐Source Event Sequences (Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2024)
Xu, Shaobin; Sun, Minghui
Uncovering causal relations from event sequences to guide decision‐making has become an essential task across various domains. Unfortunately, this task remains a challenge because real‐world event sequences are usually collected from multiple sources. Most existing works are specifically designed for homogeneous causal analysis between events from a single source, without considering cross‐source causality. In this work, we propose a heterogeneous causal analysis algorithm to detect the heterogeneous causal network between high‐level events in multi‐source event sequences while preserving the causal semantic relationships between diverse data sources. Additionally, the flexibility of our algorithm allows it to incorporate high‐level event similarity into the learning model and to provide a fuzzy modification mechanism. Based on the algorithm, we further propose a visual analytics framework that supports interpreting the causal network at three granularities and offers a multi‐granularity modification mechanism to incorporate user feedback efficiently. We evaluate the accuracy of our algorithm through an experimental study, illustrate the usefulness of our system through a case study, and demonstrate the efficiency of our modification mechanisms through a user study.

Item: Dynamic Voxel‐Based Global Illumination (Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2024)
Cosin Ayerbe, Alejandro; Poulin, Pierre; Patow, Gustavo
Global illumination computation in real time has been an objective for computer graphics since its inception. Unfortunately, its implementation has until now challenged the most advanced hardware and software solutions. We propose a real‐time voxel‐based global illumination solution for a single light bounce that handles static and dynamic objects with diffuse materials under a dynamic light source. The combination of ray tracing and voxelization on the GPU offers scalability and performance. Our divide‐and‐conquer approach, which ray traces static and dynamic objects separately, reduces the recomputation load for updates of any number of dynamic objects. Our results demonstrate the effectiveness of our approach, allowing the real‐time display of global illumination effects, including colour bleeding and indirect shadows, for complex scenes containing millions of polygons.

Item: Editorial (Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2025)
Alliez, Pierre; Wimmer, Michael; Westermann, Rüdiger

Item: Efficient Environment Map Rendering Based on Decomposition (Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2024)
Wu, Yu‐Ting
This paper presents an efficient environment map sampling algorithm designed to render high‐quality, low‐noise images with only a few light samples, making it ideal for real‐time applications. We observe that bright pixels in the environment map produce high‐frequency shading effects, such as sharp shadows and highlights, while the rest influence the overall tone of the scene. Building on this insight, our approach differs from existing techniques by categorizing the pixels in an environment map into emissive and non‐emissive regions and developing specialized algorithms tailored to the distinct properties of each region. By decomposing the environment lighting, we ensure that light sources are deposited on bright pixels, leading to more accurate shadows and specular highlights. Additionally, this strategy allows us to exploit the smoothness of the low‐frequency component by rendering a smaller image with more lights, thereby enhancing shading accuracy. Extensive experiments demonstrate that our method significantly reduces shadow artefacts and image noise compared to previous techniques, while also achieving lower numerical errors across a range of illumination types, particularly under limited sample conditions.
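A minimal sketch of the decomposition idea, assuming a simple luminance‐quantile threshold: bright pixels become the emissive set and are importance‐sampled by luminance, while the remainder forms a low‐frequency residual. The threshold and luminance weights are conventional illustrative choices, not taken from the paper.

```python
import numpy as np

LUMA = np.array([0.2126, 0.7152, 0.0722])  # Rec. 709 luminance weights

def decompose_env_map(env, threshold=0.9):
    """Split an H x W x 3 linear-radiance map into emissive + residual."""
    lum = env @ LUMA
    cut = np.quantile(lum, threshold)       # top pixels act as light sources
    mask = lum >= cut
    emissive = np.where(mask[..., None], env, 0.0)
    return emissive, env - emissive, mask

def sample_lights(mask, lum, n):
    """Draw n light-sample pixel indices proportional to luminance."""
    w = np.where(mask, lum, 0.0).ravel()
    idx = np.random.default_rng(0).choice(w.size, size=n, p=w / w.sum())
    return np.unravel_index(idx, mask.shape)

env = np.random.default_rng(1).gamma(0.3, 2.0, size=(64, 128, 3))  # fake HDR map
emissive, residual, mask = decompose_env_map(env)
rows, cols = sample_lights(mask, env @ LUMA, n=8)
```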
Item: Erratum to “Rational Bézier Guarding” (Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2025)

Item: Generalized Lipschitz Tracing of Implicit Surfaces (Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2024)
Bán, Róbert; Valasek, Gábor
We present a versatile and robust framework to render implicit surfaces defined by black‐box functions that only provide function value queries. We assume that the input function is locally Lipschitz continuous; however, we presume no prior knowledge of its Lipschitz constants. Our pre‐processing step generates a discrete acceleration structure, a Lipschitz field, that provides data to infer local and directional Lipschitz upper bounds. These bounds are used to compute safe step sizes along rays during rendering. The Lipschitz field is constructed by generating local polynomial approximations to the input function, then bounding the derivatives of the approximating polynomials. The accuracy of the approximation is controlled by the polynomial degree and the granularity of the spatial resolution used during fitting, which is independent of the resolution of the Lipschitz field. We demonstrate that our process can be implemented in a massively parallel way, enabling straightforward integration into interactive and real‐time modelling workflows. Since the construction only requires function value evaluations, the input surface may be represented either procedurally or as an arbitrarily filtered grid of function samples. We query the original implicit representation during ray tracing; as such, we preserve the geometric and topological details of the input as long as the Lipschitz field supplies conservative estimates. We demonstrate our method on both procedural and discrete implicit surfaces and compare its exact and approximate variants.
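The safe‐stepping rule that Lipschitz bounds enable is easy to show in isolation: with an upper bound L on the Lipschitz constant, |f(p)| / L is a step along the ray that cannot overshoot the surface. The sketch below uses a single global L; the paper's contribution is inferring local, directional bounds from its precomputed Lipschitz field, which this toy does not model.

```python
import numpy as np

def lipschitz_trace(f, origin, direction, lipschitz, t_max=20.0,
                    eps=1e-4, max_steps=256):
    """March along origin + t * direction until f crosses zero.

    f: black-box implicit function (f < 0 inside the surface). Because
    |f| can change by at most L per unit distance, the surface cannot be
    closer than |f(p)| / L, which is therefore a safe step size.
    """
    t = 0.0
    for _ in range(max_steps):
        p = origin + t * direction
        d = f(p)
        if abs(d) < eps:
            return t                      # hit
        t += abs(d) / lipschitz           # safe step
        if t > t_max:
            break
    return None                           # miss

# A sphere scaled by 2 has Lipschitz constant 2; plain distance has 1.
scaled = lambda p: 2.0 * (np.linalg.norm(p) - 1.0)
t = lipschitz_trace(scaled, np.array([0.0, 0.0, -3.0]),
                    np.array([0.0, 0.0, 1.0]), lipschitz=2.0)
print(t)  # ~2.0: the ray hits the unit sphere at z = -1
```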
Item: A Generative Adversarial Network for Upsampling of Direct Volume Rendering Images (Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2024)
Jin, Ge; Jung, Younhyun; Fulham, Michael; Feng, Dagan; Kim, Jinman
Direct volume rendering (DVR) is an important tool for scientific and medical imaging visualization. Modern GPU acceleration has made DVR more accessible; however, the production of high‐quality rendered images at high frame rates is computationally expensive. We propose a deep learning method with a reduced computational demand. We leverage a conditional generative adversarial network (cGAN) to upsample DVR images (a rendered scene) produced with a reduced sampling rate, obtaining visual quality similar to that of a fully sampled method. Our dvrGAN is combined with a colour‐based loss function that is optimized for DVR images, where different structures, such as skin and bone, are distinguished by assigning them distinct colours. The loss function highlights the structural differences between images by examining pixel‐level colour, and thus helps identify, for instance, small bones in the limbs that may not be evident at reduced sampling rates. We evaluated our method in DVR of human computed tomography (CT) and CT angiography (CTA) volumes. Our method retained image quality and reduced computation time compared to fully sampled methods, and outperformed existing state‐of‐the‐art upsampling methods.
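One plausible reading of a colour‐based loss (the paper's exact formulation is not reproduced here) is a per‐pixel term that up‐weights locations where the two images disagree in colour, so that small but structurally important regions, like a thin bone rendered in the wrong tissue colour, are not averaged away.

```python
import numpy as np

def colour_weighted_loss(pred, target, w=4.0):
    """A hypothetical colour-aware image loss, not the paper's exact term.

    pred, target: H x W x 3 arrays in [0, 1]. Pixels whose colours differ
    strongly between the two images receive extra weight, so structural
    mistakes dominate the average instead of vanishing into it.
    """
    diff = np.abs(pred - target)                               # per-channel error
    colour_dist = np.linalg.norm(pred - target, axis=-1, keepdims=True)
    weight = 1.0 + w * colour_dist                             # emphasise colour changes
    return float((weight * diff).mean())

pred = np.random.default_rng(0).random((32, 32, 3))
target = pred.copy()
target[10:12, 10:12] = [1.0, 1.0, 0.9]    # a small "bone" patch changes colour
print(colour_weighted_loss(pred, target))
```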
Item: GeoCode: Interpretable Shape Programs (Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2025)
Pearl, Ofek; Lang, Itai; Hu, Yuhua; Yeh, Raymond A.; Hanocka, Rana
The task of crafting procedural programs capable of generating structurally valid 3D shapes easily and intuitively remains an elusive goal in computer vision and graphics. Within the graphics community, generating procedural 3D models has shifted to using node graph systems, which allow the artist to create complex shapes and animations through visual programming. As high‐level design tools, they have made procedural 3D modelling more accessible. However, crafting those node graphs demands expertise and training. We present GeoCode, a novel framework designed to extend an existing node graph system and significantly lower the bar for the creation of new procedural 3D shape programs. Our approach meticulously balances expressiveness and generalization for part‐based shapes. We propose a curated set of new geometric building blocks that are expressive and reusable across domains. We showcase three innovative and expressive programs developed through our technique and geometric building blocks. Our programs enforce intricate rules, empowering users to execute intuitive high‐level parameter edits that seamlessly propagate throughout the entire shape at a lower level while maintaining its validity. To evaluate the user‐friendliness of our geometric building blocks among non‐experts, we conduct a user study that demonstrates their ease of use and highlights their applicability across diverse domains. Empirical evidence shows the superior accuracy of GeoCode in inferring and recovering 3D shapes compared to an existing competitor. Furthermore, our method demonstrates superior expressiveness compared to alternatives that utilize coarse primitives. Notably, we illustrate the ability to execute controllable local and global shape manipulations. Our code, programs, datasets and Blender add‐on are available at .

Item: HPSCAN: Human Perception‐Based Scattered Data Clustering (Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2024)
Hartwig, S.; Onzenoodt, C. v.; Engel, D.; Hermosilla, P.; Ropinski, T.
Cluster separation is a task typically tackled by widely used clustering techniques, such as k‐means or DBSCAN. However, these algorithms are based on non‐perceptual metrics, and our experiments demonstrate that their output does not reflect human cluster perception. To bridge the gap between human cluster perception and machine‐computed clusters, we propose HPSCAN, a learning strategy that operates directly on scattered data. To learn perceptual cluster separation on such data, we crowdsourced the labelling of bivariate (scatterplot) datasets to 384 human participants. We train our HPSCAN model on these human‐annotated data. Instead of rendering these data as scatterplot images, we use their x and y point coordinates as input to a modified PointNet++ architecture, enabling direct inference on point clouds. In this work, we provide details on how we collected our dataset, report statistics of the resulting annotations, and investigate the perceptual agreement of cluster separation for real‐world data. We also report the training and evaluation protocol for HPSCAN and introduce a novel metric that measures the accuracy of a clustering technique against a group of human annotators. We explore predicting point‐wise human agreement to detect ambiguities. Finally, we compare our approach to 10 established clustering techniques and demonstrate that HPSCAN is capable of generalizing to unseen and out‐of‐scope data.
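The mismatch that motivates HPSCAN can be reproduced with standard tools: run non‐perceptual baselines such as k‐means and DBSCAN on a scatterplot and score them against human annotations. The data and the agreement score below (adjusted Rand index) are stand‐ins; HPSCAN introduces its own accuracy metric, which is not reproduced here.

```python
import numpy as np
from sklearn.cluster import DBSCAN, KMeans
from sklearn.metrics import adjusted_rand_score

# Two touching blobs that people typically read as two clusters.
rng = np.random.default_rng(0)
pts = np.vstack([rng.normal((0.0, 0.0), 0.6, (100, 2)),
                 rng.normal((2.0, 0.0), 0.6, (100, 2))])
human = np.repeat([0, 1], 100)   # stand-in for crowdsourced annotations

for name, labels in {
    "kmeans": KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(pts),
    "dbscan": DBSCAN(eps=0.4, min_samples=5).fit_predict(pts),
}.items():
    # Adjusted Rand index as a simple machine-vs-human agreement score.
    print(name, adjusted_rand_score(human, labels))
```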
Item: A Hybrid Lagrangian–Eulerian Formulation of Thin‐Shell Fracture (Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2025)
Fan, L.; Chitalu, F. M.; Komura, T.
The hybrid Lagrangian/Eulerian formulation of continuum shells is highly effective for producing challenging simulations of thin materials like cloth with bending resistance and frictional contact. However, existing formulations are restricted to materials that do not undergo tearing or fracture, owing to the difficulties of incorporating strong discontinuities of field quantities like velocity via basis enrichment while maintaining continuity or regularity. We propose an extension of this formulation to simulate dynamic tearing and fracturing of thin shells using Kirchhoff–Love continuum theory. Damage, which manifests as cracks or tears, is propagated by tracking the evolution of a time‐dependent phase‐field in the co‐dimensional manifold, where a moving least‐squares (MLS) approximation then captures the strong discontinuities of interpolated field quantities near the crack. Our approach is capable of simulating challenging scenarios of tearing and fracture, all the while harnessing the existing benefits of the hybrid Lagrangian/Eulerian formulation to expand the domain of possible effects. The method is also amenable to user‐guided control, which serves to influence the propagation of cracks or tears so that they follow prescribed paths during simulation.

Item: Immersive and Interactive Learning With eDIVE: A Solution for Creating Collaborative VR Education Experiences (Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2025)
Brůža, Vojtěch; Šašinková, Alžběta; Šašinka, Čeněk; Stachoň, Zdeněk; Kozlíková, Barbora; Chmelík, Jiří
Virtual reality (VR) technology has become increasingly popular in education as a tool for enhancing learning experiences and engagement. This paper addresses the lack of a suitable tool for creating multi‐user immersive educational content for virtual environments by introducing a novel solution called eDIVE. The solution is designed to facilitate the development of collaborative immersive educational VR experiences. Developed in close collaboration with psychologists and educators, it addresses specific functional needs identified by these professionals. eDIVE allows creators to extensively modify, expand or develop entirely new VR experiences, ultimately making collaborative VR education more accessible and inclusive for all stakeholders. Its utility is demonstrated through exemplary learning scenarios, developed in collaboration with experienced educators, and evaluated through real‐world user studies.

Item: Issue Information (Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2025)