Browsing by Author "Cignoni, Paolo"
Now showing 1 - 5 of 5
Item — Computational Fabrication of Macromolecules to Enhance Perception and Understanding of Biological Mechanisms (The Eurographics Association, 2019)
Authors: Alderighi, Thomas; Giorgi, Daniela; Malomo, Luigi; Cignoni, Paolo; Zoppè, Monica
Editors: Agus, Marco and Corsini, Massimiliano and Pintus, Ruggero
We propose a fabrication technique for the fast and cheap production of 3D replicas of proteins. We leverage silicone casting with rigid molds to produce flexible models that can be safely extracted from the mold and easily manipulated to simulate the biological interaction mechanisms between proteins. We believe that tangible models can be useful in education as well as in laboratory settings, and that they will ease the understanding of fundamental principles of macromolecular organization.

Item — High Dynamic Range Point Clouds for Real-Time Relighting (The Eurographics Association and John Wiley & Sons Ltd., 2019)
Authors: Sabbadin, Manuele; Palma, Gianpaolo; Banterle, Francesco; Boubekeur, Tamy; Cignoni, Paolo
Editors: Lee, Jehee and Theobalt, Christian and Wetzstein, Gordon
Acquired 3D point clouds make quick modeling of virtual scenes from the real world possible. With modern 3D capture pipelines, each point sample often comes with additional attributes such as a normal vector and a color response. Although rendering and processing such data has been extensively studied, little attention has been devoted to using the light transport hidden in the recorded per-sample color response to relight virtual objects in visual effects (VFX) look-dev or augmented reality (AR) scenarios. Typically, standard relighting environments exploit global environment maps together with a collection of local light probes to reflect the light mood of the real scene on the virtual object. We propose instead a unified spatial approximation of the radiance and visibility relationships present in the scene, in the form of a colored point cloud.
To do so, our method relies on two core components: High Dynamic Range (HDR) expansion and real-time Point-Based Global Illumination (PBGI). First, since an acquired color point cloud typically comes in Low Dynamic Range (LDR) format, we boost it using a single HDR photo exemplar of the captured scene, which may cover only part of it. We perform this expansion efficiently by first expanding the dynamic range of a set of renderings of the point cloud and then projecting these renderings onto the original cloud. At this stage, we propagate the expansion to the regions not covered by the renderings, or covered with low-quality dynamic range, by solving a Poisson system. Then, at rendering time, we use the resulting HDR point cloud to relight virtual objects, providing a diffuse model of the indirect illumination propagated by the environment. To do so, we design a PBGI algorithm that exploits the GPU's geometry shader stage, as well as a new mipmapping operator tailored for G-buffers, to achieve real-time performance. As a result, our method can effectively relight virtual objects exhibiting diffuse and glossy physically-based materials in real time. Furthermore, it accounts for the spatial embedding of the object within the 3D environment. We evaluate our approach on manufactured scenes to assess the error introduced at every step with respect to a perfect ground truth. We also report experiments with real captured data, covering a range of capture technologies, from active scanning to multiview stereo reconstruction.

Item — Texture Defragmentation for Photo-Reconstructed 3D Models (The Eurographics Association and John Wiley & Sons Ltd., 2021)
Authors: Maggiordomo, Andrea; Cignoni, Paolo; Tarini, Marco
Editors: Mitra, Niloy and Viola, Ivan
We propose a method to improve an existing parametrization (UV-map layout) of a textured 3D model, targeted explicitly at alleviating typical defects afflicting models generated with automatic photo-reconstruction tools from real-world objects.
This class of 3D data is becoming increasingly important thanks to the growing popularity of reliable, ready-to-use photogrammetry software packages. The resulting textured models are richly detailed, but their underlying parametrization typically falls short of many practical requirements, particularly exhibiting excessive fragmentation and the problems that follow from it. Producing a completely new UV-map with standard parametrization techniques, and then resampling a new texture image, is often neither practical nor desirable, for at least two reasons: first, these models have characteristics (such as inconsistencies and high resolution) that make them unfit for automatic or manual parametrization; second, the required resampling leads to unnecessary signal degradation, because this process is unaware of the original texel densities. In contrast, our method improves the existing UV-map instead of replacing it, balancing the reduction of map fragmentation against the signal degradation due to resampling, while also avoiding oversampling of the original signal. The proposed approach is fully automatic and extensively tested on a large benchmark of photo-reconstructed models; quantitative evaluation evidences a drastic and consistent improvement of the mappings.

Item — ViDA 3D: Towards a View-based Dataset for Aesthetic Prediction on 3D Models (The Eurographics Association, 2020)
Authors: Angelini, Mattia; Ferrulli, Vito; Banterle, Francesco; Corsini, Massimiliano; Pascali, Maria Antonietta; Cignoni, Paolo; Giorgi, Daniela
Editors: Biasotti, Silvia and Pintus, Ruggero and Berretti, Stefano
We present the ongoing effort to build the first benchmark dataset for aesthetic prediction on 3D models. The dataset is built on top of Sketchfab, a popular platform for 3D content sharing.
In our dataset, the visual 3D content is aligned with aesthetics-related metadata: each 3D model is associated with a number of snapshots taken from different camera positions, the number of times the model was viewed between its upload and its retrieval, the number of likes the model received, and the tags and comments received from users. The metadata provide precious supervisory information for data-driven research on 3D visual attractiveness and preference prediction. The paper's contribution is twofold. First, we introduce an interactive platform for visualizing data about Sketchfab. We report a detailed qualitative and quantitative analysis of numerical scores (views and likes collected by 3D models) and textual information (tags and comments) for different 3D object categories. The analysis of the content of Sketchfab provided us with the basis for selecting a reasoned subset of annotated models. The second contribution is the first version of the ViDA 3D dataset, which contains the full set of content required for data-driven approaches to 3D aesthetic analysis. While similar datasets are available for images, to our knowledge this is the first attempt to create a benchmark for aesthetic prediction on 3D models. We believe our dataset can be a great resource to boost research on this hot and far-from-solved problem.

Item — A Visualization Tool for Scholarly Data (The Eurographics Association, 2019)
Authors: Salinas, Mario; Giorgi, Daniela; Ponchio, Federico; Cignoni, Paolo
Editors: Agus, Marco and Corsini, Massimiliano and Pintus, Ruggero
We propose ReviewerNet, an online, interactive visualization system aimed at improving the reviewer selection process in the academic domain. Given a paper submitted for publication, we assume that good candidate reviewers can be chosen among the authors of a small set of pertinent papers; ReviewerNet supports the construction of such a set of papers by visualizing and exploring a literature citation network.
Then, the system helps to select reviewers that are both well distributed in the scientific community and free of conflicts of interest, by visualizing the careers and co-authorship relations of candidate reviewers. The system is publicly available and is demonstrated in the field of Computer Graphics.
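As an aside on the "High Dynamic Range Point Clouds for Real-Time Relighting" abstract above: the step that propagates the HDR expansion to points not covered by the exemplar, by solving a Poisson system, can be sketched as a screened graph-Laplacian solve over the point cloud. The sketch below is an illustrative assumption, not the paper's implementation: the `propagate_gain` function, the k-NN graph, the soft data weight `w`, and the log-domain per-point gain are all choices made here for the example.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.sparse import coo_matrix, diags
from scipy.sparse.linalg import spsolve

def propagate_gain(points, known_idx, known_gain, k=8, w=100.0):
    """Spread per-point HDR expansion gains, known only on the subset of
    the cloud covered by the HDR exemplar, to every point by solving a
    screened Laplacian (Poisson-like) system in the log domain."""
    n = len(points)
    _, nbrs = cKDTree(points).query(points, k=k + 1)  # first hit is the point itself
    rows, cols, vals = [], [], []
    for i in range(n):
        for j in nbrs[i, 1:]:
            # Symmetrized k-NN graph Laplacian entries for edge (i, j).
            rows += [i, i, j, j]
            cols += [i, j, j, i]
            vals += [1.0, -1.0, 1.0, -1.0]
    L = coo_matrix((vals, (rows, cols)), shape=(n, n)).tocsr()  # duplicates sum up
    d = np.zeros(n)
    d[known_idx] = w                       # soft data term on covered points
    b = np.zeros(n)
    b[known_idx] = w * np.log(known_gain)  # log-domain target gains
    x = spsolve(L + diags(d), b)           # screened Poisson solve
    return np.exp(x)                       # smooth per-point expansion gain
```

The Laplacian term keeps the gain field smooth across neighboring points, while the diagonal data term anchors it where the exemplar provides reliable HDR values, which mirrors the propagate-then-smooth behavior the abstract describes at a high level.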