Browsing by Author "Dulecha, Tinsae Gebrechristos"
Now showing 1 - 5 of 5
Item Crack Detection in Single- and Multi-Light Images of Painted Surfaces using Convolutional Neural Networks (The Eurographics Association, 2019)
Dulecha, Tinsae Gebrechristos; Giachetti, Andrea; Pintus, Ruggero; Ciortan, Irina; Villanueva, Alberto Jaspe; Gobbetti, Enrico; Rizvic, Selma and Rodriguez Echavarria, Karina
Cracks represent an imminent danger for painted surfaces and need to be detected before they degenerate into more severe aging effects such as color loss. Automatic detection of cracks in images of painted surfaces would therefore be extremely useful for art conservators; however, classical image processing solutions are not effective at detecting them and distinguishing them from other lines or surface features. A possible way to improve the quality of crack detection is to exploit Multi-Light Image Collections (MLIC), which are often acquired in the Cultural Heritage domain thanks to the diffusion of the Reflectance Transformation Imaging (RTI) technique, allowing a low-cost and rich digitization of artworks' surfaces. In this paper, we propose a pipeline for the detection of cracks on egg-tempera paintings from multi-light image acquisitions that can also be used on single images. The method is based on single- or multi-light edge detection and on a custom Convolutional Neural Network, trained on RTI data, that classifies image patches around edge points as crack or non-crack. The pipeline classifies regions with cracks with good accuracy when applied to MLICs, and it still gives reasonable results when used on single images. The analysis of performance for different lighting directions also reveals which lighting directions work best.

Item Disk-NeuralRTI: Optimized NeuralRTI Relighting through Knowledge Distillation (The Eurographics Association, 2024)
Dulecha, Tinsae Gebrechristos; Righetto, Leonardo; Pintus, Ruggero; Gobbetti, Enrico; Giachetti, Andrea; Caputo, Ariel; Garro, Valeria; Giachetti, Andrea; Castellani, Umberto; Dulecha, Tinsae Gebrechristos
Relightable images created from Multi-Light Image Collections (MLICs) are among the most employed models for interactive object exploration in cultural heritage (CH). In recent years, neural representations have been shown to produce higher-quality images at storage costs similar to those of more classic analytical models such as Polynomial Texture Maps (PTM) or Hemispherical Harmonics (HSH). However, the Neural RTI models proposed in the literature perform image relighting with decoder networks that have a high number of parameters, making decoding slower than for classical methods. Despite recent efforts targeting model reduction and multi-resolution adaptive rendering, exploring high-resolution images, especially on high-pixel-count displays, still requires significant resources and is only achievable through progressive rendering in typical setups. In this work, we show how, by using knowledge distillation from an original (teacher) Neural RTI network, it is possible to create a more efficient RTI decoder (student network). We evaluated the performance of the network compression approach on existing RTI relighting benchmarks, including both synthetic and real datasets, and on novel acquisitions of high-resolution images. Experimental results show that the student's predictions remain close to the teacher's with up to 80% parameter reduction and almost ten times faster rendering when embedded in an online viewer.
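
For the crack-detection entry above, the following is a minimal, illustrative sketch of the patch-classification idea: edge points are extracted first, and a small CNN then labels the patch around each edge point as crack or non-crack. The architecture, patch size, and the use of a Canny detector are assumptions for illustration, not the authors' exact pipeline:

# Illustrative sketch of patch-based crack classification (hypothetical
# architecture; the paper's exact network and patch size may differ).
import numpy as np
import torch
import torch.nn as nn
import cv2  # OpenCV, used here only for a generic Canny edge detector

class PatchCrackClassifier(nn.Module):
    """Tiny CNN labelling NxN patches around edge points as crack / non-crack."""
    def __init__(self, patch_size=32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * (patch_size // 4) ** 2, 64), nn.ReLU(),
            nn.Linear(64, 2),  # two classes: crack vs. non-crack
        )

    def forward(self, x):
        return self.head(self.features(x))

def classify_edge_points(gray_image, model, patch_size=32, threshold=100):
    """Run edge detection, then classify the patch around each edge pixel."""
    edges = cv2.Canny(gray_image, threshold, 2 * threshold)
    half = patch_size // 2
    ys, xs = np.nonzero(edges)
    labels = {}
    model.eval()
    with torch.no_grad():
        for y, x in zip(ys, xs):
            patch = gray_image[y - half:y + half, x - half:x + half]
            if patch.shape != (patch_size, patch_size):
                continue  # skip edge points too close to the image border
            t = torch.from_numpy(patch).float().div(255.0)[None, None]
            labels[(y, x)] = int(model(t).argmax(dim=1))
    return edges, labels
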
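
For the Disk-NeuralRTI entry, here is a minimal sketch of response-based knowledge distillation between a large teacher decoder and a compact student decoder; layer sizes, the latent feature dimension, and the loss are illustrative assumptions rather than the paper's actual configuration:

# Illustrative sketch of knowledge distillation for an RTI decoder: a small
# "student" MLP is trained to reproduce the relit colours predicted by a
# larger, pretrained "teacher" decoder.
import torch
import torch.nn as nn

def make_decoder(latent_dim=9, hidden=128, layers=4):
    """MLP mapping (per-pixel latent code, 2D light direction) -> RGB."""
    dims = [latent_dim + 2] + [hidden] * layers + [3]
    blocks = []
    for i in range(len(dims) - 1):
        blocks.append(nn.Linear(dims[i], dims[i + 1]))
        if i < len(dims) - 2:
            blocks.append(nn.ELU())
    return nn.Sequential(*blocks)

teacher = make_decoder(hidden=128, layers=4)   # large decoder (assumed pretrained)
student = make_decoder(hidden=32, layers=2)    # compact decoder to distil into
teacher.eval()

optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
mse = nn.MSELoss()

def distillation_step(latent_codes, light_dirs):
    """One training step: match the student's output to the teacher's."""
    inputs = torch.cat([latent_codes, light_dirs], dim=-1)
    with torch.no_grad():
        target_rgb = teacher(inputs)           # soft targets from the teacher
    pred_rgb = student(inputs)
    loss = mse(pred_rgb, target_rgb)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Example: a batch of random per-pixel codes and light directions.
loss = distillation_step(torch.randn(1024, 9), torch.rand(1024, 2) * 2 - 1)
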
Item MLIC-Synthetizer: a Synthetic Multi-Light Image Collection Generator (The Eurographics Association, 2019)
Dulecha, Tinsae Gebrechristos; Dall'Alba, Andrea; Giachetti, Andrea; Agus, Marco and Corsini, Massimiliano and Pintus, Ruggero
We present MLIC-Synthetizer, a Blender plugin specifically designed for the generation of synthetic Multi-Light Image Collections using physically-based rendering. This tool makes it easy to generate large amounts of test data useful for the evaluation of Photometric Stereo algorithms, the validation of Reflectance Transformation Imaging calibration and processing methods, relighting methods, and more. Multi-pass rendering allows the generation of images with associated shadow and specularity ground-truth maps, ground-truth normals, and material segmentation masks. Furthermore, loops over material parameters allow the automatic generation of datasets with pre-defined material parameter ranges that can be used to train robust learning-based algorithms for 3D reconstruction, relighting, and material segmentation.
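
As an illustration of the kind of workflow the MLIC-Synthetizer entry describes, the following Blender Python sketch moves a single point light around a circle and renders one image per position; it is not the plugin's actual code, and the light count, placement, and output path are assumptions:

# Illustrative outline of synthetic MLIC generation with Blender's Python API
# (to be run inside Blender, with a camera and object already in the scene).
import math
import bpy

def render_mlic(num_lights=16, radius=2.0, height=1.5, out_dir="/tmp/mlic"):
    scene = bpy.context.scene
    # Create a single point light that is moved between renders.
    light_data = bpy.data.lights.new(name="MLIC_Light", type='POINT')
    light_obj = bpy.data.objects.new(name="MLIC_Light", object_data=light_data)
    scene.collection.objects.link(light_obj)

    for i in range(num_lights):
        angle = 2.0 * math.pi * i / num_lights
        # Place the light on a circle above the object (fixed camera assumed).
        light_obj.location = (radius * math.cos(angle),
                              radius * math.sin(angle),
                              height)
        scene.render.filepath = f"{out_dir}/image_{i:02d}.png"
        bpy.ops.render.render(write_still=True)

render_mlic()
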
Item Smart Tools and Applications in Graphics - Eurographics Italian Chapter Conference: Frontmatter (The Eurographics Association, 2024)
Caputo, Ariel; Garro, Valeria; Giachetti, Andrea; Castellani, Umberto; Dulecha, Tinsae Gebrechristos

Item SynthPS: a Benchmark for Evaluation of Photometric Stereo Algorithms for Cultural Heritage Applications (The Eurographics Association, 2020)
Dulecha, Tinsae Gebrechristos; Pintus, Ruggero; Gobbetti, Enrico; Giachetti, Andrea; Spagnuolo, Michela and Melero, Francisco Javier
Photometric Stereo (PS) is a technique for estimating surface normals from a collection of images captured from a fixed viewpoint under variable lighting. Over the years, several methods have been proposed for the task, trying to cope with different materials, lights, and camera calibration issues. An accurate evaluation and selection of the best PS methods for different materials and acquisition setups is a fundamental step towards the accurate quantitative reconstruction of objects' shapes. In particular, it would boost quantitative reconstruction in the Cultural Heritage domain, where large numbers of Multi-Light Image Collections are captured with light domes or handheld Reflectance Transformation Imaging protocols. However, the lack of benchmarks specifically designed for this goal makes it difficult to compare the available methods and choose the most suitable technique for practical applications. An ideal benchmark should enable the evaluation of the quality of the reconstructed normals on the kinds of surfaces typically captured in real-world applications, possibly evaluating performance variability as a function of material properties, light distribution, and image quality. The evaluation should not depend on light and camera calibration issues. In this paper, we propose a benchmark of this kind, SynthPS, which includes synthetic, physically-based renderings of Cultural Heritage object models with different assigned materials. SynthPS allowed us to evaluate the performance of classical, robust, and learning-based Photometric Stereo approaches on different materials with different light distributions, also analyzing their robustness against errors typically arising in practical acquisition settings, including gamma correction and light calibration errors.
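
To make the Photometric Stereo setting concrete, here is a minimal NumPy sketch of the classical least-squares (Lambertian) baseline that benchmarks such as SynthPS evaluate, together with the mean angular error commonly used to score estimated normals against ground truth; it is an illustration, not SynthPS code:

# Classical least-squares Photometric Stereo under a Lambertian assumption,
# plus the mean-angular-error metric for comparing normals to ground truth.
import numpy as np

def lambertian_photometric_stereo(images, light_dirs):
    """
    images:     (K, H, W) grayscale intensities under K known lights
    light_dirs: (K, 3) unit light directions
    Returns per-pixel unit normals of shape (H, W, 3).
    """
    K, H, W = images.shape
    I = images.reshape(K, -1)                             # (K, H*W)
    # Solve L @ (albedo * n) = I for every pixel in the least-squares sense.
    G, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)    # (3, H*W)
    albedo = np.linalg.norm(G, axis=0) + 1e-8
    normals = (G / albedo).T.reshape(H, W, 3)
    return normals

def mean_angular_error_deg(est, gt):
    """Mean angle (in degrees) between estimated and ground-truth normals."""
    dot = np.clip(np.sum(est * gt, axis=-1), -1.0, 1.0)
    return float(np.degrees(np.arccos(dot)).mean())
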