GCH 2023 - Eurographics Workshop on Graphics and Cultural Heritage
Browsing GCH 2023 - Eurographics Workshop on Graphics and Cultural Heritage by Subject "Applied computing → Archaeology"
Item: One-to-many Reconstruction of 3D Geometry of Cultural Artifacts Using a Synthetically Trained Generative Model
(The Eurographics Association, 2023) Pöllabauer, Thomas; Kühn, Julius; Li, Jiayi; Kuijper, Arjan; Bucciero, Alberto; Fanini, Bruno; Graf, Holger; Pescarin, Sofia; Rizvic, Selma

Estimating the 3D shape of an object from a single image is a difficult problem. Modern approaches achieve good results for general objects based on real photographs, but perform worse on less expressive representations such as historic sketches. Our automated approach generates a variety of detailed 3D representations from a single sketch depicting a medieval statue, and can be guided by multi-modal inputs such as text prompts. It relies solely on synthetic data for training, making it adoptable even when only a small number of training examples is available. Our solution allows domain experts such as curators to interactively reconstruct potential appearances of lost artifacts.

Item: R-CNN based Polygonal Wedge Detection Learned from Annotated 3D Renderings and Mapped Photographs of Open Data Cuneiform Tablets
(The Eurographics Association, 2023) Stötzner, Ernst; Homburg, Timo; Bullenkamp, Jan Philipp; Mara, Hubert; Bucciero, Alberto; Fanini, Bruno; Graf, Holger; Pescarin, Sofia; Rizvic, Selma

Motivated by the demands of Digital Assyriology and the challenges of detecting cuneiform signs, we propose a new approach using an R-CNN architecture to classify and localize wedges. We utilize the 3D models of 1977 cuneiform tablets from the Frau Professor Hilprecht Collection, available as open data. About 500 of these tablets have a transcription available in the Cuneiform Digital Library Initiative (CDLI) database. We annotated 21,000 cuneiform signs as well as 4,700 wedges, resulting in the new open data Mainz Cuneiform Benchmark Dataset (MaiCuBeDa), which includes metadata, cropped signs, and, for a subset, wedges. The latter is also a good basis for manual paleography.
Our inputs are MSII renderings computed using the GigaMesh Software Framework, and photographs with the annotations automatically transferred from the renderings. Our approach consists of a pipeline with two components: a sign detector and a wedge detector. The sign detector uses a RepPoints model with a ResNet18 backbone to locate individual cuneiform characters in the tablet segment image. The signs are then cropped based on the detected locations and fed into the wedge detector. The wedge detector is based on the idea of the Point R-CNN approach: it uses a Feature Pyramid Network (FPN) and RoI Align to predict the positions and classes of the wedges. The method is evaluated using different hyperparameters, and post-processing techniques such as Non-Maximum Suppression (NMS) are applied for refinement. The proposed method shows promising results in cuneiform wedge detection. Our detector was evaluated using the Gottstein system and with the PaleoCodage encoding. Our results show that the sign detector performs better when trained on 3D renderings than on photographs: detectors trained on photographs alone are usually less accurate, and accuracy on photographs improves when 3D renderings are included in the training data. Overall, our pipeline achieves decent results, with some limitations due to the relatively small amount of data. However, even small amounts of high-quality renderings of 3D datasets with expert annotations dramatically improved sign detection.

Item: Towards Crowd-Sourced Collaborative Fragment Matching
(The Eurographics Association, 2023) Houska, Peter; Kloiber, Simon; Masur, Alessandra; Lengauer, Stefan; Karl, Stephan; Preiner, Reinhold; Bucciero, Alberto; Fanini, Bruno; Graf, Holger; Pescarin, Sofia; Rizvic, Selma

Many artifacts of our archaeological heritage are preserved only in fragments. Reassembling these parts into their original form is therefore an essential task for archaeologists.
Our project aims to incorporate the intellect of many participants from the broad public in solving this complex task. To this end, we develop a web-based 3D environment in which users can interactively and collaboratively reassemble virtual fragments of real-world artifacts, supported by computer-aided methods. Our primary research focus lies on identifying how best to design and set up such a system in order to maximize collaboration efficiency. By participating in this open reassembly process, users can gain valuable insight into the archaeological task, raising awareness of our common cultural heritage among a broad audience.
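The Non-Maximum Suppression refinement step mentioned in the wedge-detection pipeline above can be sketched in plain Python. This is an illustrative sketch of the generic NMS technique only, not the authors' implementation; the box coordinates, scores, and IoU threshold below are invented for the example.

```python
# Minimal sketch of Non-Maximum Suppression (NMS) as used to refine
# overlapping wedge detections. Boxes are (x1, y1, x2, y2) tuples.

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_threshold=0.5):
    """Greedily keep the highest-scoring boxes, dropping any box that
    overlaps an already-kept box by more than iou_threshold."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) <= iou_threshold for j in keep):
            keep.append(i)
    return keep

# Toy example: two heavily overlapping detections and one separate one.
boxes = [(10, 10, 50, 50), (12, 11, 52, 49), (80, 80, 120, 120)]
scores = [0.9, 0.6, 0.8]
print(nms(boxes, scores))  # → [0, 2]: the overlapping pair collapses to one box
```

In a detection pipeline like the one described, this suppression runs on the per-class wedge boxes after the network's prediction head, so each physical wedge yields a single surviving box.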