Browsing by Author "Agus, Marco"
Now showing 1 - 13 of 13
Item EuroVis 2021 Short Papers: Frontmatter (The Eurographics Association, 2021) Agus, Marco; Garth, Christoph; Kerren, Andreas

Item EuroVis 2022 Short Papers: Frontmatter (The Eurographics Association, 2022) Agus, Marco; Aigner, Wolfgang; Hoellt, Thomas

Item A Framework for GPU-accelerated Exploration of Massive Time-varying Rectilinear Scalar Volumes (The Eurographics Association and John Wiley & Sons Ltd., 2019) Marton, Fabio; Agus, Marco; Gobbetti, Enrico; Gleicher, Michael and Viola, Ivan and Leitte, Heike
We introduce a novel, flexible approach to the spatiotemporal exploration of rectilinear scalar volumes. Our out-of-core representation, based on per-frame levels of hierarchically tiled non-redundant 3D grids, efficiently supports spatiotemporal random access and streaming to the GPU in compressed formats. A novel low-bitrate codec, able to store into fixed-size pages a variable-rate approximation based on sparse coding with learned dictionaries, is exploited to meet stringent bandwidth constraints during time-critical operations, while a near-lossless representation is employed to support high-quality static frame rendering. A flexible high-speed GPU decoder and raycasting framework mixes and matches GPU kernels performing parallel object-space and image-space operations for seamless support, on fat and thin clients, of different exploration use cases, including animation and temporal browsing, dynamic exploration of single frames, and high-quality snapshots generated from near-lossless data.
The quality and performance of our approach are demonstrated on large data sets with thousands of multi-billion-voxel frames.

Item Frontmatter: STAG 2018: Smart Tools and Applications in Computer Graphics (The Eurographics Association, 2018) Signoroni, Alberto; Livesu, Marco; Agus, Marco; Livesu, Marco and Pintore, Gianni and Signoroni, Alberto

Item A Gaze Detection System for Neuropsychiatric Disorders Remote Diagnosis Support (The Eurographics Association, 2023) Cangelosi, Antonio; Antola, Gabriele; Iacono, Alberto Lo; Santamaria, Alfonso; Clerico, Marinella; Al-Thani, Dena; Agus, Marco; Calì, Corrado; Banterle, Francesco; Caggianese, Giuseppe; Capece, Nicola; Erra, Ugo; Lupinetti, Katia; Manfredi, Gilda
Accurate and early diagnosis of neuropsychiatric disorders, such as Autism Spectrum Disorders (ASD), is a significant challenge in clinical practice. This study explores the use of real-time gaze tracking as a tool for unbiased and quantitative analysis of eye gaze. The results of this study could support the diagnosis of such disorders and potentially serve as a tool in the field of rehabilitation. The proposed setup consists of an RGB-D camera embedded in latest-generation smartphones and a set of processing components for the analysis of recorded data related to patient interactivity. The proposed system is easy to use, requires little expertise, and achieves a high level of accuracy; it can therefore be used remotely (telemedicine) to simplify diagnosis and rehabilitation processes. We present initial findings that show how real-time gaze tracking can be a valuable tool for doctors: a non-invasive device that provides unbiased quantitative data to aid in early detection, monitoring, and treatment evaluation. This study's findings have significant implications for the advancement of ASD research.
The innovative approach proposed in this study has the potential to enhance diagnostic accuracy and improve patient outcomes.

Item HistoContours: a Framework for Visual Annotation of Histopathology Whole Slide Images (The Eurographics Association, 2022) Al-Thelaya, Khaled; Joad, Faaiz; Gilal, Nauman Ullah; Mifsud, William; Pintore, Giovanni; Gobbetti, Enrico; Agus, Marco; Schneider, Jens; Renata G. Raidou; Björn Sommer; Torsten W. Kuhlen; Michael Krone; Thomas Schultz; Hsiang-Yun Wu
We present an end-to-end framework for histopathological analysis of whole slide images (WSIs). Our framework uses deep learning-based localization and classification of cell nuclei, followed by spatial data aggregation to propagate the classes of sparsely distributed nuclei across the entire slide. We use YOLO (''You Only Look Once'') for localization instead of more costly segmentation approaches, and show that using HistAuGAN boosts its performance. YOLO finds bounding boxes around nuclei with good accuracy, but its classification accuracy can be improved by other methods. To this end, we extract patches around nuclei from the WSI and consider models from the SqueezeNet, ResNet, and EfficientNet families for classification. Where we do not achieve a clear separation between the highest and second-highest softmax activations of the classifier, we use YOLO's output as a secondary vote. The result is a sparse annotation of the WSI, which we turn dense by using kernel density estimation, yielding a full vector of per-pixel probabilities for each class of nucleus we consider. This allows us to visualize our results using both color-coding and isocontouring, reducing visual clutter.
Our novel nuclei-to-tissue coupling allows histopathologists to work at both the nucleus and the tissue level, a feature appreciated by domain experts in a qualitative user study.

Item Immersive Environment for Creating, Proofreading, and Exploring Skeletons of Nanometric Scale Neural Structures (The Eurographics Association, 2019) Boges, Daniya; Calì, Corrado; Magistretti, Pierre J.; Hadwiger, Markus; Sicat, Ronell; Agus, Marco; Agus, Marco and Corsini, Massimiliano and Pintus, Ruggero
We present a novel immersive environment for the exploratory analysis of nanoscale cellular reconstructions of rodent brain samples acquired through electron microscopy. The system is focused on medial axis representations (skeletons) of branched and tubular structures of brain cells, and it is specifically designed for: i) effective semi-automatic creation of skeletons from surface-based representations of cells and structures; ii) fast proofreading, i.e., correcting and editing semi-automatically constructed skeleton representations; and iii) useful exploration, i.e., measuring, comparing, and analyzing geometric features of cellular structures based on medial axis representations. The application runs in a standard PC-tethered virtual reality (VR) setup with a head-mounted display (HMD), controllers, and tracking sensors.
The system is currently used by neuroscientists to perform morphology studies on sparse reconstructions of glial cells and neurons extracted from a sample of the somatosensory cortex of a juvenile rat.

Item InShaDe: Invariant Shape Descriptors for Visual Analysis of Histology 2D Cellular and Nuclear Shapes (The Eurographics Association, 2020) Agus, Marco; Al-Thelaya, Khaled; Cali, Corrado; Boido, Marina M.; Yang, Yin; Pintore, Giovanni; Gobbetti, Enrico; Schneider, Jens; Kozlíková, Barbora and Krone, Michael and Smit, Noeska and Nieselt, Kay and Raidou, Renata Georgia
We present a shape processing framework for the visual exploration of cellular nuclear envelopes extracted from histology images. The framework is based on a novel shape descriptor of closed contours relying on a geodesically uniform resampling of discrete curves, which allows for discrete differential-geometry-based computation of unsigned curvature at vertices and edges. Our descriptor is, by design, invariant under translation, rotation, and parameterization. Moreover, it additionally offers optional uniform-scale invariance. The scale invariance is achieved by scaling features to z-scores, while invariance under parameterization shifts is achieved by applying elliptic Fourier analysis (EFA) to the resulting curvature vectors. These invariant shape descriptors provide an embedding into a fixed-dimensional feature space that can be utilized for various applications: (i) as input features for deep and shallow learning techniques; (ii) as input to dimension reduction schemes providing a visual reference for clustering collections of shapes.
The capabilities of the proposed framework are demonstrated in the context of visual analysis and unsupervised classification of histology images.

Item Interactive Volumetric Visual Analysis of Glycogen-derived Energy Absorption in Nanometric Brain Structures (The Eurographics Association and John Wiley & Sons Ltd., 2019) Agus, Marco; Calì, Corrado; Al-Awami, Ali K.; Gobbetti, Enrico; Magistretti, Pierre J.; Hadwiger, Markus; Gleicher, Michael and Viola, Ivan and Leitte, Heike
Digital acquisition and processing techniques are changing the way neuroscience investigation is carried out. Emerging applications range from statistical analysis on image stacks to complex connectomics visual analysis tools targeted at developing and testing hypotheses of brain development and activity. In this work, we focus on neuroenergetics, a field in which neuroscientists analyze nanoscale brain morphology and relate energy consumption to glucose storage in the form of glycogen granules. To facilitate the understanding of neuroenergetic mechanisms, we propose a novel customized pipeline for the visual analysis of nanometric-level reconstructions based on electron microscopy image data. Our framework supports analysis tasks by combining i) a scalable volume visualization architecture able to selectively render image stacks and corresponding labelled data, ii) a method for highlighting distance-based energy absorption probabilities in the form of glow maps, and iii) a hybrid connectivity-based and absorption-based interactive layout representation able to support queries for selective analysis of areas of interest and potential activity within the segmented datasets. This working pipeline is currently used in a variety of studies in the neuroenergetics domain.
Here, we discuss a test case in which the framework was successfully used by domain scientists for the analysis of aging effects on glycogen metabolism, extracting knowledge from a series of nanoscale brain stacks of rodent somatosensory cortex.

Item Mixed Reality for Orthopedic Elbow Surgery Training and Operating Room Applications: A Preliminary Analysis (The Eurographics Association, 2023) Cangelosi, Antonio; Riberi, Giacomo; Salvi, Massimo; Molinari, Filippo; Titolo, Paolo; Agus, Marco; Calì, Corrado; Banterle, Francesco; Caggianese, Giuseppe; Capece, Nicola; Erra, Ugo; Lupinetti, Katia; Manfredi, Gilda
The use of Mixed Reality in medicine is widely documented as a candidate to revolutionize surgical interventions. In this paper we present a system to simulate K-wire placement, a common orthopedic procedure used to stabilize fractures, dislocations, and other traumatic injuries. With the described system, it is possible to leverage Mixed Reality (MR) and advanced visualization techniques, applied to a surgical simulation phantom, to enhance surgical training and critical orthopedic surgical procedures. This analysis centers on evaluating the precision and proficiency of K-wire placement in an elbow surgical phantom, designed with 3D modeling software starting from a virtual 3D anatomical reference. By visually superimposing 3D reconstructions of internal structures and the target K-wire positioning on the physical model, we expect not only to improve the learning curve but also to establish a foundation for potential real-time surgical guidance in challenging clinical scenarios.
Performance is measured as the difference between the real K-wire placement and the target position; the quantitative measurements are then used to compare the risk of iatrogenic injury to nerves and vascular structures in MR-guided versus non-MR-guided simulated interventions.

Item SlowDeepFood: a Food Computing Framework for Regional Gastronomy (The Eurographics Association, 2021) Gilal, Nauman Ullah; Al-Thelaya, Khaled; Schneider, Jens; She, James; Agus, Marco; Frosini, Patrizio and Giorgi, Daniela and Melzi, Simone and Rodolà, Emanuele
Food computing recently emerged as a stand-alone research field in which artificial intelligence, deep learning, and data science methodologies are applied to the various stages of food production pipelines. Food computing may help end-users maintain healthy and nutritious diets by alerting them to high-caloric dishes and/or dishes containing allergens. A backbone for such applications, and a major challenge, is the automated recognition of food by means of computer vision. It is therefore no surprise that researchers have compiled various food data sets and paired them with well-performing deep learning architectures to perform said automatic classification. However, local cuisines are tied to specific geographic origins and are woefully underrepresented in most existing data sets. This leads to a clear gap when it comes to food computing on regional and traditional dishes. While one might argue that standardized data sets of world cuisine cover the majority of applications, such a stance would neglect systematic biases in data collection. It would also be at odds with recent initiatives such as SlowFood, which seeks to support local food traditions and to preserve local contributions to the global variation of food items.
To help preserve such local influences, we thus present a full end-to-end food computing framework that is able to: (i) semi-automatically create custom image data sets that represent traditional dishes; (ii) train custom classification models based on the EfficientNet family using transfer learning; (iii) deploy the resulting models in mobile applications for real-time inference on food images acquired through smartphone cameras. We not only assess the performance of the proposed deep learning architecture on standard food data sets (e.g., our model achieves 91.91% accuracy on ETH's Food-101), but also demonstrate the performance of our models on our own custom data sets comprising local cuisine, such as the Pizza-Styles data set and GCC-30. The former comprises 14 categories of pizza styles, whereas the latter contains 30 Middle Eastern dishes from the Gulf Cooperation Council members.

Item SPIDER: SPherical Indoor DEpth Renderer (The Eurographics Association, 2022) Tukur, Muhammad; Pintore, Giovanni; Gobbetti, Enrico; Schneider, Jens; Agus, Marco; Cabiddu, Daniela; Schneider, Teseo; Allegra, Dario; Catalano, Chiara Eva; Cherchi, Gianmarco; Scateni, Riccardo
Today's Extended Reality (XR) applications that call for Diminished Reality (DR) strategies to hide specific classes of objects are increasingly using 360° cameras, which can capture entire areas in a single picture. In this work, we present an interactive image editing and rendering system named SPIDER, which takes a spherical 360° indoor scene as input.
The system incorporates the output of deep learning models that abstract the segmentation and depth images of full and empty rooms, allowing users to perform interactive exploration and basic editing operations on the reconstructed indoor scene, namely: i) rendering of the scene in various modalities (point cloud, polygonal, wireframe); ii) refurnishing (transferring portions of rooms); iii) deferred shading through the use of precomputed normal maps. These kinds of scene editing and manipulation can be used to assess the inference of the deep learning models and enable several Extended Reality (XR) applications in areas such as furniture retail, interior design, and real estate. Moreover, the system can also be useful for data augmentation, art, design, and painting.

Item STAG 2019: Frontmatter (Eurographics Association, 2019) Agus, Marco; Corsini, Massimiliano; Pintus, Ruggero
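As an illustration of the kind of spherical depth processing described in the SPIDER entry above, the following minimal NumPy sketch back-projects an equirectangular depth map into a 3D point cloud (the basis of point-cloud rendering modes). The function name and the angular conventions are hypothetical choices for this sketch, not SPIDER's actual implementation:

```python
import numpy as np

def spherical_depth_to_points(depth):
    """Back-project an equirectangular depth map (H x W, metres)
    into a point cloud of shape (H*W, 3).

    Assumed convention: column u spans longitude [-pi, pi),
    row v spans latitude [pi/2, -pi/2] top to bottom;
    y points up, z points forward.
    """
    h, w = depth.shape
    # Pixel-centre angular coordinates.
    lon = (np.arange(w) + 0.5) / w * 2.0 * np.pi - np.pi
    lat = np.pi / 2.0 - (np.arange(h) + 0.5) / h * np.pi
    lon, lat = np.meshgrid(lon, lat)  # both H x W
    # Unit view directions on the sphere.
    dirs = np.stack([np.cos(lat) * np.sin(lon),
                     np.sin(lat),
                     np.cos(lat) * np.cos(lon)], axis=-1)
    # Scale each direction by its depth sample.
    return (depth[..., None] * dirs).reshape(-1, 3)

# A constant depth of 2 m yields points on a sphere of radius 2.
pts = spherical_depth_to_points(np.full((64, 128), 2.0))
```

In a full pipeline, the depth map would come from the deep learning inference stage, and the resulting points would be colored by the corresponding equirectangular RGB pixels.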