Browsing by Author "Schneider, Jens"
Now showing 1 - 4 of 4
Item
HistoContours: a Framework for Visual Annotation of Histopathology Whole Slide Images
(The Eurographics Association, 2022) Al-Thelaya, Khaled; Joad, Faaiz; Gilal, Nauman Ullah; Mifsud, William; Pintore, Giovanni; Gobbetti, Enrico; Agus, Marco; Schneider, Jens; Renata G. Raidou; Björn Sommer; Torsten W. Kuhlen; Michael Krone; Thomas Schultz; Hsiang-Yun Wu
We present an end-to-end framework for histopathological analysis of whole slide images (WSIs). Our framework uses deep learning-based localization and classification of cell nuclei, followed by spatial data aggregation to propagate the classes of sparsely distributed nuclei across the entire slide. We use YOLO ("You Only Look Once") for localization instead of more costly segmentation approaches and show that using HistAuGAN boosts its performance. YOLO finds bounding boxes around nuclei with good accuracy, but its classification accuracy can be improved by other methods. To this end, we extract patches around nuclei from the WSI and consider models from the SqueezeNet, ResNet, and EfficientNet families for classification. Where we do not achieve a clear separation between the highest and second-highest softmax activations of the classifier, we use YOLO's output as a secondary vote. The result is a sparse annotation of the WSI, which we densify using kernel density estimation, yielding a full vector of per-pixel probabilities for each class of nucleus we consider. This allows us to visualize our results using both color-coding and isocontouring, reducing visual clutter. Our novel nuclei-to-tissue coupling allows histopathologists to work at both the nucleus and the tissue level, a feature appreciated by domain experts in a qualitative user study.
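The sparse-to-dense step described in the abstract can be illustrated with a minimal sketch: assuming nuclei have already been localized and classified, each class's nucleus centers feed a kernel density estimate, and the per-class densities are normalized pixel-wise into class probabilities. The function name, grid resolution, and default bandwidth below are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch: densify sparse per-nucleus class labels into
# per-pixel class probabilities via kernel density estimation (KDE).
import numpy as np
from scipy.stats import gaussian_kde

def dense_class_probabilities(nuclei_xy, nuclei_class, num_classes, width, height):
    """nuclei_xy: (N, 2) nucleus centers in pixels; nuclei_class: (N,) integer labels."""
    ys, xs = np.mgrid[0:height, 0:width]
    grid = np.vstack([xs.ravel(), ys.ravel()])            # (2, W*H) evaluation points
    densities = np.zeros((num_classes, height, width))
    for c in range(num_classes):
        pts = nuclei_xy[nuclei_class == c]
        if len(pts) < 3:                                   # too few points for a stable KDE
            continue
        kde = gaussian_kde(pts.T)                          # bandwidth via Scott's rule
        densities[c] = kde(grid).reshape(height, width)
    total = densities.sum(axis=0, keepdims=True)
    total[total == 0.0] = 1.0                              # avoid division by zero
    return densities / total                               # per-pixel class probabilities
```

For gigapixel WSIs one would evaluate such a KDE on a heavily downsampled grid and choose the bandwidth explicitly rather than relying on Scott's rule; the paper's exact aggregation parameters are not reproduced here.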
Item
InShaDe: Invariant Shape Descriptors for Visual Analysis of Histology 2D Cellular and Nuclear Shapes
(The Eurographics Association, 2020) Agus, Marco; Al-Thelaya, Khaled; Cali, Corrado; Boido, Marina M.; Yang, Yin; Pintore, Giovanni; Gobbetti, Enrico; Schneider, Jens; Kozlíková, Barbora; Krone, Michael; Smit, Noeska; Nieselt, Kay; Raidou, Renata Georgia
We present a shape processing framework for visual exploration of cellular nuclear envelopes extracted from histology images. The framework is based on a novel shape descriptor of closed contours relying on a geodesically uniform resampling of discrete curves, which allows for discrete differential-geometry-based computation of unsigned curvature at vertices and edges. Our descriptor is, by design, invariant under translation, rotation, and parameterization, and it additionally offers optional invariance under uniform scaling. The optional scale invariance is achieved by scaling features to z-scores, while invariance under parameterization shifts is achieved by applying elliptic Fourier analysis (EFA) to the resulting curvature vectors. These invariant shape descriptors provide an embedding into a fixed-dimensional feature space that can be utilized for various applications: (i) as input features for deep and shallow learning techniques; (ii) as input for dimension-reduction schemes that provide a visual reference for clustering collections of shapes. The capabilities of the proposed framework are demonstrated in the context of visual analysis and unsupervised classification of histology images.

Item
SlowDeepFood: a Food Computing Framework for Regional Gastronomy
(The Eurographics Association, 2021) Gilal, Nauman Ullah; Al-Thelaya, Khaled; Schneider, Jens; She, James; Agus, Marco; Frosini, Patrizio; Giorgi, Daniela; Melzi, Simone; Rodolà, Emanuele
Food computing recently emerged as a stand-alone research field, in which artificial intelligence, deep learning, and data science methodologies are applied to the various stages of food production pipelines. Food computing may help end users maintain healthy and nutritious diets by alerting them to high-caloric dishes and/or dishes containing allergens. A backbone for such applications, and a major challenge, is the automated recognition of food by means of computer vision. It is therefore no surprise that researchers have compiled various food data sets and paired them with well-performing deep learning architectures to perform said automatic classification. However, local cuisines are tied to specific geographic origins and are woefully underrepresented in most existing data sets. This leads to a clear gap when it comes to food computing on regional and traditional dishes. While one might argue that standardized data sets of world cuisine cover the majority of applications, such a stance would neglect systematic biases in data collection. It would also be at odds with recent initiatives such as SlowFood, which seek to support local food traditions and to preserve local contributions to the global variation of food items. To help preserve such local influences, we thus present a full end-to-end food computing framework that is able to: (i) create custom image data sets semi-automatically that represent traditional dishes; (ii) train custom classification models based on the EfficientNet family using transfer learning; (iii) deploy the resulting models in mobile applications for real-time inference on food images acquired through smartphone cameras. We not only assess the performance of the proposed deep learning architecture on standard food data sets (e.g., our model achieves 91.91% accuracy on ETH's Food-101), but also demonstrate the performance of our models on our own custom data sets comprising local cuisine, such as the Pizza-Styles data set and GCC-30. The former comprises 14 categories of pizza styles, whereas the latter contains 30 Middle Eastern dishes from the Gulf Cooperation Council member states.
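Step (ii) of the SlowDeepFood pipeline, transfer learning on an EfficientNet backbone, can be sketched as follows using torchvision's EfficientNet-B0 as a stand-in for the family; the data-set path, class count, and hyperparameters are placeholders rather than values reported in the paper.

```python
# Hypothetical sketch: fine-tune a pretrained EfficientNet-B0 on a custom food data set.
import torch
import torch.nn as nn
from torchvision import datasets, models

weights = models.EfficientNet_B0_Weights.IMAGENET1K_V1
model = models.efficientnet_b0(weights=weights)           # ImageNet-pretrained backbone
model.classifier[1] = nn.Linear(model.classifier[1].in_features, 14)  # e.g. 14 pizza styles

# Placeholder folder layout: one sub-directory per dish category.
dataset = datasets.ImageFolder("pizza_styles/train", transform=weights.transforms())
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):                                     # placeholder epoch count
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

Freezing the backbone for a few warm-up epochs before full fine-tuning is a common variant; the paper's exact training schedule is not reproduced here.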
Item
SPIDER: SPherical Indoor DEpth Renderer
(The Eurographics Association, 2022) Tukur, Muhammad; Pintore, Giovanni; Gobbetti, Enrico; Schneider, Jens; Agus, Marco; Cabiddu, Daniela; Schneider, Teseo; Allegra, Dario; Catalano, Chiara Eva; Cherchi, Gianmarco; Scateni, Riccardo
Today's Extended Reality (XR) applications that call for Diminished Reality (DR) strategies to hide specific classes of objects increasingly use 360° cameras, which can capture entire areas in a single picture. In this work, we present an interactive image editing and rendering system named SPIDER that takes a spherical 360° indoor scene as input. The system incorporates the output of deep learning models that abstract the segmentation and depth images of full and empty rooms, allowing users to perform interactive exploration and basic editing operations on the reconstructed indoor scene, namely: (i) rendering of the scene in various modalities (point cloud, polygonal, wireframe); (ii) refurnishing (transferring portions of rooms); (iii) deferred shading using precomputed normal maps. These scene editing and manipulation capabilities can be used to assess the inference of deep learning models and to enable several Extended Reality (XR) applications in areas such as furniture retail, interior design, and real estate. Moreover, the system can also be useful for data augmentation, art, design, and painting.
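As a rough illustration of the point-cloud rendering modality listed above, the sketch below back-projects an equirectangular depth panorama to 3D points; the function name, coordinate conventions, and the assumption of radial depth values are illustrative and not taken from SPIDER's implementation.

```python
# Hypothetical sketch: back-project an equirectangular depth map to a 3D point cloud.
import numpy as np

def equirectangular_depth_to_points(depth):
    """depth: (H, W) array of radial distances sampled on a spherical panorama."""
    h, w = depth.shape
    # Longitude spans [-pi, pi), latitude spans [pi/2, -pi/2] from top to bottom.
    lon = (np.arange(w) + 0.5) / w * 2.0 * np.pi - np.pi
    lat = np.pi / 2.0 - (np.arange(h) + 0.5) / h * np.pi
    lon, lat = np.meshgrid(lon, lat)
    # Unit viewing directions on the sphere, scaled by per-pixel depth.
    x = np.cos(lat) * np.sin(lon)
    y = np.sin(lat)
    z = np.cos(lat) * np.cos(lon)
    dirs = np.stack([x, y, z], axis=-1)                   # (H, W, 3)
    return (dirs * depth[..., None]).reshape(-1, 3)       # (H*W, 3) point cloud
```

The resulting points can be fed to any point-cloud viewer; the polygonal and wireframe modalities would additionally require triangulating the panorama grid.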