Browsing by Author "Raidou, Renata Georgia"
Now showing 1 - 10 of 10
Item EuroVis 2021 Dirk Bartz Prize: Frontmatter (The Eurographics Association, 2021)
Oeltze-Jafra, Steffen; Raidou, Renata Georgia

Item Predicting, Analyzing and Communicating Outcomes of COVID-19 Hospitalizations with Medical Images and Clinical Data (The Eurographics Association, 2022)
Stritzel, Oliver; Raidou, Renata Georgia
Editors: Renata G. Raidou; Björn Sommer; Torsten W. Kuhlen; Michael Krone; Thomas Schultz; Hsiang-Yun Wu
We propose PACO, a visual analytics framework to support the prediction, analysis, and communication of COVID-19 hospitalization outcomes. Although several real-world data sets about COVID-19 are openly available, most current research focuses on the detection of the disease. Until now, no previous work has combined insights from medical image data with knowledge extracted from clinical data to predict the likelihood of an intensive care unit (ICU) visit, ventilation, or death. Moreover, the available literature has not yet focused on communicating such results to broader society. To support the prediction, analysis, and communication of the outcomes of COVID-19 hospitalizations on the basis of a publicly available data set comprising both electronic health data and medical image data [SSP∗21], we conduct the following three steps: (1) automated segmentation of the available X-ray images and processing of clinical data, (2) development of a model for the prediction of disease outcomes and a comparison to state-of-the-art prediction scores for both data sources, i.e., medical images and clinical data, and (3) communication of outcomes to two different groups (i.e., clinical experts and the general population) through interactive dashboards.
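As an illustration of the kind of outcome model step (2) describes, a minimal sketch follows: a logistic classifier trained on a mix of image-derived and clinical features. All feature names, values, and labels below are invented for illustration; PACO's actual model and prediction scores are those reported in the paper.

```python
import math

def train_logistic(features, labels, lr=0.5, epochs=500):
    # Batch gradient descent on the logistic loss.
    n = len(features[0])
    w, b = [0.0] * n, 0.0
    m = len(features)
    for _ in range(epochs):
        gw, gb = [0.0] * n, 0.0
        for x, y in zip(features, labels):
            p = 1.0 / (1.0 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b)))
            err = p - y  # gradient of the log-loss w.r.t. the logit
            gw = [gi + err * xi for gi, xi in zip(gw, x)]
            gb += err
        w = [wi - lr * gi / m for wi, gi in zip(w, gw)]
        b -= lr * gb / m
    return w, b

def predict(w, b, x):
    return 1.0 / (1.0 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b)))

# Invented rows: [X-ray opacity score, age / 100, oxygen saturation / 100]
X = [[0.9, 0.8, 0.82], [0.2, 0.3, 0.98], [0.7, 0.7, 0.85],
     [0.1, 0.4, 0.97], [0.8, 0.6, 0.80], [0.3, 0.2, 0.99]]
y = [1, 0, 1, 0, 1, 0]  # 1 = adverse outcome (e.g., ICU admission)
w, b = train_logistic(X, y)
risk = predict(w, b, [0.85, 0.75, 0.81])  # estimated risk for a new patient
```

In a real setting the image features would come from the segmentation step and the risk scores would feed the expert and lay dashboards.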
Preliminary results indicate that the prediction, analysis, and communication of hospitalization outcomes is a significant topic in the context of COVID-19 prevention.

Item Slice and Dice: A Physicalization Workflow for Anatomical Edutainment (The Eurographics Association and John Wiley & Sons Ltd., 2020)
Raidou, Renata Georgia; Gröller, Eduard; Wu, Hsiang-Yun
Editors: Eisemann, Elmar; Jacobson, Alec; Zhang, Fang-Lue
During the last decades, anatomy has become an interesting topic in education, even for laymen or schoolchildren. As medical imaging techniques become increasingly sophisticated, virtual anatomical education applications have emerged. Still, physical anatomical models are often preferred, as they facilitate the 3D localization of anatomical structures. Recently, data physicalizations (i.e., physical visualizations) have proven to be effective and engaging, sometimes even more so than their virtual counterparts. So far, medical data physicalizations have mainly involved 3D printing, which is still expensive and cumbersome. We investigate alternative forms of physicalization that use readily available technologies (home printers) and inexpensive materials (paper or semi-transparent films) to generate crafts for anatomical edutainment. To the best of our knowledge, this is the first computer-generated crafting approach within an anatomical edutainment context. Our approach follows a cost-effective, simple, and easy-to-employ workflow, resulting in assemblable data sculptures (i.e., semi-transparent sliceforms). It primarily supports volumetric data (such as CT or MRI), but mesh data can also be imported. An octree slices the imported volume, and an optimization step simplifies the slice configuration, proposing the optimal order for easy assembly. A packing algorithm then places the resulting slices, with their labels, annotations, and assembly instructions, on paper or transparent film of a user-selected size, to be printed, assembled into a sliceform, and explored.
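The packing step can be made concrete with a small first-fit shelf packer, a generic stand-in for the paper's actual packing algorithm; the slice footprints below are made up.

```python
def shelf_pack(slices, page_w, page_h):
    """First-fit shelf packing: place (w, h) slice rectangles onto pages.

    Returns a list of pages; each page is a list of (slice_index, x, y)
    placements. A simple illustrative stand-in, not the paper's algorithm.
    """
    pages, x, y, shelf_h = [[]], 0.0, 0.0, 0.0
    for i, (w, h) in enumerate(slices):
        if x + w > page_w:             # slice does not fit: open a new shelf
            x, y = 0.0, y + shelf_h
            shelf_h = 0.0
        if y + h > page_h:             # shelf does not fit: open a new page
            pages.append([])
            x, y, shelf_h = 0.0, 0.0, 0.0
        pages[-1].append((i, x, y))
        x += w
        shelf_h = max(shelf_h, h)
    return pages

# Invented slice footprints (cm), as if produced by the octree subdivision
slices = [(8, 6), (8, 6), (6, 4), (6, 4), (10, 7), (5, 5)]
pages = shelf_pack(slices, page_w=21.0, page_h=29.7)  # A4 sheet
```

Each placement would then be annotated with its label and assembly order before printing.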
We conducted two user studies to assess our approach, demonstrating that it is an initial positive step towards the successful creation of interactive and engaging anatomical physicalizations.

Item Smoke Surfaces of 4D Biological Dynamical Systems (The Eurographics Association, 2023)
Schindler, Marwin; Amirkhanov, Aleksandr; Raidou, Renata Georgia
Editors: Hansen, Christian; Procter, James; Renata G. Raidou; Jönsson, Daniel; Höllt, Thomas
To study biological phenomena, mathematical biologists often employ modeling with ordinary differential equations. A system of ordinary differential equations that describes the state of a phenomenon as a moving point in space across time is known as a dynamical system. This moving point emerges from the initial condition of the system and is referred to as a trajectory that ''lives'' in phase space, i.e., a space that defines all possible states of the system. In our previous work, we proposed ManyLands [AKS*19], an approach to explore and analyze typical trajectories of 4D dynamical systems using smooth, animated transitions to navigate through phase space. However, in ManyLands the comparison of multiple trajectories emerging from different initial conditions does not scale well, due to overdrawing that clutters the view. We extend ManyLands to support the comparative visualization of multiple trajectories of a 4D dynamical system using smoke surfaces. In this way, the sensitivity of the dynamical system to its initialization can be investigated. The 4D smoke surfaces can be further projected onto lower-dimensional subspaces (3D and 2D) with seamless animated transitions.
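The underlying construction, tracing trajectories from a one-parameter family of initial conditions through 4D phase space and projecting them to a lower-dimensional view, can be sketched as follows. The toy system here is a pair of uncoupled harmonic oscillators, an invented stand-in for the biological models the paper actually uses.

```python
def rk4_step(f, s, dt):
    # One classical 4th-order Runge-Kutta step for a state vector s.
    k1 = f(s)
    k2 = f([si + 0.5 * dt * ki for si, ki in zip(s, k1)])
    k3 = f([si + 0.5 * dt * ki for si, ki in zip(s, k2)])
    k4 = f([si + dt * ki for si, ki in zip(s, k3)])
    return [si + dt / 6.0 * (a + 2 * b + 2 * c + d)
            for si, a, b, c, d in zip(s, k1, k2, k3, k4)]

def f(s):
    # Toy 4D system: two uncoupled oscillators (x, y) and (z, w).
    x, y, z, w = s
    return [y, -x, w, -z]

def trace(s0, steps=200, dt=0.05):
    traj = [s0]
    for _ in range(steps):
        traj.append(rk4_step(f, traj[-1], dt))
    return traj

# A family of initial conditions sweeps out a "smoke surface":
# surface[i][j] is the j-th point of the i-th trajectory.
surface = [trace([1.0 + 0.1 * i, 0.0, 0.5, 0.0]) for i in range(5)]
# Project every 4D point onto the (x, z) subspace for a 2D view.
projected = [[(p[0], p[2]) for p in traj] for traj in surface]
```

Rendering the quads between neighboring trajectories, rather than the individual curves, is what avoids the overdraw problem described above.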
We showcase the capabilities of our approach using two 4D dynamical systems from biology [Gol11, KJS06] and a 4D dynamical system exhibiting chaotic behavior [Bou15].

Item Understanding the Impact of Statistical and Machine Learning Choices on Predictive Models for Radiotherapy (The Eurographics Association, 2022)
Böröndy, Ádám; Furmanová, Katarína; Raidou, Renata Georgia
Editors: Renata G. Raidou; Björn Sommer; Torsten W. Kuhlen; Michael Krone; Thomas Schultz; Hsiang-Yun Wu
During radiotherapy (RT) planning, an accurate description of the location and shape of the pelvic organs is a critical factor for the successful treatment of the patient. Yet, during treatment, the pelvic anatomy may differ significantly from the planning phase. A series of recent publications, such as PREVIS [FMCM∗21], have examined alternative approaches to analyzing and predicting the pelvic organ variability of individual patients. These approaches are based on a combination of several statistical and machine learning methods, which have not been thoroughly and quantitatively evaluated within the scope of pelvic anatomical variability. Several of their design decisions could have an impact on the outcome of the predictive model. The goal of this work is to assess the impact of alternative choices, focusing mainly on the two key aspects of shape description and clustering, to generate better predictions for new patients. The results of our assessment indicate that resolution-based descriptors provide more accurate and reliable organ representations than state-of-the-art approaches, while different clustering settings (distance metric and linkage) yield only slightly different clusters. Different clustering methods provide comparable results, although their results start to deviate as more shape variability is considered.
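The "clustering settings" under comparison (distance metric and linkage) can be illustrated with a small agglomerative-clustering sketch. The implementation is generic and the 2D shape descriptors are invented; the paper's own descriptors and evaluation are far richer.

```python
def agglomerative(points, k, metric, linkage="complete"):
    # Bottom-up clustering with a pluggable distance metric and linkage.
    clusters = [[i] for i in range(len(points))]

    def cluster_dist(a, b):
        d = [metric(points[i], points[j]) for i in a for j in b]
        return max(d) if linkage == "complete" else min(d)  # complete/single

    while len(clusters) > k:
        # Merge the closest pair of clusters under the chosen settings.
        i, j = min(
            ((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
            key=lambda ij: cluster_dist(clusters[ij[0]], clusters[ij[1]]),
        )
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j]
    return clusters

def euclidean(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

def manhattan(p, q):
    return sum(abs(a - b) for a, b in zip(p, q))

# Invented 2D shape descriptors for six organ instances (two clear groups)
descs = [(0.0, 0.0), (0.1, 0.2), (0.2, 0.1), (5.0, 5.0), (5.1, 5.2), (4.9, 5.1)]
c_eucl = agglomerative(descs, k=2, metric=euclidean)
c_manh = agglomerative(descs, k=2, metric=manhattan)
```

On such well-separated data, both metrics and both linkages recover the same grouping, mirroring the finding that different settings yield only slightly different clusters.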
These results are valuable for understanding the impact of statistical and machine learning choices on the outcomes of predictive models for anatomical variability.

Item VCBM 2020: Frontmatter (The Eurographics Association, 2020)
Kozlíková, Barbora; Krone, Michael; Smit, Noeska; Nieselt, Kay; Raidou, Renata Georgia

Item Visual Analytics for the Integrated Exploration and Sensemaking of Cancer Cohort Radiogenomics and Clinical Information (The Eurographics Association, 2023)
El-Sherbiny, Sarah; Ning, Jing; Hantusch, Brigitte; Kenner, Lukas; Raidou, Renata Georgia
Editors: Hansen, Christian; Procter, James; Renata G. Raidou; Jönsson, Daniel; Höllt, Thomas
We present a visual analytics (VA) framework for the comprehensive exploration and integrated analysis of radiogenomic and clinical data from a cancer cohort. Our framework aims to support the workflow of cancer experts and biomedical data scientists as they investigate cancer mechanisms. Challenges in the analysis of radiogenomic data, such as the heterogeneity and complexity of the data sets, hinder the exploration and sensemaking of the available patient information. These challenges can be addressed by the field of VA, but approaches that bridge radiogenomic and clinical data in an interactive and flexible visual framework are still lacking. Our approach enables the integrated exploration and joint analysis of radiogenomic data and clinical information for knowledge discovery and hypothesis assessment through a flexible VA dashboard. We follow a user-centered design strategy, where we integrate domain knowledge into a semi-automated analytical workflow based on unsupervised machine learning to identify patterns in the patient data provided by our collaborating domain experts. An interactive visual interface further supports the exploratory and analytical process in both a free and a hypothesis-driven manner.
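As a sketch of the kind of unsupervised pattern-finding step such a workflow might rest on, here is plain k-means over invented patient feature vectors; the framework's actual models are not specified in the abstract.

```python
import random

def kmeans(points, k, iters=50, seed=0):
    # Lloyd's algorithm with a fixed iteration budget; points are tuples.
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    groups = []
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            # Assign each point to its nearest center (squared Euclidean).
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            groups[i].append(p)
        # Recompute centers; keep the old center if a group went empty.
        centers = [tuple(sum(col) / len(g) for col in zip(*g)) if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers, groups

# Invented patient vectors: (imaging-derived feature, normalized clinical value)
pts = [(0.0, 0.0), (0.1, 0.1), (0.2, 0.0), (5.0, 5.0), (5.1, 5.1), (4.9, 5.0)]
centers, groups = kmeans(pts, k=2)
```

The resulting patient groups would then be surfaced in the dashboard for experts to inspect and relate back to clinical variables.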
We evaluate the unsupervised machine learning models through similarity measures and assess the usability of the framework through use cases conducted with cancer experts. Expert feedback indicates that our framework provides suitable and flexible means for gaining insights into large and heterogeneous cancer cohort data, while also being easily extensible to other data sets.

Item Visual Analytics to Assess Deep Learning Models for Cross-Modal Brain Tumor Segmentation (The Eurographics Association, 2022)
Magg, Caroline; Raidou, Renata Georgia
Editors: Renata G. Raidou; Björn Sommer; Torsten W. Kuhlen; Michael Krone; Thomas Schultz; Hsiang-Yun Wu
Accurate delineations of anatomically relevant structures are required for cancer treatment planning. Despite its accuracy, manual labeling is time-consuming and tedious; hence, the potential of automatic approaches, such as deep learning models, is being investigated. A promising trend in deep learning tumor segmentation is cross-modal domain adaptation, where knowledge learned on one source distribution (e.g., one modality) is transferred to another distribution. Yet, artificial intelligence (AI) engineers developing such models need to thoroughly assess the robustness of their approaches, which demands a deep understanding of the models' behavior. In this paper, we propose a web-based visual analytics application that supports the visual assessment of the predictive performance of deep learning models built for cross-modal brain tumor segmentation. Our application supports the multi-level comparison of multiple models, drilling from entire cohorts of patients down to individual slices; facilitates the analysis of the relationship between image-derived features and model performance; and enables the comparative exploration of the predictive outcomes of the models. All this is realized in an interactive interface with multiple linked views.
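A typical per-slice performance score that such an application could aggregate is the Dice coefficient between a predicted and a ground-truth mask. The masks below are invented and deliberately chosen to mirror the finding that small tumors tend to score worse.

```python
def dice(pred, truth):
    # Dice coefficient between two binary masks given as flat 0/1 lists:
    # 2 * |intersection| / (|pred| + |truth|), with 1.0 for two empty masks.
    inter = sum(p * t for p, t in zip(pred, truth))
    size = sum(pred) + sum(truth)
    return 2.0 * inter / size if size else 1.0

# Invented slices: a large tumor segmented well, a small tumor segmented badly
truth_large = [1] * 80 + [0] * 20
pred_large  = [1] * 75 + [0] * 25
truth_small = [1] * 4 + [0] * 96
pred_small  = [0] * 2 + [1] * 4 + [0] * 94
score_large = dice(pred_large, truth_large)
score_small = dice(pred_small, truth_small)
```

For the small tumor, a misalignment of only two pixels already halves the score, which is why per-slice drill-down matters when comparing models.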
We present three use cases, analyzing differences between deep learning segmentation approaches, the influence of tumor size, and the relationship of other data set characteristics to performance. From these scenarios, we discovered that tumor size, i.e., both the volume in 3D data and the pixel count in 2D data, strongly affects model performance, as samples with small tumors often yield poorer results. Our approach is able to reveal the best algorithms and their optimal configurations, supporting AI engineers in gaining more insights for the development of their segmentation models.

Item The Vitruvian Baby: Interactive Reformation of Fetal Ultrasound Data to a T-Position (The Eurographics Association, 2019)
Mörth, Eric; Raidou, Renata Georgia; Smit, Noeska; Viola, Ivan
Editors: Madeiras Pereira, João; Raidou, Renata Georgia
Three-dimensional (3D) ultrasound is commonly used in prenatal screening, because it provides insight into the shape as well as the organs of the fetus. Currently, gynecologists take standardized measurements of the fetus and check for abnormalities by analyzing the data in a 2D slice view. The fetal pose may complicate taking precise measurements in such a view. Analyzing the data in a 3D view would enable the viewer to better distinguish between artefacts and representative information. Standardization in medical imaging aims to make data comparable between different investigations and patients; it is already used in other medical applications, for example in magnetic resonance imaging (MRI). With this work, we introduce a novel approach to provide a standardization method for 3D ultrasound screenings of fetuses. The approach, called ''The Vitruvian Baby'', consists of six steps. The input is the data of a 3D ultrasound screening of a fetus, and the output shows the fetus in a standardized T-pose in which measurements can be made.
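One plausible way to express measurement precision against a gold standard is percentage agreement; the formula and the numbers below are illustrative assumptions, not taken from the paper.

```python
def precision_pct(measured, gold):
    # Agreement with the gold standard as a percentage (100% = exact match).
    # Assumed definition for illustration: 100 * (1 - relative error).
    return 100.0 * (1.0 - abs(measured - gold) / gold)

# Invented finger-to-finger span (mm): measured on the reformed T-pose
# model vs. the gold-standard measurement.
p = precision_pct(measured=182.2, gold=200.0)
```

A table of such percentages per measurement type (span, head-to-toe, etc.) is what allows the reformation quality to be compared across fetuses.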
Compared to the gold standard, the precision of the standardized measurements is 91.08% for the finger-to-finger span and 94.05% for the head-to-toe measurement.

Item The Vitruvian Baby: Interactive Reformation of Fetal Ultrasound Data to a T-Position (The Eurographics Association, 2019)
Mörth, Eric; Raidou, Renata Georgia; Viola, Ivan; Smit, Noeska
Editors: Kozlíková, Barbora; Linsen, Lars; Vázquez, Pere-Pau; Lawonn, Kai; Raidou, Renata Georgia
Three-dimensional (3D) ultrasound imaging and visualization is often used in medical diagnostics, especially in prenatal screening. Screening the development of the fetus is important to assess possible complications early on. State-of-the-art approaches involve taking standardized measurements and comparing them with standardized tables. The measurements are taken in a 2D slice view, where precise measurements can be difficult to acquire due to the fetal pose. Performing the analysis in a 3D view would enable the viewer to better discriminate between artefacts and representative information. Additionally, making data comparable between different investigations and patients is a goal in medical imaging and is often achieved through standardization. With this paper, we introduce a novel approach to provide a standardization method for 3D ultrasound fetus screenings. Our approach is called ''The Vitruvian Baby'' and incorporates a complete pipeline for standardized measuring in fetal 3D ultrasound. The input of the method is a 3D ultrasound screening of a fetus, and the output is the fetus in a standardized T-pose. In this pose, taking measurements is easier, and comparison of different fetuses is possible. In addition to the transformation of the 3D ultrasound data, we create an abstract representation of the fetus based on accurate measurements. We demonstrate the accuracy of our approach on simulated data where the ground truth is known.