Browsing by Author "Cashman, Dylan"
Now showing 1 - 2 of 2
Item: Inferential Tasks as an Evaluation Technique for Visualization (The Eurographics Association, 2022)
Authors: Suh, Ashley; Mosca, Ab; Robinson, Shannon; Pham, Quinn; Cashman, Dylan; Ottley, Alvitta; Chang, Remco
Editors: Agus, Marco; Aigner, Wolfgang; Hoellt, Thomas
Abstract: Designing suitable tasks for visualization evaluation remains challenging. Traditional evaluation techniques commonly rely on 'low-level' or 'open-ended' tasks to assess the efficacy of a proposed visualization; however, nontrivial trade-offs exist between the two. Low-level tasks allow for robust quantitative evaluations, but are not indicative of the complex usage of a visualization. Open-ended tasks, while excellent for insight-based evaluations, are typically unstructured and require time-consuming interviews. Bridging this gap, we propose inferential tasks: a complementary task category based on inferential learning in psychology. Inferential tasks produce quantitative evaluation data: users are prompted to form and validate their own findings with a visualization. We demonstrate the use of inferential tasks through a validation experiment on two well-known visualization tools.

Item: A User-based Visual Analytics Workflow for Exploratory Model Analysis (The Eurographics Association and John Wiley & Sons Ltd., 2019)
Authors: Cashman, Dylan; Humayoun, Shah Rukh; Heimerl, Florian; Park, Kendall; Das, Subhajit; Thompson, John; Saket, Bahador; Mosca, Abigail; Stasko, John; Endert, Alex; Gleicher, Michael; Chang, Remco
Editors: Gleicher, Michael; Viola, Ivan; Leitte, Heike
Abstract: Many visual analytics systems allow users to interact with machine learning models towards the goals of data exploration and insight generation on a given dataset. However, in some situations, insights may be less important than the production of an accurate predictive model for future use. In that case, users are more interested in generating diverse and robust predictive models, verifying their performance on holdout data, and selecting the most suitable model for their usage scenario. In this paper, we consider the concept of Exploratory Model Analysis (EMA), which is defined as the process of discovering and selecting relevant models that can be used to make predictions on a data source. We delineate the differences between EMA and the well-known term exploratory data analysis in terms of the desired outcome of the analytic process: insights into the data or a set of deployable models. The contributions of this work are a visual analytics system workflow for EMA, a user study, and two use cases validating the effectiveness of the workflow. We found that our system workflow enabled users to generate complex models, to assess them for various qualities, and to select the most relevant model for their task.
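To make the EMA loop described in the second abstract concrete, the following is a minimal Python sketch (using scikit-learn) of that process: train a diverse set of candidate models, verify each on holdout data, and select the most suitable one. The dataset and model families here are illustrative placeholders, not the paper's actual system or interface.

    # Minimal sketch of the Exploratory Model Analysis (EMA) loop:
    # discover candidate models, verify them on holdout data, and
    # select one for deployment. Placeholder data and model choices.
    from sklearn.datasets import load_breast_cancer
    from sklearn.model_selection import train_test_split
    from sklearn.linear_model import LogisticRegression
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.svm import SVC

    X, y = load_breast_cancer(return_X_y=True)
    # Reserve holdout data for verification, as the workflow prescribes.
    X_train, X_hold, y_train, y_hold = train_test_split(
        X, y, test_size=0.25, random_state=0)

    # "Discover" a diverse set of candidate model families.
    candidates = {
        "logistic_regression": LogisticRegression(max_iter=5000),
        "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
        "svm_rbf": SVC(kernel="rbf"),
    }

    # Verify each candidate on the holdout set.
    scores = {}
    for name, model in candidates.items():
        model.fit(X_train, y_train)
        scores[name] = model.score(X_hold, y_hold)
        print(f"{name}: holdout accuracy = {scores[name]:.3f}")

    # Select the most suitable model. Here that means highest holdout
    # accuracy; a real usage scenario might also weigh robustness,
    # interpretability, or latency, as the abstract suggests.
    best = max(scores, key=scores.get)
    print("selected model:", best)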