Browsing by Author "Ottley, Alvitta"
Now showing 1 - 10 of 10
Item: Follow The Clicks: Learning and Anticipating Mouse Interactions During Exploratory Data Analysis (The Eurographics Association and John Wiley & Sons Ltd., 2019)
Authors: Ottley, Alvitta; Garnett, Roman; Wan, Ran
Editors: Gleicher, Michael; Viola, Ivan; Leitte, Heike
Abstract: The goal of visual analytics is to create a symbiosis between human and computer by leveraging their unique strengths. While this model has demonstrated immense success, we have yet to realize the full potential of such a human-computer partnership. In a perfect collaborative mixed-initiative system, the computer must possess skills for learning and anticipating the users' needs. Addressing this gap, we propose a framework for inferring attention from passive observations of the user's clicks, thereby allowing accurate predictions of future events. We demonstrate this technique with a crime map and found that users' clicks appear in our prediction set 92%-97% of the time. Further analysis shows that we can achieve high prediction accuracy typically after three clicks. Altogether, we show that passive observations of interaction data can reveal valuable information that will allow the system to learn and anticipate future events.
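The prediction-set idea in the abstract above can be illustrated with a toy sketch. The Python snippet below is a hypothetical, minimal stand-in for such a framework, not the authors' actual model: it scores candidate map points by their proximity to recently observed clicks and returns the top-k candidates as a prediction set for the next click. The `predict_next_clicks` function, the `bandwidth` parameter, and the random point data are assumptions made purely for illustration.

```python
# Minimal, hypothetical sketch of a click-anticipation step (not the paper's model):
# score every candidate point by its proximity to the analyst's recent clicks and
# return the top-k candidates as a "prediction set" for the next click.
import numpy as np

def predict_next_clicks(candidates, past_clicks, bandwidth=0.05, k=20):
    """candidates: (n, 2) array of normalized map coordinates.
    past_clicks: (m, 2) array of coordinates the user has already clicked.
    Returns indices of the k candidates judged most likely to be clicked next."""
    if len(past_clicks) == 0:
        # No observations yet: fall back to an arbitrary (uniform) prediction set.
        return np.arange(min(k, len(candidates)))
    # Squared distances between every candidate and every past click.
    diffs = candidates[:, None, :] - past_clicks[None, :, :]
    sq_dists = np.sum(diffs ** 2, axis=-1)
    # Gaussian kernel: points near previous clicks receive higher attention scores.
    scores = np.exp(-sq_dists / (2 * bandwidth ** 2)).sum(axis=1)
    return np.argsort(scores)[::-1][:k]

# Example: 1,000 random map points, three observed clicks in the lower-left corner.
rng = np.random.default_rng(0)
points = rng.random((1000, 2))
clicks = np.array([[0.10, 0.10], [0.12, 0.08], [0.09, 0.11]])
print(predict_next_clicks(points, clicks, k=10))
```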
Item: A Grammar-Based Approach for Applying Visualization Taxonomies to Interaction Logs (The Eurographics Association and John Wiley & Sons Ltd., 2022)
Authors: Gathani, Sneha; Monadjemi, Shayan; Ottley, Alvitta; Battle, Leilani
Editors: Borgo, Rita; Marai, G. Elisabeta; Schreck, Tobias
Abstract: Researchers collect large amounts of user interaction data with the goal of mapping users' workflows and behaviors to their high-level motivations, intuitions, and goals. Although the visual analytics community has proposed numerous taxonomies to facilitate this mapping process, no formal methods exist for systematically applying these existing theories to user interaction logs. This paper seeks to bridge the gap between visualization task taxonomies and interaction log data by making the taxonomies more actionable for interaction log analysis. To achieve this, we leverage structural parallels between how people express themselves through interactions and through language by reformulating existing theories as regular grammars. We represent interactions as terminals within a regular grammar, similar to the role of individual words in a language, and patterns of interactions (non-terminals) as regular expressions over these terminals to capture common language patterns. To demonstrate our approach, we generate regular grammars for seven existing visualization taxonomies and develop code to apply them to three public interaction log datasets. In analyzing these regular grammars, we find that the low-level taxonomies (i.e., terminals) show mixed results in expressing the interaction log datasets, and that the high-level taxonomies (i.e., regular expressions) have limited expressiveness, due primarily to two challenges: inconsistencies in interaction log dataset granularity and structure, and under-expressiveness of certain terminals. Based on our findings, we suggest new research directions for the visualization community to augment existing taxonomies, develop new ones, and build better interaction log recording processes to facilitate the data-driven development of user behavior taxonomies.
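To make the terminal-and-regular-expression framing above concrete, here is a small hypothetical Python sketch. The event names and the `TERMINALS` and `PATTERNS` tables are invented for illustration and are not drawn from the paper: low-level log events are mapped to single-character terminal symbols, and higher-level behavior patterns are written as regular expressions over those terminals and matched against an encoded session.

```python
# Hypothetical sketch: map raw interaction-log events to terminal symbols,
# then match higher-level analysis patterns as regular expressions over them.
import re

# Invented terminal alphabet (one character per low-level interaction).
TERMINALS = {"hover": "h", "click": "c", "brush": "b", "filter": "f", "zoom": "z"}

# Invented high-level patterns: "filter then inspect" = a filter followed by
# one or more hovers/clicks; "drill-down" = zooming at least twice, then a click.
PATTERNS = {
    "filter_then_inspect": re.compile(r"f[hc]+"),
    "drill_down": re.compile(r"z{2,}c"),
}

def encode(log):
    """Turn a list of event names into a terminal string, skipping unknown events."""
    return "".join(TERMINALS.get(event, "") for event in log)

def label_session(log):
    """Return the names of all high-level patterns found anywhere in the session."""
    encoded = encode(log)
    return [name for name, pattern in PATTERNS.items() if pattern.search(encoded)]

session = ["hover", "filter", "click", "hover", "zoom", "zoom", "click"]
print(encode(session))         # -> "hfchzzc"
print(label_session(session))  # -> ['filter_then_inspect', 'drill_down']
```

Because the patterns are ordinary regular expressions, off-the-shelf regex tooling can scan large interaction logs for taxonomy-level behaviors once the terminal mapping is defined.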
Item: Guided By AI: Navigating Trust, Bias, and Data Exploration in AI-Guided Visual Analytics (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Authors: Ha, Sunwoo; Monadjemi, Shayan; Ottley, Alvitta
Editors: Aigner, Wolfgang; Archambault, Daniel; Bujack, Roxana
Abstract: The increasing integration of artificial intelligence (AI) in visual analytics (VA) tools raises vital questions about the behavior of users, their trust, and the potential for induced biases when they are provided with guidance during data exploration. We present an experiment in which participants engaged in a visual data exploration task while receiving intelligent suggestions supplemented with four different transparency levels. We also modulated the difficulty of the task (easy or hard) to simulate a more tedious scenario for the analyst. Our results indicate that participants were more inclined to accept suggestions when completing the more difficult task, despite the AI's lower suggestion accuracy. Moreover, the levels of transparency tested in this study did not significantly affect suggestion usage or participants' subjective trust ratings. Additionally, we observed that participants who utilized suggestions throughout the task explored a greater quantity and diversity of data points. We discuss these findings and their implications for improving the design and effectiveness of AI-guided VA tools.

Item: Human-Computer Collaboration for Visual Analytics: an Agent-based Framework (The Eurographics Association and John Wiley & Sons Ltd., 2023)
Authors: Monadjemi, Shayan; Guo, Mengtian; Gotz, David; Garnett, Roman; Ottley, Alvitta
Editors: Bujack, Roxana; Archambault, Daniel; Schreck, Tobias
Abstract: The visual analytics community has long aimed to understand users better and assist them in their analytic endeavors. As a result, numerous conceptual models of visual analytics aim to formalize common workflows, techniques, and goals leveraged by analysts. While many of the existing approaches are rich in detail, each is specific to a particular aspect of the visual analytic process. Furthermore, with an ever-expanding array of novel artificial intelligence techniques and advances in visual analytic settings, existing conceptual models may not provide enough expressivity to bridge the two fields. In this work, we propose an agent-based conceptual model for the visual analytic process by drawing parallels from the artificial intelligence literature. We present three examples from the visual analytics literature as case studies and examine them in detail using our framework. Our simple yet robust framework unifies the visual analytic pipeline to enable researchers and practitioners to reason about scenarios that are becoming increasingly prominent in the field, namely mixed-initiative, guided, and collaborative analysis. Furthermore, it allows us to characterize analysts, visual analytic settings, and guidance through the lenses of human agents, environments, and artificial agents, respectively.
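The agent-based framing above can be sketched as a minimal interface, assuming a deliberately simplified setting. The classes and method names below are invented for illustration and are not the paper's framework: the visual analytic setting is modeled as an environment whose state a human agent (the analyst) and an artificial agent (guidance) alternately observe and act upon.

```python
# Hypothetical sketch of the agent-based framing: a shared visual analytic
# "environment" whose state both a human agent and an artificial (guidance)
# agent observe and act upon in turn. All names are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class VAEnvironment:
    """The visual analytic setting: the dataset plus the current selection state."""
    data: list
    selected: set = field(default_factory=set)

    def observe(self):
        return {"n_points": len(self.data), "selected": set(self.selected)}

    def apply(self, action):
        kind, payload = action
        if kind == "select":
            self.selected.update(payload)
        elif kind == "clear":
            self.selected.clear()

class HumanAgent:
    """Stands in for the analyst; here it simply selects the next unexplored item."""
    def act(self, obs):
        remaining = [i for i in range(obs["n_points"]) if i not in obs["selected"]]
        return ("select", {remaining[0]}) if remaining else ("clear", set())

class ArtificialAgent:
    """Stands in for guidance; suggests neighbors of whatever is already selected."""
    def act(self, obs):
        neighbors = {i + 1 for i in obs["selected"] if i + 1 < obs["n_points"]}
        return ("select", neighbors - obs["selected"])

# A tiny mixed-initiative loop: human and artificial agents alternate actions.
env = VAEnvironment(data=list(range(10)))
human, guide = HumanAgent(), ArtificialAgent()
for _ in range(3):
    env.apply(human.act(env.observe()))
    env.apply(guide.act(env.observe()))
print(sorted(env.selected))  # -> [0, 1, 2, 3, 4, 5]
```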
Item: Inferential Tasks as an Evaluation Technique for Visualization (The Eurographics Association, 2022)
Authors: Suh, Ashley; Mosca, Ab; Robinson, Shannon; Pham, Quinn; Cashman, Dylan; Ottley, Alvitta; Chang, Remco
Editors: Agus, Marco; Aigner, Wolfgang; Hoellt, Thomas
Abstract: Designing suitable tasks for visualization evaluation remains challenging. Traditional evaluation techniques commonly rely on 'low-level' or 'open-ended' tasks to assess the efficacy of a proposed visualization; however, nontrivial trade-offs exist between the two. Low-level tasks allow for robust quantitative evaluations but are not indicative of the complex usage of a visualization. Open-ended tasks, while excellent for insight-based evaluations, are typically unstructured and require time-consuming interviews. Bridging this gap, we propose inferential tasks: a complementary task category based on inferential learning in psychology. Inferential tasks produce quantitative evaluation data in which users are prompted to form and validate their own findings with a visualization. We demonstrate the use of inferential tasks through a validation experiment on two well-known visualization tools.

Item: Investigating the Role of Locus of Control in Moderating Complex Analytic Workflows (The Eurographics Association, 2020)
Authors: Crouser, R. Jordan; Ottley, Alvitta; Swanson, Kendra; Montoly, Ananda
Editors: Kerren, Andreas; Garth, Christoph; Marai, G. Elisabeta
Abstract: Throughout the last decade, researchers have shown that the effectiveness of a visualization tool depends on the experience, personality, and cognitive abilities of the user. This work has also demonstrated that these individual traits can have significant implications for tools that support reasoning and decision-making with data. However, most studies in this area to date have involved only short-duration tasks performed by lay users. This short paper presents a preliminary analysis of a series of exercises with 22 trained intelligence analysts that seeks to deepen our understanding of how individual differences modulate expert behavior in complex analysis tasks.

Item: Linking and Layout: Exploring the Integration of Text and Visualization in Storytelling (The Eurographics Association and John Wiley & Sons Ltd., 2019)
Authors: Zhi, Qiyu; Ottley, Alvitta; Metoyer, Ronald
Editors: Gleicher, Michael; Viola, Ivan; Leitte, Heike
Abstract: Modern web technologies are enabling authors to create various forms of text-visualization integration for storytelling. This integration may shape a story's flow and thereby affect the reading experience. In this paper, we seek to understand two forms of text-visualization integration: (i) different spatial arrangements of text and visualization (layout), namely vertical and slideshow; and (ii) interactive linking of text and visualization (linking). Here, linking refers to a bidirectional interaction mode that explicitly highlights the explanatory visualization element when narrative text is selected, and vice versa. Through a crowdsourced study with 180 participants, we measured the effect of layout and linking on the degree to which users engage with the story (user engagement), their understanding of the story content (comprehension), and their ability to recall the story information (recall). We found that participants performed significantly better on comprehension tasks with the slideshow layout. Participant recall was better with the slideshow layout under conditions with linking than without. We also found that linking significantly increased user engagement, and that participants preferred linking and the slideshow layout. We also explored user reading behaviors under the different conditions.

Item: Mini-VLAT: A Short and Effective Measure of Visualization Literacy (The Eurographics Association and John Wiley & Sons Ltd., 2023)
Authors: Pandey, Saugat; Ottley, Alvitta
Editors: Bujack, Roxana; Archambault, Daniel; Schreck, Tobias
Abstract: The visualization community regards visualization literacy as a necessary skill. Yet, despite the recent increase in visualization literacy research by the education and visualization communities, we lack practical and time-effective instruments for the widespread measurement of people's comprehension and interpretation of visual designs. We present Mini-VLAT, a brief but practical visualization literacy test. The Mini-VLAT is a 12-item short form of the 53-item Visualization Literacy Assessment Test (VLAT). The Mini-VLAT is reliable (coefficient omega = 0.72) and strongly correlates with the VLAT. Five visualization experts validated the Mini-VLAT items, yielding an average content validity ratio (CVR) of 0.6. We further validate the Mini-VLAT by demonstrating a strong positive correlation between study participants' Mini-VLAT scores and their aptitude for learning an unfamiliar visualization using a Parallel Coordinate Plot test. Overall, the Mini-VLAT items showed a similar pattern of validity and reliability as the 53-item VLAT. The results show that the Mini-VLAT is a psychometrically sound and practical short measure of visualization literacy.

Item: Survey on Individual Differences in Visualization (The Eurographics Association and John Wiley & Sons Ltd., 2020)
Authors: Liu, Zhengliang; Crouser, R. Jordan; Ottley, Alvitta
Editors: Smit, Noeska; Oeltze-Jafra, Steffen; Wang, Bei
Abstract: Developments in data visualization research have enabled visualization systems to achieve great general usability and application across a variety of domains. These advancements have improved not only people's understanding of data, but also the general understanding of people themselves and how they interact with visualization systems. In particular, researchers have gradually come to recognize the deficiency of one-size-fits-all visualization interfaces, as well as the significance of individual differences in the use of data visualization systems. Unfortunately, the absence of comprehensive surveys of the existing literature impedes the development of this research. In this paper, we review the research perspectives, as well as the personality traits, cognitive abilities, visualizations, tasks, and measures investigated in the existing literature. We aim to provide a detailed summary of existing scholarship, produce evidence-based reviews, and spur future inquiry.

Item: Survey on the Analysis of User Interactions and Visualization Provenance (The Eurographics Association and John Wiley & Sons Ltd., 2020)
Authors: Xu, Kai; Ottley, Alvitta; Walchshofer, Conny; Streit, Marc; Chang, Remco; Wenskovitch, John
Editors: Smit, Noeska; Oeltze-Jafra, Steffen; Wang, Bei
Abstract: There is a fast-growing literature on provenance-related research, covering aspects such as its theoretical framework, use cases, and techniques for capturing, visualizing, and analyzing provenance data. As a result, there is an increasing need to identify and taxonomize the existing scholarship.
Such an organization of the research landscape will provide a complete picture of the current state of inquiry and identify knowledge gaps or possible avenues for further investigation. In this STAR, we aim to produce a comprehensive survey of work in the data visualization and visual analytics field that focuses on the analysis of user interaction and provenance data. We structure our survey around three primary questions: (1) WHY analyze provenance data, (2) WHAT provenance data to encode and how to encode it, and (3) HOW to analyze provenance data. A concluding discussion provides evidence-based guidelines and highlights concrete opportunities for future development in this emerging area.