Machine Learning Methods in Visualisation for Big Data
Browsing Machine Learning Methods in Visualisation for Big Data by Subject "Artificial intelligence"
Item: Controllably Sparse Perturbations of Robust Classifiers for Explaining Predictions and Probing Learned Concepts (The Eurographics Association, 2021)
Authors: Roberts, Jay; Tsiligkaridis, Theodoros
Editors: Archambault, Daniel; Nabney, Ian; Peltonen, Jaakko

Explaining the predictions of a deep neural network (DNN) in image classification is an active area of research. Many methods focus on localizing pixels, or groups of pixels, that maximize a relevance metric for the prediction. Others create local "proxy" explainers that account for an individual prediction of a model. We explore "why" a model made a prediction by perturbing inputs to robust classifiers and interpreting the semantically meaningful results. For such an explanation to be useful for humans, it should be sparse; however, generating sparse perturbations can be computationally expensive and infeasible on high-resolution data. Here we introduce controllably sparse explanations that can be efficiently generated on higher-resolution data to provide improved counterfactual explanations. Further, we use these controllably sparse explanations to probe what the robust classifier has learned. These explanations could provide insight for model developers as well as assist in detecting dataset bias.

Item: ModelSpeX: Model Specification Using Explainable Artificial Intelligence Methods (The Eurographics Association, 2020)
Authors: Schlegel, Udo; Cakmak, Eren; Keim, Daniel A.
Editors: Archambault, Daniel; Nabney, Ian; Peltonen, Jaakko

Explainable artificial intelligence (XAI) methods aim to reveal the non-transparent decision-making mechanisms of black-box models. Evaluating the insight generated by such XAI methods remains challenging, as the applied techniques depend on many factors (e.g., parameters and human interpretation). We propose ModelSpeX, a visual analytics workflow for interactively extracting human-centered rule sets that generate model specifications from black-box models (e.g., neural networks). The workflow enables analysts to reason about the underlying problem, to extract decision rule sets, and to evaluate the suitability of the model for a particular task. An exemplary usage scenario walks an analyst through the steps of the workflow to show its applicability.
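The first abstract does not detail how the sparsity of a perturbation is controlled. As a loose illustration only, and not the authors' algorithm, the following sketch generates a perturbation toward a target class with an L1 penalty applied via proximal soft-thresholding; the penalty weight acts as the sparsity knob. All names here (sparse_perturbation, l1_weight, model) are hypothetical.

```python
# Illustrative sketch (not the paper's method): an L1-regularized
# perturbation of a classifier input toward a target class. Raising
# l1_weight drives more entries of the perturbation to exactly zero.
import torch
import torch.nn.functional as F

def sparse_perturbation(model, x, target, steps=100, lr=0.05, l1_weight=0.01):
    """Gradient descent on the target-class cross-entropy plus an L1 proximal step."""
    delta = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        logits = model(x + delta)
        loss = F.cross_entropy(logits, target)  # pull prediction toward target
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            # Proximal step for the L1 penalty: soft-threshold the perturbation,
            # zeroing out entries smaller than lr * l1_weight.
            delta.copy_(torch.sign(delta) * torch.clamp(delta.abs() - lr * l1_weight, min=0))
    return delta.detach()

# Usage (assuming some trained classifier `model`, inputs `x`, class indices `target`):
# delta = sparse_perturbation(model, x, target, l1_weight=0.02)
# The nonzero entries of delta mark the sparse, semantically inspectable change.
```

The soft-thresholding step is what makes the sparsity directly controllable: unlike a plain gradient penalty, it sets small perturbation entries exactly to zero each iteration.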
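ModelSpeX itself is an interactive visual analytics workflow, but its rule-extraction step can be illustrated in isolation. The sketch below, a minimal stand-in rather than the paper's implementation, distills a black-box model into a shallow surrogate decision tree and prints the tree as human-readable if/else rules; the function and parameter names are assumptions for illustration.

```python
# Illustrative sketch (not ModelSpeX itself): distill a black-box model
# into a shallow surrogate decision tree and render its decision rules.
from sklearn.tree import DecisionTreeClassifier, export_text

def extract_rules(black_box_predict, X, feature_names, max_depth=3):
    """Fit a surrogate tree on the black box's own predictions and
    return its rules as readable text."""
    y_hat = black_box_predict(X)  # labels assigned by the black-box model
    surrogate = DecisionTreeClassifier(max_depth=max_depth)
    surrogate.fit(X, y_hat)
    # export_text renders the fitted tree as indented if/else rules
    return export_text(surrogate, feature_names=list(feature_names))

# Example: treat a random forest as the black box on the iris data.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

iris = load_iris()
rf = RandomForestClassifier().fit(iris.data, iris.target)
print(extract_rules(rf.predict, iris.data, iris.feature_names))
```

Keeping max_depth small trades fidelity to the black box for rule sets short enough for an analyst to inspect, which mirrors the human-centered evaluation concern raised in the abstract.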