Machine Learning Methods in Visualisation for Big Data 2021

Papers
Controllably Sparse Perturbations of Robust Classifiers for Explaining Predictions and Probing Learned Concepts
Jay Roberts and Theodoros Tsiligkaridis
Revealing Multimodality in Ensemble Weather Prediction
Natacha Galmiche, Helwig Hauser, Thomas Spengler, Clemens Spensberger, Morten Brun, and Nello Blaser

BibTeX (Machine Learning Methods in Visualisation for Big Data 2021)
@inproceedings{10.2312:mlvis.20211072,
  booktitle = {Machine Learning Methods in Visualisation for Big Data},
  editor    = {Archambault, Daniel and Nabney, Ian and Peltonen, Jaakko},
  title     = {{Controllably Sparse Perturbations of Robust Classifiers for Explaining Predictions and Probing Learned Concepts}},
  author    = {Roberts, Jay and Tsiligkaridis, Theodoros},
  year      = {2021},
  publisher = {The Eurographics Association},
  ISBN      = {978-3-03868-146-5},
  DOI       = {10.2312/mlvis.20211072}
}

@inproceedings{10.2312:mlvis.20211073,
  booktitle = {Machine Learning Methods in Visualisation for Big Data},
  editor    = {Archambault, Daniel and Nabney, Ian and Peltonen, Jaakko},
  title     = {{Revealing Multimodality in Ensemble Weather Prediction}},
  author    = {Galmiche, Natacha and Hauser, Helwig and Spengler, Thomas and Spensberger, Clemens and Brun, Morten and Blaser, Nello},
  year      = {2021},
  publisher = {The Eurographics Association},
  ISBN      = {978-3-03868-146-5},
  DOI       = {10.2312/mlvis.20211073}
}

Recent Submissions

  • Item
    MLVis 2021: Frontmatter
    (The Eurographics Association, 2021) Archambault, Daniel; Nabney, Ian; Peltonen, Jaakko
  • Item
    Controllably Sparse Perturbations of Robust Classifiers for Explaining Predictions and Probing Learned Concepts
    (The Eurographics Association, 2021) Roberts, Jay; Tsiligkaridis, Theodoros
    Explaining the predictions of a deep neural network (DNN) in image classification is an active area of research. Many methods focus on localizing pixels, or groups of pixels, that maximize a relevance metric for the prediction. Others create local "proxy" explainers that account for an individual prediction of a model. We explore "why" a model made a prediction by perturbing inputs to robust classifiers and interpreting the semantically meaningful results. For such an explanation to be useful to humans it should be sparse; however, generating sparse perturbations can be computationally expensive and infeasible on high-resolution data. Here we introduce controllably sparse explanations that can be efficiently generated on higher-resolution data to provide improved counterfactual explanations. Further, we use these controllably sparse explanations to probe what the robust classifier has learned. These explanations could provide insight for model developers as well as assist in detecting dataset bias. (An illustrative sketch of this kind of sparse perturbation appears after this list.)
  • Item
    Revealing Multimodality in Ensemble Weather Prediction
    (The Eurographics Association, 2021) Galmiche, Natacha; Hauser, Helwig; Spengler, Thomas; Spensberger, Clemens; Brun, Morten; Blaser, Nello
    Ensemble methods are widely used to simulate complex non-linear systems and to estimate forecast uncertainty. However, visualizing and analyzing ensemble data is challenging, in particular when multimodality arises, i.e., distinct likely outcomes. We propose a graph-based approach that explores multimodality in univariate ensemble data from weather prediction. Our solution utilizes clustering and a novel concept of life span associated with each cluster. We apply our method to historical predictions of extreme weather events and illustrate that it aids the understanding of the respective ensemble forecasts. (A sketch of per-time-step clustering with cluster life spans follows this list.)
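The following is a minimal, hypothetical sketch of the kind of sparse counterfactual perturbation described in the Roberts and Tsiligkaridis abstract: a gradient step pushes a (robust) classifier's prediction toward a chosen target class, and a soft-thresholding step keeps the perturbation sparse. The function name, parameters, and the soft-thresholding heuristic are illustrative assumptions, not the authors' published algorithm, which should be consulted for the actual formulation.

```python
# Illustrative sketch only -- NOT the authors' published method.
# Perturb an input toward a target class under a classifier `model`,
# soft-thresholding the perturbation each step so that it stays sparse.
import torch
import torch.nn.functional as F

def sparse_counterfactual(model, x, target, steps=100, step_size=0.05, shrink=0.01):
    """x: input batch (e.g. images); target: LongTensor of desired class indices.
    Returns the perturbed input and the sparse perturbation itself."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), target)  # push prediction toward `target`
        loss.backward()
        with torch.no_grad():
            delta -= step_size * delta.grad               # gradient step on the perturbation
            # soft-threshold: components smaller than `shrink` collapse to exactly zero
            delta.copy_(torch.sign(delta) * torch.clamp(delta.abs() - shrink, min=0.0))
        delta.grad.zero_()
    return (x + delta).detach(), delta.detach()
```

In the spirit of the paper, the few non-zero components of the returned perturbation are what one would inspect as a counterfactual explanation; the `shrink` parameter is the illustrative knob for controlling sparsity.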
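Similarly, a minimal sketch of the general idea behind the Galmiche et al. abstract: cluster the members of a univariate ensemble at every time step and track how long a given partition of the members persists, as a simple stand-in for the paper's notion of cluster life span. Ward-linkage clustering and the run-length definition of life span are assumptions made for illustration; the paper's graph-based method differs in detail.

```python
# Illustrative sketch only -- details differ from the paper's method.
# Cluster a univariate ensemble at every time step and measure how long
# each partition of the members persists (a simple cluster "life span").
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def partitions_per_timestep(ensemble, max_clusters=2):
    """ensemble: array (n_members, n_timesteps) of one scalar forecast quantity.
    Returns, per time step, a label-free partition of the member indices."""
    parts = []
    for step in range(ensemble.shape[1]):
        z = linkage(ensemble[:, step].reshape(-1, 1), method="ward")
        labels = fcluster(z, t=max_clusters, criterion="maxclust")
        parts.append(frozenset(frozenset(np.flatnonzero(labels == c).tolist())
                               for c in np.unique(labels)))
    return parts

def life_spans(parts):
    """Runs of consecutive time steps with an identical member partition;
    long runs indicate a persistent (multimodal) split of the ensemble."""
    spans, start = [], 0
    for step in range(1, len(parts)):
        if parts[step] != parts[step - 1]:
            spans.append((start, step - 1))
            start = step
    spans.append((start, len(parts) - 1))
    return spans
```

Long-lived partitions then point to forecast periods in which the ensemble is genuinely multimodal rather than merely spread out.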