Explainable Machine Learning Models: Review and Computational Analysis

Posted by Šimon Růžička on February 02, 2024 · 1 min read

Computational Aspects of Machine Learning Seminar, CIT TUM, 2023. Written by Šimon Růžička and Utku Ipek.

Abstract

In recent years, the rapidly growing field of explainable machine learning has produced many innovative models that explain their decisions. Each model fits different use cases, depending on the task, input type, computational capabilities, and priorities in the accuracy-explainability trade-off. We provide a brief overview of multiple explanation approaches and describe their differences, with special regard to their computational aspects: the time and memory complexity of the algorithms, as well as possible optimizations. We highlight the distinction between post-hoc and transparent explaining models and their respective use cases. We focus in detail on three general groups of explaining models: simple model-agnostic explainers, models explaining predictions on images using focus maps, and inherently transparent predicting models. We provide multiple examples of models from each of these groups, describing their algorithms and comparing them against each other.
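As a small illustration of the first group, the sketch below shows one common post-hoc, model-agnostic explanation technique: permutation feature importance, which measures how much a model's accuracy drops when a single feature is shuffled. The dataset and classifier here are placeholders chosen for brevity, not the specific models analysed in the article.

```python
# Minimal sketch of a post-hoc, model-agnostic explainer:
# permutation feature importance on a toy classifier.
# The iris data and random forest are illustrative stand-ins.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
baseline = model.score(X_test, y_test)

# For each feature, shuffle its column and measure the accuracy drop.
# The larger the drop, the more the model relies on that feature.
rng = np.random.default_rng(0)
for j in range(X_test.shape[1]):
    X_perm = X_test.copy()
    rng.shuffle(X_perm[:, j])
    drop = baseline - model.score(X_perm, y_test)
    print(f"feature {j}: importance ~ {drop:.3f}")
```

Because the explainer only queries the model through its predictions, the same procedure works for any black-box classifier, at the cost of one extra evaluation pass per feature.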

Documents

Read the full article, or have a look at the presentation slides we created for the seminar.