Explore Explainable AI with Class Maps: A Visual Tool to Understand Machine Learning Algorithms

Explainable AI (XAI) with Class Maps

This article introduces a new visual tool for explaining the results of classification algorithms, using examples in R or Python.

The goal of classification algorithms is to determine which group each observation in a dataset belongs to. Machine learning practitioners typically build several models and then select the final classifier that maximizes accuracy metrics on a test set. But practitioners and stakeholders often want more than predictions from a classification model: they want to know why the model makes certain decisions, particularly when it is used in high-stakes applications. Consider a medical setting in which a classifier identifies a patient as being at high risk for an illness. If medical experts could learn which factors contributed to that prediction, the information would help them determine appropriate treatments.
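As a minimal sketch of the model-selection workflow described above (not the article's own code, and assuming scikit-learn with one of its built-in toy datasets), the snippet below fits two candidate classifiers and keeps the one with the best test-set accuracy. Accuracy alone, of course, says nothing about why the winning model classifies a given patient as high-risk.

```python
# Sketch: pick the classifier with the highest test-set accuracy.
# Assumes scikit-learn; the dataset and models are illustrative choices.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# Two candidate models; in practice there may be many more.
candidates = {
    "logistic_regression": make_pipeline(StandardScaler(), LogisticRegression()),
    "random_forest": RandomForestClassifier(random_state=0),
}

scores = {}
for name, model in candidates.items():
    model.fit(X_train, y_train)
    scores[name] = accuracy_score(y_test, model.predict(X_test))

# The "best" model by accuracy -- which explains nothing about
# why it makes any individual prediction.
best = max(scores, key=scores.get)
print(scores, "->", best)
```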

Transparent models are those that show how their decisions are made. Complex models are the opposite: they are often referred to as "black boxes" because they do not explain how they make decisions. Opting for transparent models rather than black boxes does not always solve the problem of explainability, however, because transparency is often sacrificed for accuracy when the relationship between the observations and their labels is too complex.

Explainable AI is a collection of methods designed to help humans better understand machine learning models. Explainability is an important part of responsible AI development and usage.

Source:
https://towardsdatascience.com/explainable-ai-xai-with-class-maps-d0e137a91d2c