A new method compares the reasoning of a machine-learning model to that of a human
Understanding why a machine-learning model makes certain decisions can be just as important as determining whether the decisions are correct. A machine-learning algorithm might predict correctly that a lesion on the skin is cancerous. However, it could have predicted this based on an unrelated blip in a clinical photograph.
Although there are tools to help experts understand a model’s reasoning, these methods often provide insight into only one decision at a time, and each must be manually assessed. Models are commonly trained using millions of data inputs, making it almost impossible for a human to evaluate enough decisions to identify patterns.
Researchers at MIT and IBM Research developed a method that enables users to quickly analyze a machine-learning model’s behavior. Their technique, called Shared Interest, incorporates quantifiable metrics that compare how well a model’s reasoning matches that of a human.
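To make the idea of a quantifiable comparison concrete, here is a minimal sketch, not the authors' actual implementation: it assumes the model's reasoning is represented as a set of salient input features (e.g., pixels flagged by a saliency method) and the human's reasoning as a set of annotated features, then scores their agreement with intersection-over-union. The function name and data representation are illustrative assumptions.

```python
def reasoning_overlap(model_salient: set, human_annotated: set) -> float:
    """Score agreement between the features a model's saliency method
    highlights and the features a human marked as relevant, using
    intersection-over-union (1.0 = perfect agreement, 0.0 = none)."""
    if not model_salient and not human_annotated:
        return 1.0  # both empty: trivially in agreement
    shared = model_salient & human_annotated   # features both point to
    either = model_salient | human_annotated   # features either points to
    return len(shared) / len(either)

# Toy example: pixel indices a saliency map flags vs. a human's
# outline of a skin lesion (hypothetical values).
model_pixels = {3, 4, 5, 6}
human_pixels = {4, 5, 6, 7}
score = reasoning_overlap(model_pixels, human_pixels)  # 3 shared / 5 total = 0.6
```

A high score suggests the model attends to the same evidence a human would; a low score flags decisions, like a correct cancer prediction driven by an unrelated blip in the photograph, that merit closer inspection.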