The Building Blocks of Interpretability

distill.pub

This article combines multiple interpretability techniques to understand what is learned in different parts of GoogLeNet and how those parts work together to classify images. It offers great insights into, for example, the importance of floppy-ear detectors for image classification, gives a nice overview of the available techniques, and evaluates how much we can actually infer from the results. To top it off, everything is made clear with interactive visualizations and is easily reproducible using Jupyter notebooks.
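
If you want to try this yourself, the article's notebooks are built on the Lucid library. Here is a minimal sketch of the basic feature-visualization workflow, assuming a TensorFlow 1.x environment with `lucid` installed; the particular layer/channel (`mixed4a_pre_relu:476`) is just an illustrative choice, not one singled out by the article:

```python
# Minimal feature-visualization sketch with Lucid (assumes TF 1.x; pip install lucid).
import lucid.modelzoo.vision_models as models
import lucid.optvis.render as render

# Load GoogLeNet (InceptionV1), the network the article dissects.
model = models.InceptionV1()
model.load_graphdef()

# Optimize an input image to maximally activate one channel of one layer.
# The layer/channel choice here is illustrative, not taken from the article.
images = render.render_vis(model, "mixed4a_pre_relu:476")
```

Swapping in other layer/channel names lets you explore what different parts of the network respond to, which is exactly the kind of poking around the article encourages.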
