Feature Visualization: How Neural Networks Build Up Their Understanding of Images

distill.pub

There is a growing sense that neural networks need to be interpretable to humans. The field of neural network interpretability has formed in response to these concerns. As it matures, two major threads of research have begun to coalesce: feature visualization and attribution. This article focuses on feature visualization.

This is a fascinating read with some great images. It illustrates just how early we are in understanding the behavior of neural networks, and the cutting-edge research under way to push us forward.
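The core technique behind the article's images is activation maximization: start from random noise and optimize the input image so that a chosen neuron or channel fires strongly. As a rough sketch of that idea in PyTorch (not the article's exact recipe, which adds regularization and other tricks; the layer and channel here are arbitrary assumptions):

```python
import torch
import torchvision.models as models

# Load a pretrained GoogLeNet, the family of networks the article visualizes.
model = models.googlenet(weights="DEFAULT").eval()

# Record the activations of one layer with a forward hook.
activations = {}
def save_activation(module, inputs, output):
    activations["target"] = output

model.inception4a.register_forward_hook(save_activation)  # assumed layer

# Optimize the pixels of the input image, not the network's weights.
image = torch.randn(1, 3, 224, 224, requires_grad=True)
optimizer = torch.optim.Adam([image], lr=0.05)

CHANNEL = 11  # assumed channel to visualize
for _ in range(256):
    optimizer.zero_grad()
    model(image)
    # Ascend the gradient of the channel's mean activation
    # (negated, because the optimizer minimizes).
    loss = -activations["target"][0, CHANNEL].mean()
    loss.backward()
    optimizer.step()
```

After a few hundred steps, `image` drifts toward a pattern that excites the chosen channel; much of the article is about the regularization needed to make such images look natural rather than noisy.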

Read more...
