Interpretability & ICML 2017 best paper

Interpretability is becoming increasingly important, and the ICML 2017 committee acknowledged this by awarding the best paper award to "Understanding Black-box Predictions via Influence Functions" by Koh & Liang. The paper develops tools that scale up influence functions, a classic technique from robust statistics, to modern ML settings, making it possible to explain a black-box model's predictions by tracing them back to the training points most responsible for them. For anyone who wants to read more, O'Reilly has published a great overview of ideas on interpreting ML.
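At its core, the influence of a training point z on the loss at a test point z_test is I(z, z_test) = -grad L(z_test, theta)^T H^{-1} grad L(z, theta), where H is the Hessian of the training loss at the fitted parameters. The sketch below (not the authors' code) computes this quantity exactly for a tiny logistic-regression model in JAX; the data, the zero-initialized "fitted" parameters, and all names are illustrative assumptions.

import jax
import jax.numpy as jnp

def loss(theta, x, y):
    # Per-example logistic loss, labels y in {-1, +1}.
    return jnp.log1p(jnp.exp(-y * jnp.dot(theta, x)))

def train_loss(theta, X, Y):
    return jnp.mean(jax.vmap(loss, in_axes=(None, 0, 0))(theta, X, Y))

key = jax.random.PRNGKey(0)
X = jax.random.normal(key, (200, 5))      # hypothetical training features
Y = jnp.where(X[:, 0] > 0, 1.0, -1.0)     # hypothetical labels
theta_hat = jnp.zeros(5)                  # stand-in for the trained parameters

# Influence of training point X[0] on the loss at test point X[1]:
#   I = -grad L(z_test)^T  H^{-1}  grad L(z_train)
H = jax.hessian(train_loss)(theta_hat, X, Y)   # exact Hessian; fine for 5 params
g_train = jax.grad(loss)(theta_hat, X[0], Y[0])
g_test = jax.grad(loss)(theta_hat, X[1], Y[1])
influence = -g_test @ jnp.linalg.solve(H, g_train)
print(float(influence))

For a model this small the Hessian can be formed and inverted directly; the paper's contribution is handling modern networks, where that is infeasible, by approximating H^{-1}v with conjugate gradients and stochastic estimation built on Hessian-vector products.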
