Recent Advances for a Better Understanding of Deep Learning - Part I

towardsdatascience.com

How can we understand the highly non-convex loss function associated with deep neural networks? Why does stochastic gradient descent even converge?

With thousands of papers submitted to ICML this year, it's impossible for mere mortals like me to stay on top of current research without kind souls writing summary posts. This is an excellent one, summarizing a recent ICML talk.

Read more...
