The Marginal Value of Adaptive Gradient Methods in Machine Learning

arxiv.org

Adaptive optimization methods, which perform local optimization with a metric constructed from the history of the iterates, are becoming increasingly popular for training deep neural networks. The authors construct an illustrative binary classification problem where the data is linearly separable: gradient descent (GD) and stochastic gradient descent (SGD) achieve zero test error, while AdaGrad, Adam, and RMSProp attain test errors arbitrarily close to one half, i.e., no better than random guessing.
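
The summary names plain (S)GD and the adaptive trio AdaGrad, RMSProp, and Adam, which rescale each coordinate of the gradient by statistics accumulated from past gradients. Below is a minimal sketch of those update rules on a toy linearly separable problem with a logistic loss; the data, hyperparameters, and loss are illustrative assumptions, not the paper's adversarial construction, so this does not reproduce the reported test-error gap.

```python
# Sketch: per-coordinate adaptive updates vs. plain SGD on a toy separable problem.
# The data and hyperparameters are assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(0)

# Toy data with labels in {-1, +1}, linearly separable by construction:
# the label is the sign of the first coordinate.
n, d = 200, 20
X = rng.normal(size=(n, d))
y = np.sign(X[:, 0])

def logistic_grad(w, x, yi):
    """Gradient of log(1 + exp(-yi * <w, x>)) with respect to w."""
    return -yi * x / (1.0 + np.exp(yi * (x @ w)))

def train(method, lr=0.05, epochs=30, eps=1e-8, beta1=0.9, beta2=0.999):
    w = np.zeros(d)
    v = np.zeros(d)   # accumulated squared gradients (AdaGrad / RMSProp / Adam)
    m = np.zeros(d)   # first-moment estimate (Adam only)
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            t += 1
            g = logistic_grad(w, X[i], y[i])
            if method == "sgd":
                w -= lr * g
            elif method == "adagrad":
                v += g * g                            # sum of squared gradients
                w -= lr * g / (np.sqrt(v) + eps)      # per-coordinate step sizes
            elif method == "rmsprop":
                v = beta2 * v + (1 - beta2) * g * g   # exponential moving average
                w -= lr * g / (np.sqrt(v) + eps)
            elif method == "adam":
                m = beta1 * m + (1 - beta1) * g
                v = beta2 * v + (1 - beta2) * g * g
                m_hat = m / (1 - beta1 ** t)          # bias correction
                v_hat = v / (1 - beta2 ** t)
                w -= lr * m_hat / (np.sqrt(v_hat) + eps)
    return w

for method in ["sgd", "adagrad", "rmsprop", "adam"]:
    w = train(method)
    err = np.mean(np.sign(X @ w) != y)
    print(f"{method:8s} train error: {err:.3f}")
```

On generic data like this, all four methods typically fit the training set; the paper's point is that on a carefully designed separable problem the solutions found by the adaptive methods generalize far worse than those found by GD and SGD.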

