Hamiltonian Descent Methods

arxiv.org

The authors propose an interesting family of optimization methods that achieve linear convergence using only first-order gradient information and constant step sizes, on a class of convex functions substantially larger than the smooth, strongly convex ones. This larger class includes functions whose second derivatives may be singular or unbounded at their minima. In short, these methods expand the class of convex functions on which linear convergence is attainable with first-order computation.
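To make the idea concrete, here is a minimal sketch (not the paper's exact algorithm) of a semi-implicit Euler discretization of the damped Hamiltonian dynamics the methods are built on, assuming the simplest kinetic energy k(p) = ½‖p‖², so that ∇k(p) = p; the function and hyperparameter names are illustrative. With this quadratic k the scheme reduces to a familiar heavy-ball-style momentum method; the paper's linear rates on non-strongly-convex objectives come from choosing a kinetic map k matched to the growth of f.

```python
import numpy as np

def hamiltonian_descent(grad_f, x0, step=0.05, gamma=1.0, iters=2000):
    """Sketch of a semi-implicit Euler discretization of the damped flow
        dx/dt = grad k(p),   dp/dt = -grad f(x) - gamma * p,
    with quadratic kinetic energy k(p) = 0.5 * ||p||^2 (so grad k(p) = p).
    Names and defaults are illustrative, not the paper's notation.
    """
    x = np.asarray(x0, dtype=float).copy()
    p = np.zeros_like(x)
    for _ in range(iters):
        # Momentum update: implicit in the damping term, explicit in the force:
        # p_new = p - step * grad_f(x) - step * gamma * p_new
        p = (p - step * grad_f(x)) / (1.0 + step * gamma)
        # Position update uses the fresh momentum.
        x = x + step * p
    return x

if __name__ == "__main__":
    # Example objective: f(x) = ||x||^4, whose Hessian vanishes at the
    # minimum, so f is convex but not strongly convex near 0.
    grad = lambda x: 4.0 * np.dot(x, x) * x
    print(hamiltonian_descent(grad, x0=np.array([2.0, -1.0])))
```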
