Slides from a talk given by Hugo Larochelle at the 2017 Deep Learning School in Montreal; the video is unfortunately not yet available. The talk lists a number of unintuitive properties of deep neural networks that we do not yet fully understand, with links to the papers exploring each point:
- They can make dumb errors
- They are strangely non-convex
- They work best when badly trained
- They can easily memorize (a sketch of the random-label experiment follows this list)
- They can be compressed (a pruning sketch follows as well)
- They are influenced by initialization and first examples, yet they forget what they learned
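
To make the memorization point concrete, here is a minimal sketch, assuming PyTorch, of the random-label experiment from Zhang et al., "Understanding deep learning requires rethinking generalization": an over-parameterized MLP fits labels that carry no information about the inputs. The architecture, sizes, and step count are illustrative assumptions, not taken from the talk.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(256, 32)          # random inputs
y = torch.randint(0, 10, (256,))  # labels with no relation to X

# Small over-parameterized MLP (hypothetical sizes for illustration)
model = nn.Sequential(nn.Linear(32, 512), nn.ReLU(), nn.Linear(512, 10))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Full-batch training on pure noise
for step in range(2000):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()

acc = (model(X).argmax(dim=1) == y).float().mean().item()
print(f"training accuracy on random labels: {acc:.2f}")  # typically ~1.00
```

That a network can reach near-perfect training accuracy on noise is exactly what makes its generalization on real data so puzzling.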
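
And for the compression point, a minimal sketch of one compression technique, magnitude pruning: zero out the smallest-magnitude weights and count what survives. The 90% sparsity level and the model are assumptions for illustration; distillation and quantization are other routes explored in that literature.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(32, 512), nn.ReLU(), nn.Linear(512, 10))

sparsity = 0.9  # drop the smallest 90% of weights in each layer
for module in model.modules():
    if isinstance(module, nn.Linear):
        w = module.weight.data
        # k-th smallest absolute value serves as the pruning threshold
        k = int(sparsity * w.numel())
        threshold = w.abs().flatten().kthvalue(k).values
        w[w.abs() < threshold] = 0.0

kept = sum((m.weight != 0).sum().item()
           for m in model.modules() if isinstance(m, nn.Linear))
total = sum(m.weight.numel()
            for m in model.modules() if isinstance(m, nn.Linear))
print(f"weights kept after pruning: {kept}/{total}")
```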