How to Intentionally Trick Neural Networks

A great introduction to 'hacking' neural networks. Adam Geitgey shows how to craft an adversarial example: an input that is modified just slightly so that an existing classifier produces a different result. He also covers more advanced scenarios, such as fooling a classifier you have no direct access to, and explains how to defend against such attacks. All of this comes with a Keras implementation so you can get your hands dirty right away.
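The core trick is gradient-based: compute the gradient of the target class score with respect to the *input* and nudge each pixel a tiny step in that direction. Below is a minimal sketch of that idea (in the style of the fast gradient sign method), using a toy hand-rolled logistic-regression "classifier" instead of a real Keras network so it stays self-contained; the weights, `epsilon`, and all names are illustrative, not from Geitgey's article.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary classifier with fixed weights: p(class 1) = sigmoid(w . x + b).
# In a real attack, this would be a trained Keras model.
w = rng.normal(size=10)
b = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    return sigmoid(w @ x + b)

# Pick an input the classifier assigns to class 0 (score below 0.5).
x = -0.1 * w

# Gradient of the class-1 score with respect to the input. For this linear
# model it is w times the sigmoid derivative; for a deep network you would
# backpropagate through the model down to the input instead.
p = predict(x)
grad = w * p * (1.0 - p)

# FGSM-style step: move every input dimension by epsilon in the sign of
# the gradient, i.e. the direction that increases the class-1 score.
epsilon = 0.5
x_adv = x + epsilon * np.sign(grad)

# predict(x) stays below 0.5, while predict(x_adv) crosses above it:
# a small, uniform perturbation flips the classifier's decision.
```

With a real network, keeping `epsilon` small is what makes the attack interesting: the perturbed image looks unchanged to a human but lands on the other side of the decision boundary.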
