Robust Adversarial Examples [OpenAI]

We’ve created images that reliably fool neural network classifiers even when viewed at varied scales and from varied perspectives. This challenges a claim from last week that self-driving cars would be hard to trick maliciously, since they capture images at multiple scales, angles, and perspectives.
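The robustness comes from optimizing the perturbation over a whole distribution of transformations rather than a single view: average the attack gradient across sampled transforms, then take a step. Here is a minimal NumPy sketch of that idea using a toy linear classifier as a stand-in for a real network, and brightness/contrast jitter as a stand-in for scale and viewpoint changes; all names, the transform family, and the hyperparameters are illustrative, not from the post.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 64  # a flattened 8x8 "image"

# Toy linear 2-class classifier (stand-in for a real network).
W = rng.normal(size=(2, D))
x = rng.normal(size=D)
true_label = int(np.argmax(W @ x))
target = 1 - true_label  # class we want the model to predict instead

def transform(img):
    """Random brightness/contrast jitter, standing in for scale and
    viewpoint changes. t(z) = s*z + b."""
    s = rng.uniform(0.7, 1.3)
    b = rng.normal(0.0, 0.1)
    return s * img + b

def robust_perturbation(steps=100, lr=0.05, eps=1.0, n_samples=20):
    """Average the gradient of the (target - true) logit margin over
    sampled transforms, step in that direction, and clip the
    perturbation to stay small."""
    delta = np.zeros_like(x)
    w_diff = W[target] - W[true_label]
    for _ in range(steps):
        grad = np.zeros_like(x)
        for _ in range(n_samples):
            s = rng.uniform(0.7, 1.3)
            # For t(z) = s*z + b, the margin's gradient w.r.t. delta is s * w_diff.
            grad += s * w_diff
        delta += lr * grad / n_samples
        delta = np.clip(delta, -eps, eps)
    return delta

delta = robust_perturbation()
fooled = sum(
    int(np.argmax(W @ transform(x + delta)) == target) for _ in range(200)
)
print(f"fooled on {fooled}/200 random transforms")
```

Because the gradient is averaged over many sampled transforms, the resulting perturbation pushes toward the target class across the whole transform distribution, not just for one fixed view.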

Most current data science applications rarely account for malicious intent. Techniques will need to adapt once they must.
