Grouping-By-ID: Guarding Against Adversarial Domain Shifts

When training a deep network for image classification, one can broadly distinguish between two types of latent features that drive the classification: "core" features, whose distribution does not change substantially across domains, and "style" (or "orthogonal") features, whose distribution can change substantially across domains. The latter typically include simple features such as position or brightness, but also more complex ones like hair color or posture in images of persons. The authors develop a novel method based on a causal framework to guard against future adversarial domain shifts by constraining the network to use only the "core" features for classification.
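The grouping idea can be illustrated with a small sketch: examples that share an identity (the same person photographed under different "style" conditions) are grouped, and the variance of the network's outputs within each group is penalized, pushing the network toward features that are stable within a group. This is a minimal NumPy sketch of such a conditional-variance penalty, not the authors' exact implementation; the function name and interface are illustrative.

```python
import numpy as np

def grouping_penalty(logits, group_ids):
    """Average within-group variance of network outputs.

    logits:    (n_examples, n_classes) array of model outputs.
    group_ids: length-n list of identity labels; examples with the same
               ID differ only in "style" (e.g. same person, new pose).

    Minimizing this term (added to the usual classification loss)
    discourages the network from relying on features that vary
    within a group, i.e. on "style" features.
    """
    penalty = 0.0
    groups = set(group_ids)
    for g in groups:
        idx = [i for i, gid in enumerate(group_ids) if gid == g]
        if len(idx) < 2:
            continue  # a singleton group carries no variance information
        penalty += logits[idx].var(axis=0, ddof=1).sum()
    return penalty / max(len(groups), 1)

# Identical outputs within each ID group: penalty is zero.
logits = np.array([[2.0, 0.1], [2.0, 0.1], [0.3, 1.5], [0.3, 1.5]])
print(grouping_penalty(logits, [0, 0, 1, 1]))  # → 0.0
```

In training, this penalty would be added to the cross-entropy loss with a tuning weight, trading off fit against invariance across the grouped examples.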
