Adversarial Attacks Against Medical Deep Learning Systems

arxiv.org

The discovery of adversarial examples has raised concerns about the practical deployment of deep learning systems. In this paper, the authors argue that medicine may be uniquely susceptible to adversarial attacks, both because of the monetary incentives at play and because of technical vulnerabilities. To that end, they outline the healthcare economy and the incentives it creates for fraud, demonstrate adversarial attacks on three popular medical imaging tasks, and give concrete examples of how and why such attacks could realistically be carried out.
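For readers unfamiliar with how such perturbation-based attacks are crafted, below is a minimal sketch of a projected gradient descent (PGD) attack in PyTorch, one standard way of generating adversarial examples against an image classifier. The off-the-shelf ResNet, random input, and attack hyperparameters are illustrative assumptions for the sketch, not the authors' exact setup.

```python
# Minimal PGD attack sketch (illustrative; not the paper's exact configuration).
import torch
import torch.nn as nn
import torchvision.models as models

def pgd_attack(model, x, y, eps=8 / 255, step=2 / 255, iters=10):
    """Return an adversarial version of x within an L-infinity ball of radius eps."""
    x_adv = x.clone().detach()
    for _ in range(iters):
        x_adv.requires_grad_(True)
        loss = nn.functional.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Take a signed gradient step to increase the loss, then project
        # back into the eps-ball around the original image.
        x_adv = x_adv.detach() + step * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv

if __name__ == "__main__":
    model = models.resnet18(weights=None).eval()  # stand-in for a medical imaging classifier
    x = torch.rand(1, 3, 224, 224)                # placeholder image
    y = torch.tensor([0])                         # placeholder label
    x_adv = pgd_attack(model, x, y)
    print((x_adv - x).abs().max())                # perturbation stays within eps
```

The key point the paper builds on is that the perturbation is bounded (here by eps in the L-infinity norm), so the adversarial image can look unchanged to a clinician while flipping the model's prediction.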

Read more...
