
Adversarial Perturbations

Research on adversarial learning has three key components: (i) creating adversarial images, (ii) detecting whether an image has been adversarially altered, and (iii) mitigating the effect of the adversarial perturbation process. Different attacks affect the input in different ways, and detecting them requires a combination of hand-crafted and learned features; for instance, some existing attacks can be detected using principal components, while some hand-crafted attacks can be detected using well-defined image processing operations. Our work focuses on these three key ideas in adversarial learning (i.e., perturbation, detection, and mitigation), building from the basics of adversarial learning to new algorithms for detection and mitigation.
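As a concrete illustration of the first two components, the sketch below shows (a) the fast gradient sign method (FGSM), one of the simplest ways to create an adversarial image, and (b) a principal-component-based detection score. This is a minimal sketch assuming a PyTorch classifier and scikit-learn's PCA; the function names, epsilon value, and residual-energy score are illustrative choices, not a specific published algorithm.

    import numpy as np
    import torch
    import torch.nn.functional as F
    from sklearn.decomposition import PCA

    def fgsm_attack(model, image, label, epsilon=0.03):
        # FGSM: take one step in the direction of the sign of the
        # loss gradient with respect to the input pixels.
        image = image.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(image), label)
        loss.backward()
        adversarial = image + epsilon * image.grad.sign()
        # Keep the perturbed image in the valid pixel range.
        return adversarial.clamp(0.0, 1.0).detach()

    def fit_pca(clean_images, n_components=100):
        # Fit PCA on flattened clean images; the retained components
        # describe the subspace occupied by natural images.
        X = clean_images.reshape(len(clean_images), -1)
        return PCA(n_components=n_components).fit(X)

    def residual_energy(pca, image):
        # Energy outside the retained principal subspace; adversarial
        # perturbations tend to inflate this residual, so thresholding
        # it gives a simple detection score.
        x = image.reshape(1, -1)
        reconstruction = pca.inverse_transform(pca.transform(x))
        return float(np.linalg.norm(x - reconstruction))

In practice, the detection threshold would be calibrated on held-out clean and attacked images, and principal components are only one of the hand-crafted cues mentioned above; learned features can be combined with them for attacks that do not show up in principal-component statistics.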

BTAS 2018 Tutorial Presentation: PDF
WIFS Tutorial Presentation: PDF
FG 2019 Presentation: PDF

Related Publications: