Workshop Program

8:30 – 9:30 A.M.: Oral Session-1

Deep Features for Recognizing Disguised Faces in the Wild

Ankan Bansal, University of Maryland, USA; Rajeev Ranjan, University of Maryland, USA; Carlos D. Castillo, University of Maryland, USA; Rama Chellappa, University of Maryland, USA

Face Verification with Disguise Variations via Deep Disguise Recognizer

Naman Kohli, West Virginia University, USA; Daksha Yadav, West Virginia University, USA; Afzel Noore, Texas A&M University-Kingsville, USA

DisguiseNet: A Contrastive Approach for Disguised Face Verification in the Wild

Skand Vishwanath Peri, Indian Institute of Technology Ropar, India; Abhinav Dhall, Indian Institute of Technology Ropar, India

9:30 – 10:30 A.M.: DFW Summary, Invited Speaker-1, and Awards

Disguised Faces in the Wild

Vineet Kushwaha, IIIT-Delhi, India; Maneet Singh, IIIT-Delhi, India; Richa Singh, IIIT-Delhi, India; Mayank Vatsa, IIIT-Delhi, India; Nalini Ratha, IBM TJ Watson Research Center, USA; Rama Chellappa, University of Maryland, USA

Keynote Speaker: Christopher Boehnen, IARPA

Title: The Future of Face Recognition: Disguise, Attacks, and Accuracy


This talk will focus on the future of face recognition with respect to presentation attacks, facial disguise, and recognition accuracy.

DFW2018 Competition Awards

10:30 – 11:00 A.M.: Coffee

11:00 A.M. – 12:00 P.M.: Panel Session on “Attacks on Face Recognition”

Panel Chair: Ajay Kumar (HKPU)

Panelists: Terry Boult (UCCS), Gang Hua (Microsoft), Xiaoming Liu (MSU), Manohar Paluri (Facebook)

12:00 – 1:30 P.M.: Lunch

1:30 – 2:30 P.M.: Invited Speaker-2

Richard W. Vorder Bruegge, FBI

Title: Human Identification from Images When Faces are Disguised


Identification of humans from images most typically relies upon facial comparison analysis. Facial comparison experts use a morphological approach to conducting such examinations, and recent black-box studies demonstrate the effectiveness of this approach. In the absence of a full facial image, examiners leverage as many visible characteristics as possible to conduct a morphological analysis, taking care to exercise additional caution in the ultimate conclusion. Typical efforts to disguise identity encountered in casework include covering the ocular region (e.g., with sunglasses) or the nose and mouth region (e.g., with balaclavas). Ears and tattoos (whether on the head or elsewhere) are also intrinsic components of the approach used in casework, so when visible they offer important features to examine. This presentation will provide insight into the approach forensic examiners use to leverage such characteristics, and will also reference secondary analyses that may be useful in identification scenarios, including height determination and clothing and footwear comparisons.

2:30 – 3:30 P.M.: Oral Session-2

Deep Disguised Faces Recognition

Kaipeng Zhang, National Taiwan University, Taipei, Taiwan; Ya-Liang Chang, National Taiwan University, Taipei, Taiwan; Winston Hsu, National Taiwan University, Taipei, Taiwan

Hard Example Mining with Auxiliary Embeddings

Evgeny Smirnov, Speech Technology Center; Aleksandr Melnikov, ITMO University; Andrei Oleinik, ITMO University; Elizaveta Ivanova, Speech Technology Center; Ilya Kalinovskiy, Speech Technology Center; Eugene Lukyanets, ITMO University

Detecting Presentation Attacks from 3D Face Masks under Multispectral Imaging

Jun Liu, The Hong Kong Polytechnic University, Hong Kong; Ajay Kumar, The Hong Kong Polytechnic University, Hong Kong

3:30 – 4:00 P.M.: Invited Speaker-3

Gerard Medioni, USC

Title: On Face Segmentation, Face Swapping and Face Perception


This talk discusses face swapping under the most extreme viewing settings and its implications for face identification. We will show that even when face images are unconstrained and arbitrarily paired, face swapping between them is actually quite simple. In particular, we will (a) explain how a standard fully convolutional network (FCN) can achieve remarkably fast and accurate segmentations when provided with rich example sets; for this purpose, we describe novel data collection and generation routines that produce challenging segmented face examples at little cost; (b) show how the segmentations obtained by our system enable robust face swapping under unprecedented conditions; and finally, (c) demonstrate that, unlike previous work, these swapped faces are robust enough to allow for extensive quantitative tests. To this end, we will present results obtained on the Labeled Faces in the Wild (LFW) benchmark, measuring the effect of intra- and inter-subject face swapping on recognition. These results show that our intra-subject swapped faces remain as recognizable as their sources, testifying to the effectiveness of the swapping method. In line with well-known perceptual studies, we further show that better face swapping produces less recognizable inter-subject results. This is the first time this effect has been quantitatively demonstrated for machine vision systems.

4:00 P.M.: Closing Remarks