Ankan Bansal, University of Maryland, USA; Rajeev Ranjan, University of Maryland, USA; Carlos D. Castillo, University of Maryland, USA; Rama Chellappa, University of Maryland, USA
Naman Kohli, West Virginia University, USA; Daksha Yadav, West Virginia University, USA; Afzel Noore, Texas A&M University-Kingsville, USA
Skand Vishwanath Peri, Indian Institute of Technology Ropar, India; Abhinav Dhall, Indian Institute of Technology Ropar, India
Vineet Kushwaha, IIIT-Delhi, India; Maneet Singh, IIIT-Delhi, India; Richa Singh, IIIT-Delhi, India; Mayank Vatsa, IIIT-Delhi, India; Nalini Ratha, IBM TJ Watson Research Center, USA; Rama Chellappa, University of Maryland, USA
Title: The Future of Face Recognition: Disguise, Attacks, and Accuracy
This talk will focus on where face recognition is going in terms of presentation attacks, facial disguise, and recognition accuracy.
Panelists: Terry Boult (UCCS), Gang Hua (Microsoft), Xiaoming Liu (MSU), Manohar Paluri (Facebook)
Title: Human Identification from Images When Faces are Disguised
Identification of humans from images most typically relies upon facial comparison analysis. Facial comparison experts utilize a morphological approach to conducting such examinations, and recent black-box studies demonstrate the effectiveness of this approach. In the absence of a full facial image, examiners will leverage as many visible characteristics as possible to conduct a morphological analysis, taking care to exercise additional caution in the ultimate conclusion. Typical efforts to disguise identity encountered in casework include covering the ocular region (e.g., sunglasses) or the nose and mouth region (e.g., balaclavas). Ears and tattoos (whether on the head or elsewhere) are also intrinsic components of the approach used in casework, so when visible they offer important features to examine. This presentation will provide more insight into the approach used by forensic examiners to leverage such characteristics and will also make reference to secondary analyses which may be useful in identification scenarios, including height determination and clothing and footwear comparisons.
Kaipeng Zhang, National Taiwan University, Taipei, Taiwan; Ya-Liang Chang, National Taiwan University, Taipei, Taiwan; Winston Hsu, National Taiwan University, Taipei, Taiwan
Evgeny Smirnov, Speech Technology Center; Aleksandr Melnikov, ITMO University; Andrei Oleinik, ITMO University; Elizaveta Ivanova, Speech Technology Center; Ilya Kalinovskiy, Speech Technology Center; Eugene Lukyanets, ITMO University
Jun Lio, The Hong Kong Polytechnic University, Hong Kong; Ajay Kumar, The Hong Kong Polytechnic University, Hong Kong
Title: On Face Segmentation, Face Swapping and Face Perception
This talk discusses face swapping under the most extreme viewing settings and its implications for face identification. We will show that even when face images are unconstrained and arbitrarily paired, face swapping between them is actually quite simple. In particular, we will (a) explain how a standard fully convolutional network (FCN) can achieve remarkably fast and accurate segmentations when provided with rich example sets; for this purpose, we describe novel data collection and generation routines which provide challenging segmented face examples at little cost; (b) show how the segmentations obtained by our system enable robust face swapping under unprecedented conditions; and (c), unlike previous work, demonstrate that these swapped faces are robust enough to allow for extensive quantitative tests. To this end, we will present results obtained on the Labeled Faces in the Wild (LFW) benchmark, measuring the effect of intra- and inter-subject face swapping on recognition. These results show that our intra-subject swapped faces remain as recognizable as their sources, testifying to the effectiveness of the swapping method. In line with well-known perceptual studies, we further show that better face swapping produces less recognizable inter-subject results. This is the first time this effect has been quantitatively demonstrated for machine vision systems.
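The core idea the abstract describes, compositing a source face onto a target image using a per-pixel segmentation mask, can be illustrated with a minimal sketch. This is not the authors' implementation: the `swap_face` function, the soft-mask feathering, and the assumption that an FCN has already produced a binary face mask are all illustrative simplifications.

```python
import numpy as np

def swap_face(source, target, mask, feather=2):
    """Composite a source face onto a target image using a
    segmentation mask (e.g. one produced by an FCN).

    source, target: float arrays of shape (H, W, 3) in [0, 1]
    mask: float array of shape (H, W), 1 where the face is visible
    feather: half-width of a box blur that softens the mask edges
    (hypothetical parameters; not from the talk itself)
    """
    k = 2 * feather + 1
    # Soften the binary mask with a box filter so the seam blends
    # smoothly instead of showing a hard cut-out edge.
    padded = np.pad(mask, feather, mode="edge")
    soft = np.zeros_like(mask, dtype=float)
    for dy in range(k):
        for dx in range(k):
            soft += padded[dy:dy + mask.shape[0], dx:dx + mask.shape[1]]
    soft /= k * k
    # Alpha-blend: face pixels come from the source, the rest from
    # the target.
    alpha = soft[..., None]
    return alpha * source + (1.0 - alpha) * target
```

Real systems typically follow the blend with color correction or Poisson blending to hide lighting differences; the quantitative LFW experiments described above would then compare recognition scores on the swapped outputs.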