Face recognition has received significant attention due to the surveillance needs of security applications ranging from border control to e-payments to secure office access. However, at large crowd gatherings such as festivals or gaming events, identifying possible suspects and preventing avoidable incidents depends heavily on facial information. That information might not be captured effectively by traditional surveillance cameras because of their significant distance from the gathering. Drone-mounted sensors are an ideal solution in such settings; however, the acquired images generally suffer from poor quality, one probable reason being environmental factors such as atmospheric turbulence. With this special session, coupled with the challenge session, we want to take a first step towards unconstrained surveillance using face recognition. The challenge and session will not only foster the development of novel algorithms needed to improve face recognition performance but also disseminate knowledge about possible future directions for real-world face recognition systems.
Face recognition in drone-shot videos is applicable in scenarios such as identifying individuals stranded at remote locations or monitoring crowded places via a drone. Recently, IARPA's Biometric Recognition and Identification at Altitude and Range (BRIAR) program has also emphasized the challenging problem of identifying individuals at long range from elevated platforms. The DroneSURF dataset contains over 200 videos of 58 subjects captured across 411K frames. A pre-defined protocol is provided with the dataset, in which 34 subjects form the training partition and the remaining subjects are used for testing. Along with the drone-shot videos, the dataset also contains four high-resolution (HR) face images of each subject as gallery images. Two protocols are provided: (i) active surveillance, where the drone actively follows the subjects, and (ii) passive surveillance, where the drone monitors a particular area.
The challenge will involve two tasks: face verification and image enhancement.
In this challenge we will use the recently proposed DroneSURF database. DroneSURF, prepared by Kalra et al., has been used to study the impact of super-resolution on face recognition and is the largest available drone-based face database with identity information useful for matching. The images are captured at two different locations, reflecting two real-world drone use cases. At the first location, multiple subjects are asked to move about on the terrace of a building; since they can move in any direction or order they want, they may be occluded by objects present in the scene, such as a tree, or by other subjects. At the second location, subjects are asked to walk across an empty ground from one point (say `$A$') to another (say `$B$') and roam in the area in between. The images are captured in unconstrained environments at two different periods of the day, i.e., morning and evening. In both cases, the drone keeps moving and maintains a significant distance from the subjects.
Kalra I., Singh M., Nagpal S., Singh R., Vatsa M., Sujit P.B. DroneSURF: Benchmark dataset for drone-based face recognition. In IEEE International Conference on Automatic Face & Gesture Recognition (FG), 2019, pp. 1-7.
The evaluation will be performed using standard deep learning and machine learning libraries such as PyTorch and TensorFlow. The evaluation metrics for face verification will be the Equal Error Rate (EER) and the True Positive Rate (TPR) at a fixed False Positive Rate (FPR). Similarly, image enhancement will be evaluated using image similarity metrics such as PSNR and SSIM.
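To make the verification metrics concrete, the sketch below shows how EER, TPR at a fixed FPR, and PSNR can be computed from raw similarity scores and pixel values. This is an illustrative pure-Python implementation under our own assumptions (function names, the threshold sweep, and the toy scores are ours), not the challenge's official scoring tool; in practice these metrics would typically come from libraries such as scikit-learn or scikit-image.

```python
import math

def roc_points(genuine, impostor):
    """(FPR, TPR) pairs swept over all observed score thresholds.
    Convention assumed here: higher score = more likely a genuine pair."""
    points = []
    for t in sorted(set(genuine) | set(impostor)):
        tpr = sum(s >= t for s in genuine) / len(genuine)
        fpr = sum(s >= t for s in impostor) / len(impostor)
        points.append((fpr, tpr))
    return points

def eer(genuine, impostor):
    """Equal Error Rate: operating point where FPR ~= FNR (1 - TPR)."""
    fpr, tpr = min(roc_points(genuine, impostor),
                   key=lambda p: abs(p[0] - (1 - p[1])))
    # Average the two error rates at the closest crossover point.
    return (fpr + (1 - tpr)) / 2

def tpr_at_fpr(genuine, impostor, target_fpr):
    """Highest TPR achievable while keeping FPR <= target_fpr."""
    feasible = [tpr for fpr, tpr in roc_points(genuine, impostor)
                if fpr <= target_fpr]
    return max(feasible) if feasible else 0.0

def psnr(ref, img, max_val=255.0):
    """Peak Signal-to-Noise Ratio between two equal-length pixel sequences."""
    mse = sum((a - b) ** 2 for a, b in zip(ref, img)) / len(ref)
    return float("inf") if mse == 0 else 10 * math.log10(max_val ** 2 / mse)

# Toy scores (hypothetical, perfectly separable for illustration).
genuine = [0.9, 0.8, 0.75, 0.6]   # same-identity pair scores
impostor = [0.4, 0.35, 0.3, 0.2]  # different-identity pair scores
print(eer(genuine, impostor))           # 0.0: classes are perfectly separable
print(tpr_at_fpr(genuine, impostor, 0.01))  # 1.0 for these toy scores
```

SSIM is omitted here because it requires windowed local statistics; in practice one would use an established implementation such as `skimage.metrics.structural_similarity`.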
More details coming soon.