With recent advancements in deep learning, the capabilities of automatic face recognition have increased significantly. However, face recognition in unconstrained environments with non-cooperative users remains a research challenge, particularly pertinent for users such as law enforcement agencies. While several covariates such as pose, expression, illumination, aging, and low resolution have received significant attention, “disguise” is still considered an arduous covariate of face recognition. Disguise as a covariate involves both intentional and unintentional changes to a face through which one can either obfuscate his/her identity or impersonate someone else’s identity. The problem can be further exacerbated in unconstrained environments, or “in the wild” scenarios. However, disguise in the wild has not been studied in a comprehensive manner, primarily due to the unavailability of such a database. As part of the 1st International Workshop on Disguised Faces in the Wild at CVPR 2018, a competition is being held in which participants are asked to show their results on the Disguised Faces in the Wild (DFW) database.

"We invite researchers to participate in the “Disguised faces in the wild” competition and submit their research papers in the workshop"

Disguised Faces in the Wild Dataset

We have prepared the Disguised Faces in the Wild (DFW) dataset, containing over 11,000 images of 1,000 subjects with different kinds of disguise variations. Each subject folder contains normal, disguised, and impersonator images of that subject. Along with the images, we provide face coordinate files generated using Faster R-CNN.
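For participants who plan to use the provided coordinates, the following is a minimal sketch of cropping a face region from an image. The coordinate record format assumed here (one whitespace-separated "x_min y_min x_max y_max" record) and the example paths are purely illustrative, not the official format; please consult the files distributed with the dataset.

```python
from PIL import Image

def crop_face(image_path, coord_line):
    """Crop a face region given one bounding-box record.

    Assumes a hypothetical record format "x_min y_min x_max y_max";
    check the coordinate files shipped with DFW for the actual layout.
    """
    x_min, y_min, x_max, y_max = map(int, coord_line.split())
    img = Image.open(image_path).convert("RGB")
    return img.crop((x_min, y_min, x_max, y_max))

# Example (hypothetical path and coordinates):
# face = crop_face("DFW/train/SubjectA/SubjectA_001.jpg", "34 52 210 268")
```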

Protocol

The DFW dataset consists of 1,000 subjects and a total of 11,155 images. Out of this dataset, 400 subjects comprise the training set and 600 subjects comprise the testing set. Each subject folder contains normal, disguised, and impersonator images. Access to the DFW dataset is granted to participants after enrollment through the website(s) described above.

Participants can use external data for training their algorithms. However, the subjects and images in the testing set should not be used for any kind of training or preprocessing at any step of the algorithm.

Submission

  • Participants are required to generate similarity scores (a larger value indicates greater similarity) from their biometric matchers. If a participant's matcher generates a dissimilarity score instead of a similarity score, the scores should be negated or otherwise inverted so that the resulting value is a similarity measure. Participants in the competition have been provided with the testing set. From this data, participants are required to generate and submit similarity matrices of size 7771 x 7771, the size of the testing data; a minimal sketch of assembling such a matrix is given after this list. The ordering of test images is the same in both rows and columns. The (i,j) entry of a similarity matrix is the similarity score generated by the algorithm when image i from the testing set is matched against query image j as a probe sample. Entry (i,i) corresponds to matching an image against itself.
  • Participants are also required to submit the score matrix on the training database. The ordering should be exactly the same as the order given in the text file containing subject names.
  • Participants are required to submit the matrices along with the companion data for the corresponding 1,000-point ROC curves. The match scores computed with validation and disguised images will comprise the genuine scores. Impostor scores will include match scores generated from impersonator images as well as cross-subject match scores.
  • While it is not mandatory, we also encourage participants to submit their models/executables/APIs for verification of the results.
  • The participants can choose to remain anonymous in the analysis and report. Participants must explicitly make this request; the default position will be to associate results with participants. If you wish to keep your submission anonymous, kindly send an e-mail to rsingh@iiitd.ac.in with subject line "[DFW] Request for Anonymity".
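As a rough illustration of the expected submission format, the sketch below assembles a square similarity matrix from per-image features using a hypothetical pairwise scoring function; it is not the official evaluation code, and the row/column ordering must follow the test image list provided with the dataset.

```python
import numpy as np

def build_similarity_matrix(features, score):
    """Build an N x N similarity matrix (7771 x 7771 for the DFW test set).

    `features` holds per-image descriptors in the official test-image order,
    and `score(a, b)` is a hypothetical function returning a similarity
    (larger = more similar). If your matcher returns a distance, negate it
    first so that larger values indicate greater similarity.
    """
    n = len(features)
    sim = np.zeros((n, n), dtype=np.float32)
    for i in range(n):
        for j in range(n):
            sim[i, j] = score(features[i], features[j])
    return sim

# Example with cosine similarity on L2-normalized embeddings (illustrative only):
# feats = [f / np.linalg.norm(f) for f in raw_embeddings]
# sim = build_similarity_matrix(feats, lambda a, b: float(np.dot(a, b)))
# np.savetxt("test_similarity_matrix.txt", sim)
```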

Detailed Analysis

The protocol requires that training, as well as any preprocessing steps, use a set of subjects disjoint from the testing set. Here, disjoint means that the test set should not be used for any training or preprocessing step of an algorithm. Along with comparing verification performance on the entire testing set, more detailed analysis will be performed. The first analysis will be with respect to the face detection algorithm and will be performed across the following three divisions:

  • Results computed using the facial coordinates provided by the competition
  • Results that include automated face detection as part of the recognition process
  • Results that include manually annotated facial coordinates

To enable us to perform this analysis, participants are required to state whether or not their results were obtained using the competition-supplied facial coordinates. If not, participants should provide more details on how face detection was performed. As an optional submission, participants are also invited to share their face localization metadata.

Further analysis will be performed with respect to look-alikes, the effect of disguise, and the effect of manual vs. automated face detection. To support this analysis, it is important that values are provided in the predefined score matrix format, with no deviations whatsoever.

Report

As has become common for competitions such as this, at least one paper will be written by the organizers summarizing the findings of the competition. The purpose of this summary paper is three-fold. First, it will describe the scope and aims of the competition to the broader community. Second, it will provide, in one place, a record of how different approaches associated with different participants performed. Third, it will provide an opportunity for the organizers to report some analysis of these results across the various participants. As previously stated, participants may choose to remain anonymous in the subsequent reporting.

Performance across algorithms will be summarized in terms of ROC curves as well as core performance values on those curves, namely the verification rate at a fixed false accept rate of 0.01 and the equal error rate. The performance for comparison will be computed from the submitted similarity matrices. Results can be submitted using a form to be made available.
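For participants who wish to sanity-check their scores before submission, the following is a minimal sketch (not the organizers' evaluation script) of computing the verification rate at a false accept rate of 0.01 and the equal error rate from arrays of genuine and impostor similarity scores.

```python
import numpy as np

def verification_metrics(genuine, impostor, target_far=0.01):
    """Compute the verification rate (GAR) at a fixed FAR and the EER.

    `genuine` and `impostor` are 1-D arrays of similarity scores
    (larger = more similar). Illustrative sketch only.
    """
    genuine = np.asarray(genuine, dtype=np.float64)
    impostor = np.asarray(impostor, dtype=np.float64)
    thresholds = np.sort(np.concatenate([genuine, impostor]))[::-1]
    far = np.array([(impostor >= t).mean() for t in thresholds])  # non-decreasing
    gar = np.array([(genuine >= t).mean() for t in thresholds])
    # Verification rate at the strictest threshold whose FAR does not exceed target_far.
    idx = np.searchsorted(far, target_far, side="right") - 1
    gar_at_far = gar[idx] if idx >= 0 else 0.0
    # Equal error rate: the operating point where FAR and FRR (= 1 - GAR) meet.
    frr = 1.0 - gar
    eer_idx = int(np.argmin(np.abs(far - frr)))
    eer = (far[eer_idx] + frr[eer_idx]) / 2.0
    return gar_at_far, eer
```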

Two phases of the competition and winners

The competition involves two phases. In the first phase, participants receive early access to the data and have the opportunity to submit a paper describing their approach. Participants in the second phase will have more time to submit their score results but will not be able to submit a written paper. However, second-phase participants may be invited to present their work orally at the CVPR workshop, based on their performance results. Teams may elect to participate in either or both phases.

Phase 1:

  • Enrollment deadline for the overall competition: February 23, 2018
  • Result Submissions due to Organizers: March 10, 2018
  • Invitation to top performing participants for paper submission (results will not be released publicly): March 12, 2018
  • Paper submissions by selected competition participants due to organizers: March 20, 2018
  • Notification provided to authors: April 5, 2018
  • Camera-ready deadline: April 15, 2018

Phase 2:

  • Final results submissions to Organizers: May 1, 2018
  • Winners of the competition announced based on final Comparative Results: May 10, 2018

Results will be announced at the end of Phase 2 based on submissions received up until the May 1 deadline. Participants are permitted to submit one set of results to Phase 1 and then resubmit a second set of revised/final results to Phase 2. No paper submission is possible for participants who only participate in Phase 2, but winners will be invited to give an oral presentation on their work at the CVPR workshop.

Awards

The competition will have two award winners in each of the following three categories:

  • Impersonation (First place and Runner-up)
  • Obfuscation (First place and Runner-up)
  • Overall accuracy (First place and Runner-up)

In total there will be six awards with monetary prizes as follows:

  • $6,000 USD awarded to the top-scoring submission in each category
  • $2,500 USD awarded to the runner-up in each category

Prize awards for the DFW 2018 Competition are provided by the Intelligence Advanced Research Projects Activity (IARPA), within the Office of the Director of National Intelligence (ODNI). Winners will be contacted by IARPA after the workshop to coordinate prize disbursements. The prize award(s) have the following restrictions:

> Foreign Nationals & International Developers: All Developers can win prize awards with this exception: residents of Iran, Cuba, North Korea, the Crimea Region of Ukraine, Sudan, or Syria, or other countries prohibited on the U.S. State Department’s State Sponsors of Terrorism list. In addition, Developers are not eligible to win prizes if they are on the Specially Designated Nationals list promulgated and amended, from time to time, by the United States Department of the Treasury. It is the responsibility of the Developer to ensure that they are allowed to export their technology solution to the United States for the Live Test. Additionally, it is the responsibility of participants to ensure that no U.S. export control restrictions would prevent them from participating when foreign nationals are involved. If there are U.S. export control concerns, please contact the organizers and we will attempt to make reasonable accommodations if possible.

> Janus Research Teams: Entities affiliated with the IARPA Janus program may participate in the DFW competition but are ineligible to receive any monetary prizes.

Questions?

In case of any difficulties or questions, please send an email to rsingh@iiitd.ac.in.

The prize award is supported by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA). The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of ODNI, IARPA, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein.

Organizers

  • Rama Chellappa, University of Maryland
  • Mayank Vatsa, IIIT Delhi
  • Richa Singh, IIIT Delhi
  • Maneet Singh, IIIT Delhi
  • Vineet Kushwaha, IIIT Delhi