With recent advancements in deep learning, the capabilities of automatic face recognition have increased significantly. However, face recognition in unconstrained environments with non-cooperative users remains a research challenge, one that is particularly pertinent for law enforcement agencies. While covariates such as pose, expression, illumination, aging, and low resolution have received significant attention, “disguise” is still considered an arduous covariate of face recognition. Disguise as a covariate involves both intentional and unintentional changes to a face through which a person can either obfuscate his/her own identity or impersonate someone else’s. The problem is further exacerbated in unconstrained or “in the wild” scenarios. However, disguise in the wild has not been studied comprehensively, primarily due to the unavailability of a suitable database. As part of the 1st International Workshop on Disguised Faces in the Wild at CVPR 2018, a competition is being held in which participants are asked to report their results on the Disguised Faces in the Wild (DFW) database.
“We invite researchers to participate in the Disguised Faces in the Wild competition and submit their research papers to the workshop.”
We have prepared the Disguised Faces in the Wild (DFW) dataset, containing over 11,000 images of 1,000 subjects with different kinds of disguise variations. Each subject folder contains normal, disguised, and impersonator images of the subject, along with a face-coordinate file generated using Faster R-CNN. Checksums for the dataset are CRC-32: dc5464d5 and SHA-1: 43961cf4b3e1c138737f0116047a203cc3f7cb3b.
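After downloading, the archive can be checked against the checksums above. A minimal sketch using only the Python standard library (the archive filename below is a placeholder; substitute the name of the file you actually downloaded):

```python
import hashlib
import zlib

def file_checksums(path, chunk_size=1 << 20):
    """Compute the CRC-32 (hex) and SHA-1 (hex) of a file, streaming in chunks."""
    crc = 0
    sha1 = hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            crc = zlib.crc32(chunk, crc)   # running CRC across chunks
            sha1.update(chunk)             # incremental SHA-1
    return format(crc & 0xFFFFFFFF, "08x"), sha1.hexdigest()

# Hypothetical archive name -- replace with the actual downloaded file:
# crc32_hex, sha1_hex = file_checksums("DFW.zip")
# assert crc32_hex == "dc5464d5"
# assert sha1_hex == "43961cf4b3e1c138737f0116047a203cc3f7cb3b"
```

Streaming the file in chunks keeps memory usage constant even for a multi-gigabyte archive; the `& 0xFFFFFFFF` mask ensures the CRC is reported as an unsigned 32-bit value.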
The DFW dataset consists of 1,000 subjects and a total of 11,155 images. Of these, 400 subjects comprise the training set and 600 subjects comprise the testing set. Each subject folder contains normal, disguised, and impersonator images. Access to the DFW dataset is granted to participants after enrollment through the website(s) described above.
Participants may use external data for training their algorithms. However, the subjects and images in the testing set must not be used for any kind of training or preprocessing at any step of the algorithm.
The protocol requires that training, as well as any preprocessing steps, use a set of subjects disjoint from the testing set; that is, the test set must not be used for any training or preprocessing performed by an algorithm. In addition to comparing verification performance on the entire testing set, more detailed analysis will be performed. The first analysis will be with respect to the face detection algorithm, and it will be performed using the following three divisions.
To enable this analysis, participants are required to state whether their results were obtained using the competition-supplied facial coordinates. If not, participants should provide details on how face detection was performed. As an optional submission, participants are also invited to share their face-localization metadata.
Further analysis will be performed with respect to look-alikes, the effect of disguise, and the effect of manual vs. automated face detection. To support this analysis, it is important that scores are provided in the predefined score-matrix format, with no deviations whatsoever.
As has become common for competitions such as this, at least one paper will be written by the organizers summarizing the findings of the competition. The purpose of this summary paper is three-fold. First, it will describe the scope and aims of the competition to the broader community. Second, it will provide, in one place, a record of how different approaches associated with different participants performed. Third, it will provide an opportunity for the organizers to report some analysis of these results across the various participants. As previously stated, participants may choose to remain anonymous in the subsequent reporting.
Performance across algorithms will be summarized in terms of ROC curves as well as two core values on those curves: the verification rate at a fixed false accept rate of 0.01, and the equal error rate. Performance will be computed from the submitted similarity matrices. Results can be submitted using a form to be made available.
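As a sketch of what these two summary values mean, the snippet below computes the verification rate at a target FAR and the equal error rate from pooled genuine and impostor similarity scores (higher = more similar). The `verification_metrics` helper and its brute-force threshold sweep are illustrative only, not the official evaluation script:

```python
import numpy as np

def verification_metrics(genuine, impostor, far_target=0.01):
    """Verification rate at a target false accept rate, plus the equal error rate,
    from lists of genuine-pair and impostor-pair similarity scores."""
    genuine = np.asarray(genuine, dtype=float)
    impostor = np.asarray(impostor, dtype=float)
    # Sweep thresholds over every observed score.
    thresholds = np.unique(np.concatenate([genuine, impostor]))
    far = np.array([(impostor >= t).mean() for t in thresholds])  # false accept rate
    vr = np.array([(genuine >= t).mean() for t in thresholds])    # verification rate
    # Best verification rate among thresholds whose FAR stays within the target.
    ok = far <= far_target
    vr_at_far = float(vr[ok].max()) if ok.any() else 0.0
    # EER: the operating point where FAR and FRR (= 1 - VR) cross.
    frr = 1.0 - vr
    idx = int(np.argmin(np.abs(far - frr)))
    eer = float((far[idx] + frr[idx]) / 2.0)
    return vr_at_far, eer
```

For perfectly separated scores (e.g. all genuine scores above all impostor scores) this returns a verification rate of 1.0 and an EER of 0.0; as the two distributions overlap, the EER rises and the verification rate at FAR = 0.01 falls.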
The competition involves two phases. In the first phase, participants receive early access to the data along with an opportunity to submit a paper describing their approach. Second-phase participants will have more time to submit their score results but will not be able to submit a written paper. However, second-phase participants may be invited, based on their performance results, to orally present their work at the CVPR workshop. Teams may elect to participate in either or both phases.
Results will be announced at the end of Phase 2 based on submissions received up until the May 1st deadline. Participants are permitted to submit one set of results in Phase 1 and then resubmit a second set of revised/final results in Phase 2. No paper submission is possible for participants who only take part in Phase 2, but winners will be invited to give an oral presentation on their work at the CVPR workshop.
The competition will have two award winners in each of the following three categories:
In total there will be six awards with monetary prizes as follows:
Prize awards for the DFW 2018 Competition are provided by the Intelligence Advanced Research Projects Activity (IARPA), within the Office of the Director of National Intelligence (ODNI). Winners will be contacted by IARPA after the workshop to coordinate prize disbursements. The prize award(s) have the following restrictions:
> Foreign Nationals & International Developers: All Developers can win prize awards, with this exception: residents of Iran, Cuba, North Korea, the Crimea region of Ukraine, Sudan, or Syria, or of other countries prohibited on the U.S. State Department’s State Sponsors of Terrorism list. In addition, Developers are not eligible to win prizes if they are on the Specially Designated Nationals list promulgated and amended, from time to time, by the United States Department of the Treasury. It is the responsibility of the Developer to ensure that they are allowed to export their technology solution to the United States for the Live Test. Additionally, it is the responsibility of participants to ensure that no U.S. export-control restrictions would prevent them from participating when foreign nationals are involved. If there are U.S. export-control concerns, please contact the organizers and we will attempt to make reasonable accommodations where possible.
> Janus Research Teams: Entities affiliated with the IARPA Janus program may participate in the DFW competition but are ineligible to receive any monetary prizes.
In case of any difficulties or questions, please email email@example.com.
The prize award is supported by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA). The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of ODNI, IARPA, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein.