Maneet Singh (IIIT-Delhi); Mohit Chawla (IIIT-Delhi); Richa Singh (IIIT-Delhi); Mayank Vatsa (IIIT-Delhi); Rama Chellappa (University of Maryland)
Ming-Yu Liu is a distinguished research scientist at NVIDIA Research. Before joining NVIDIA in 2016, he was a principal research scientist at Mitsubishi Electric Research Labs (MERL). He earned his Ph.D. from the Department of Electrical and Computer Engineering at the University of Maryland, College Park in 2012 under the supervision of Prof. Rama Chellappa. In 2014, he received the R&D 100 Award from R&D Magazine for his robotic bin-picking system. His semantic image synthesis paper and scene understanding paper were best paper finalists at CVPR 2019 and RSS 2015, respectively. At SIGGRAPH 2019, he won the Best in Show Award and the Audience Choice Award in the Real-Time Live show for his image synthesis work. His research focuses on generative image modeling, with the goal of giving machines human-like imagination capabilities.
Jiankang Deng (Imperial College London); Stefanos Zafeiriou (Imperial College London)
Taewook Kim (POSTECH); YongHyun Kim (Kakao Corp.); Inhan Kim (POSTECH); Daijin Kim (POSTECH)
Arulkumar Subramaniam (Indian Institute of Technology Madras); Ajay Narayanan (Indian Institute of Information Technology Design & Manufacturing Kancheepuram); Anurag Mittal (Indian Institute of Technology Madras)
Evgeny Smirnov (Speech Technology Center); Andrei Oleinik (Speech Technology Center); Aleksandr Lavrentev (Speech Technology Center); Elizaveta Shulga (Speech Technology Center); Vasiliy Galyuk (Speech Technology Center); Nikita Garaev (ITMO University); Margarita Zakuanova (Speech Technology Center); Aleksandr Melnikov (ITMO University)
Lei Jiang (Jiangnan University); Xiaojun Wu (Jiangnan University); Josef Kittler (University of Surrey, UK)
Ifeoma Nwogu (Rochester Institute of Technology); Geeta Madhav Gali (Rochester Institute of Technology)
Jiafeng Cheng (Xidian University); Deyan Xie (Xidian University); Wei Xia (Xidian University); Huanhuan Lian (Xidian University); Quanxue Gao (Xidian University)
Extensive research in the domain of face recognition has resulted in the development of algorithms achieving near-perfect performance on large-scale datasets. It has often been observed that most of these systems are susceptible to physical adversaries, that is, variations applied to the individual prior to the capture of input data. One of the most common, yet challenging and underexplored, forms of physical adversary is the incorporation of disguise accessories. Disguised face recognition encompasses handling both intentional and unintentional disguises, a research challenge pertinent to users such as law enforcement agencies. As a covariate, disguise can be used by an individual either to obfuscate their own identity or to impersonate someone else's. The problem is further aggravated in unconstrained environments, or "in the wild" scenarios.
The organizers invite researchers to participate in the Disguised Faces in the Wild 2019 competition and to submit their research papers to the workshop.
The scope of the workshop is:
- Face recognition with disguise and spoofing variations
- Methods for impersonating identities using disguise
- Methods for detecting disguise and spoofing variations
- Face recognition with makeup variations
- Recognizing partially occluded faces
- Matching faces across plastic surgery
- Workshop paper submission deadline: 11:59 P.M. PST, August 12th, 2019
- Camera ready deadline: 11:59 P.M. PST, August 25th, 2019
- Authors are required to follow the ICCV 2019 submission guidelines; the templates are available on the ICCV 2019 website. Papers are limited to eight pages, with any additional pages containing only references.
- The workshop proceedings will be available at IEEE Xplore Digital Library and CVF Open Access.
Paper submission will be handled via the workshop's CMT site: https://cmt3.research.microsoft.com/DFW2019/