Workshop on Domain Adaptation for Visual Understanding (DAVU)

Joint IJCAI/ECAI/AAMAS/ICML 2018 Workshop

Visual understanding is a fundamental cognitive ability in humans, essential for identifying objects and people and for interacting in social spaces. This skill makes interaction with the environment nearly effortless and has provided an evolutionary advantage to humans as a species. In our daily routines, we not only learn and apply knowledge for visual recognition; we also have an intrinsic ability to transfer knowledge between related visual tasks: if a new visual task is closely related to previous learning, we can quickly transfer that knowledge to perform the new task.

It is desirable for machine learning based visual recognition algorithms to have the same adaptability. Traditional algorithms, however, generally do not adapt to a new task even when given prior knowledge from a related one; they must learn the new task from scratch. These algorithms do not consider that the two visual tasks may be related and that the knowledge gained in one could be used to learn the new task more efficiently and in less time. Domain adaptation for visual understanding is the area of research that attempts to mimic this human behavior by transferring knowledge learned in one or more source domains to a related visual processing task in a target domain. Recent advances in domain adaptation, particularly in co-training, transfer learning, and online learning, have benefited computer vision significantly. For example, learning from high-resolution source-domain images and transferring that knowledge to low-resolution target-domain data has helped build improved cross-resolution face recognition algorithms.

This workshop will focus on recent advances in domain adaptation for visual recognition. The organizers invite researchers to participate and submit their research papers to the Domain Adaptation workshop.
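The source-to-target transfer described above can be sketched with a toy example: pretrain a simple model on abundant source-domain data, then fine-tune it on a small labeled target-domain set. This is a minimal illustration in plain NumPy using a logistic-regression "model"; the data, dimensions, and learning rates are illustrative assumptions, not part of any workshop submission.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(X, y, w=None, lr=0.1, steps=200):
    """Gradient descent on the logistic loss; w=None means training from scratch."""
    if w is None:
        w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = sigmoid(X @ w)
        w = w - lr * X.T @ (p - y) / len(y)
    return w

def log_loss(X, y, w):
    p = sigmoid(X @ w)
    return -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))

# Source domain: plentiful labeled data following some decision rule.
Xs = rng.normal(size=(500, 5))
ys = (Xs[:, 0] + Xs[:, 1] > 0).astype(float)

# Target domain: same underlying rule, but shifted inputs and far fewer labels.
Xt = rng.normal(loc=0.5, size=(20, 5))
yt = (Xt[:, 0] + Xt[:, 1] > 0).astype(float)

w_source = train(Xs, ys)                          # learn on the source domain
w_adapted = train(Xt, yt, w=w_source, steps=50)   # fine-tune on the target
w_scratch = train(Xt, yt, steps=50)               # target only, from scratch

print("source-only target loss :", log_loss(Xt, yt, w_source))
print("adapted target loss     :", log_loss(Xt, yt, w_adapted))
print("from-scratch target loss:", log_loss(Xt, yt, w_scratch))
```

Because the fine-tuned model starts from weights that already encode the related source task, a few target-domain steps suffice, whereas training from scratch must spend its limited target data relearning everything.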
Topics of interest include, but are not limited to:
  1. Novel algorithms for visual recognition using
    1. Co-training
    2. Transfer learning
    3. Online (incremental/decremental) learning
    4. Covariate shift
    5. Heterogeneous domain adaptation
    6. Dataset bias

  2. Domain adaptation in visual representation learning using
    1. Deep learning
    2. Shared representation learning
    3. Online (incremental/decremental) learning
    4. Multimodal learning
    5. Evolutionary computation-based domain adaptation algorithms

  3. Applications in computer vision such as
    1. Object recognition
    2. Biometrics
    3. Hyper-spectral imaging
    4. Surveillance
    5. Road transportation
    6. Autonomous driving
Submission Format: Authors should follow the IJCAI paper preparation instructions, including the page limit (e.g., 6 pages plus 1 extra page for references).

Important Dates:
Submission deadline: May 10, 2018
Decision notification: May 25, 2018

Paper Submission Page:


Richa Singh, IEEE Senior Member
IIIT Delhi, India and West Virginia University, USA

Mayank Vatsa, IEEE Senior Member
IIIT Delhi, India and West Virginia University, USA

Vishal M. Patel, IEEE Senior Member
Rutgers University, USA

Nalini Ratha, IEEE Fellow
IBM TJ Watson Research Center, USA

Program details

Session 1 (14:00–15:30)

14:00–14:10
Welcome

14:10–14:30
Domain Adaptation with Deep Metric Learning: Issam Hadj Laradji, University of British Columbia (UBC); Reza Babanezhad, UBC

14:30–14:50
On Minimum Discrepancy Estimation for Deep Domain Adaptation: Mohammad Mahfujur Rahman, Queensland University of Technology; Clinton Fookes, Queensland University of Technology; Mahsa Baktashmotlagh, QUT; Sridha Sridharan, QUT

14:50–15:10
Intuition Learning: Anush Sankaran, IBM; Mayank Vatsa, IIIT-Delhi; Richa Singh, IIIT-Delhi

15:10–15:30
XGAN: Unsupervised image-to-image translation for many-to-many mappings: Amélie Royer, IST Austria; Konstantinos Bousmalis, DeepMind; Stephan Gouws, Google Brain; Fred Bertsch, Google; Inbar Mosseri, Google; Forrester Cole, Google Research; Kevin Murphy, Google

15:30–16:00 Coffee break

Session 2 (16:00–18:00)

16:00–16:20
Improving Transferability of Deep Neural Networks: Parijat Dube, IBM Research; Bishwaranjan Bhattacharjee, IBM Research; Elisabeth Petit-Bios, IBM; Matthew Hill, IBM

16:20–16:40
Multi-modal Conditional Feature Enhancement for Facial Action Unit Recognition: Nishant Sankaran, University at Buffalo; Deen Mohan, University at Buffalo; Nagashri Lakshminarayana, University at Buffalo; Srirangaraj Setlur, University at Buffalo, SUNY; Venu Govindaraju, University at Buffalo, SUNY

16:40–17:00
Cross Modality Video Segment Retrieval with Ensemble Learning: Xinyan Yu, Shanghai Jiaotong University

17:00–17:20
Alleviating Tracking Model Degradation using Interpolation based Progressive Updating: Xiyu Kong, Shanghai Jiao Tong University; Qiping Zhou, State Grid Jiangxi Power Co., Ltd.; Yunyu Lai, State Grid Jiangxi Power Co., Ltd.; Muming Zhao, Shanghai Jiao Tong University; Chongyang Zhang, Shanghai Jiao Tong University