[[Conference Version]] Available after Camera Ready [[Arxiv]] Available Soon [[Code]] Available Soon [[Poster]] Available Soon
Domain adaptation has achieved resounding success in leveraging labeled data from a source domain to learn an accurate classifier for an unlabeled target domain. When deployed in the wild, however, the target domain usually contains unknown classes that are not shared with the source domain. This setting is termed Open Set Domain Adaptation (OSDA). While several methods have been proposed to address OSDA, none of them takes into account the openness of the target domain, measured by the proportion of unknown classes among all target classes. Openness is a critical factor in open set domain adaptation and exerts a significant impact on performance. In addition, existing work aligns the entire target domain with the source domain without excluding unknown samples, which may give rise to negative transfer due to the mismatch between unknown and known classes. To this end, this paper presents Separate to Adapt (STA), an end-to-end approach to open set domain adaptation. The approach adopts a coarse-to-fine weighting mechanism to progressively separate the samples of unknown and known classes, while simultaneously weighting their importance in feature distribution alignment. Our approach enables openness-agnostic open set domain adaptation, robust to a variety of openness levels in the target domain. We evaluate STA on several benchmark datasets at different openness levels and verify that STA significantly outperforms previous methods.
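To make the two key ideas above concrete, here is a minimal sketch in plain Python: the openness measure (proportion of unknown classes among all target classes) and a toy importance-weighted alignment objective that down-weights likely-unknown target samples. The function names, the scalar features, and the mean-matching loss are illustrative assumptions for exposition, not the authors' exact formulation.

```python
def openness(num_target_classes: int, num_shared_classes: int) -> float:
    """Proportion of target classes that are unknown (not shared with the source)."""
    return 1.0 - num_shared_classes / num_target_classes


def weighted_alignment_loss(source_feats, target_feats, target_weights):
    """Toy stand-in for importance-weighted feature distribution alignment.

    Each target sample carries a weight in [0, 1] (low for likely-unknown
    samples), so unknown-class samples contribute little to the alignment,
    reducing the risk of negative transfer. Features are scalars here for
    simplicity; a real model would align deep feature distributions.
    """
    source_mean = sum(source_feats) / len(source_feats)
    total_weight = sum(target_weights)
    target_mean = sum(w * f for w, f in zip(target_weights, target_feats)) / total_weight
    return abs(source_mean - target_mean)


# Example: 10 shared classes out of 20 target classes -> openness 0.5
print(openness(20, 10))
# Down-weighting the outlying target sample (weight 0.0) keeps alignment
# focused on known-class samples.
print(weighted_alignment_loss([1.0, 1.0], [1.0, 9.0], [1.0, 0.0]))
```

The second call illustrates the point of the weighting: with the outlier fully down-weighted, the weighted target mean matches the source mean, whereas uniform weights would pull the alignment toward the unknown sample.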