Relative Alignment Network for Source-Free Multimodal Video Domain Adaptation
Huang Yi [3,4]; Yang Xiaoshan [2,3,4]; Zhang Ji [1]; Xu Changsheng [2,3,4]
Date: 2022-10
Conference dates: 2022-10-10 to 2022-10-14
Conference location: Lisboa, Portugal
Abstract

Video domain adaptation aims to transfer knowledge from labeled source videos to unlabeled target videos. Existing video domain adaptation methods require full access to the source videos to reduce the domain gap between the source and target videos, which is impractical in real scenarios where the source videos are unavailable due to transmission-efficiency or privacy concerns. To address this problem, we propose to solve a source-free domain adaptation task for videos, where only a pre-trained source model and unlabeled target videos are available for learning a multimodal video classification model. Existing source-free domain adaptation methods cannot be directly applied to this task, since videos suffer from domain discrepancy along both the multimodal and temporal aspects, which makes adaptation especially difficult when the source data are unavailable. In this paper, we propose a Multimodal and Temporal Relative Alignment Network (MTRAN) to deal with these challenges. To explicitly imitate the domain shifts contained in the multimodal information and the temporal dynamics of the source and target videos, we divide the target videos into two splits according to the self-entropy of their classification results: the low-entropy videos are deemed source-like, while the high-entropy videos are deemed target-like. We then adopt a self-entropy-guided MixUp strategy to generate instance-level synthetic and hypothetical samples from the source-like and target-like videos, and use multimodal and temporal relative alignment schemes to push each synthetic sample to be similar to its corresponding hypothetical sample, which lies slightly closer to the source-like videos than the synthetic sample does. We evaluate the proposed model on four public video datasets; the results show that our model outperforms existing state-of-the-art methods.
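The abstract outlines two mechanisms: an entropy-based split of the target videos into source-like and target-like sets, and an entropy-guided MixUp that builds synthetic/hypothetical pairs for relative alignment. The following is a minimal sketch of how these steps could look; the split ratio, the mixing-coefficient rule, the margin `delta`, and the MSE alignment loss are all assumptions for illustration, since this record gives only the high-level description.

```python
import torch
import torch.nn.functional as F

def self_entropy(logits: torch.Tensor) -> torch.Tensor:
    # Per-sample self-entropy H(p) = -sum_c p_c log p_c of the softmax prediction.
    p = F.softmax(logits, dim=-1)
    return -(p * torch.log(p.clamp_min(1e-8))).sum(dim=-1)

def split_by_entropy(logits: torch.Tensor, ratio: float = 0.5):
    # Low-entropy (confident) target videos are deemed source-like,
    # high-entropy ones target-like. `ratio` is an assumed hyperparameter.
    h = self_entropy(logits)
    order = torch.argsort(h)                # ascending entropy
    k = int(ratio * order.numel())
    return order[:k], order[k:]             # source-like idx, target-like idx

def entropy_guided_mixup(x_src, x_tgt, h_src, h_tgt, delta: float = 0.1):
    # Mix paired source-like / target-like features. The coefficient rule
    # below (weight by relative confidence) and the margin `delta` are
    # assumptions; the hypothetical sample uses lam + delta so it sits
    # slightly closer to the source-like side than the synthetic one.
    lam = h_tgt / (h_src + h_tgt + 1e-8)                 # shape (B,)
    lam = lam.view(-1, *([1] * (x_src.dim() - 1)))       # broadcast over feature dims
    lam_hyp = (lam + delta).clamp(max=1.0)
    synthetic = lam * x_src + (1 - lam) * x_tgt
    hypothetical = lam_hyp * x_src + (1 - lam_hyp) * x_tgt
    return synthetic, hypothetical

def relative_alignment_loss(f_syn: torch.Tensor, f_hyp: torch.Tensor) -> torch.Tensor:
    # Push the synthetic sample toward its fixed hypothetical counterpart.
    # The paper applies such alignment along both multimodal and temporal
    # aspects; here it is collapsed into a single MSE term for brevity.
    return F.mse_loss(f_syn, f_hyp.detach())
```

In a training loop, `logits` would come from the pre-trained source model applied to the unlabeled target videos, and the alignment loss would be added to the adaptation objective; the actual MTRAN formulation in the paper may differ from this sketch.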

Proceedings: MM '22: Proceedings of the 30th ACM International Conference on Multimedia
Content type: Conference paper
Source URL: http://ir.ia.ac.cn/handle/173211/52094
Collection: Institute of Automation, National Laboratory of Pattern Recognition, Multimedia Computing and Graphics Group
Corresponding author: Xu Changsheng
Author affiliations:
1. DAMO Academy, Alibaba Group
2. Peng Cheng Laboratory
3. School of Artificial Intelligence, University of Chinese Academy of Sciences
4. Institute of Automation, Chinese Academy of Sciences
Recommended citation (GB/T 7714):
Huang Yi, Yang Xiaoshan, Zhang Ji, et al. Relative Alignment Network for Source-Free Multimodal Video Domain Adaptation[C]//MM '22: Proceedings of the 30th ACM International Conference on Multimedia. Lisboa, Portugal, 2022.