Two-Stage Multi-Target Joint Learning for Monaural Speech Separation
Shuai Nie (1); Shan Liang (1); Wei Xue (1); XueLiang Zhang (2); WenJu Liu (1); Like Dong (3); Hong Yang (3)
2015
Conference: Annual Conference of the International Speech Communication Association (INTERSPEECH)
Conference Date: 2015
Venue: Dresden, Germany
Keywords: speech separation; multi-target learning; computational auditory scene analysis (CASA)
Pages: 1503-1507
Abstract: Recently, supervised speech separation has been extensively studied and has shown considerable promise. Due to the temporal continuity of speech, auditory features and separation targets exhibit prominent spectro-temporal structures and strong correlations over the time-frequency (T-F) domain, which can be exploited for speech separation. However, many supervised speech separation methods model each T-F unit independently with only one target and largely ignore this useful information. In this paper, we propose a two-stage multi-target joint learning method that jointly models the related speech separation targets at the frame level. Systematic experiments show that the proposed approach consistently achieves better separation and generalization performance under low signal-to-noise-ratio (SNR) conditions.
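
The record gives no implementation details beyond the abstract, but the core idea (a first stage that predicts several correlated frame-level separation targets, and a second stage that refines them jointly) can be sketched as below. This is a minimal illustration, not the authors' architecture: the choice of an ideal ratio mask (IRM) and a clean log-power spectrum as the two joint targets, the PyTorch framing, the layer sizes, and the `TwoStageMultiTarget` / `joint_loss` names are all assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoStageMultiTarget(nn.Module):
    """Minimal sketch of two-stage multi-target joint learning.

    Stage 1 maps frame-level noisy features to coarse estimates of two
    correlated separation targets (assumed here: an ideal ratio mask and
    a clean log-power spectrum). Stage 2 re-estimates both targets from
    the stage-1 outputs concatenated with the input features, so the
    correlation between the targets can be exploited. Sizes are illustrative.
    """

    def __init__(self, feat_dim=64, freq_bins=64, hidden=512):
        super().__init__()
        out_dim = 2 * freq_bins  # mask and spectrum, stacked along frequency
        self.stage1 = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, out_dim),
        )
        self.stage2 = nn.Sequential(
            nn.Linear(feat_dim + out_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, out_dim),
        )
        self.freq_bins = freq_bins

    def forward(self, x):
        s1 = self.stage1(x)                       # stage 1: coarse joint estimates
        s2 = self.stage2(torch.cat([x, s1], -1))  # stage 2: joint refinement
        mask = torch.sigmoid(s2[..., :self.freq_bins])  # ratio mask in [0, 1]
        spec = s2[..., self.freq_bins:]                 # log-power spectrum
        return mask, spec

def joint_loss(mask, spec, mask_ref, spec_ref):
    # Multi-target objective: per-target MSE losses are summed so both
    # targets are learned jointly from shared hidden representations.
    return F.mse_loss(mask, mask_ref) + F.mse_loss(spec, spec_ref)

# Usage on random frame-level features: (batch, frames, feat_dim)
x = torch.randn(4, 100, 64)
mask, spec = TwoStageMultiTarget()(x)
print(mask.shape, spec.shape)  # torch.Size([4, 100, 64]) twice
```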
Indexed by: EI
Proceedings: Annual Conference of the International Speech Communication Association (INTERSPEECH)
Language: English
Content Type: Conference Paper
Source URL: http://ir.ia.ac.cn/handle/173211/11024
Collection: Institute of Automation, National Laboratory of Pattern Recognition, Robot Vision Team
Affiliations:
1. National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences
2. College of Computer Science, Inner Mongolia University
3. Electric Power Research Institute of ShanXi Electric Power Company, China State Grid Corp
Recommended Citation (GB/T 7714):
Shuai Nie, Shan Liang, Wei Xue, et al. Two-Stage Multi-Target Joint Learning for Monaural Speech Separation[C]. In: Annual Conference of the International Speech Communication Association (INTERSPEECH), Dresden, Germany, 2015: 1503-1507.