CORC > Peking University > School of Electronics Engineering and Computer Science
Long short-term memory based convolutional recurrent neural networks for large vocabulary speech recognition
Li, Xiangang; Wu, Xihong
2015
Abstract: Long short-term memory (LSTM) recurrent neural networks (RNNs) have been shown to give state-of-the-art performance on many speech recognition tasks, as they are able to provide a learned, dynamically changing contextual window over all sequence history. On the other hand, convolutional neural networks (CNNs) have brought significant improvements to deep feed-forward neural networks (FFNNs), as they are better able to reduce spectral variation in the input signal. In this paper, a network architecture called the convolutional recurrent neural network (CRNN) is proposed by combining the CNN and the LSTM RNN. In the proposed CRNNs, each speech frame, without adjacent context frames, is organized as a number of local feature patches along the frequency axis, and an LSTM network is then run on each feature patch along the time axis. We train and compare FFNNs, LSTM RNNs, and the proposed LSTM CRNNs in various configurations. Experimental results show that the LSTM CRNNs can exceed state-of-the-art speech recognition performance. Copyright © 2015 ISCA.; EI; pp. 3219-3223; 2015-January
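The patch-plus-LSTM organization described in the abstract can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the patch size, stride, hidden size, and random weights are illustrative assumptions, chosen only to show each frame being split into local frequency patches and a small LSTM being run along the time axis of each patch.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def frequency_patches(frames, patch_size, stride):
    """Split each frame's filterbank values into local patches along frequency.

    frames: (T, F) log-mel spectrogram. Returns (num_patches, T, patch_size),
    i.e. one time sequence per local frequency patch.
    """
    T, F = frames.shape
    starts = range(0, F - patch_size + 1, stride)
    return np.stack([frames[:, s:s + patch_size] for s in starts])

def lstm_over_time(seq, W, U, b):
    """Run a single-layer LSTM along the time axis of one patch sequence.

    seq: (T, D); W: (4H, D); U: (4H, H); b: (4H,).
    Returns the final hidden state, shape (H,).
    """
    H = U.shape[1]
    h = np.zeros(H)
    c = np.zeros(H)
    for x in seq:
        z = W @ x + U @ h + b
        i, f, o, g = np.split(z, 4)            # input, forget, output gates; candidate
        i, f, o, g = sigmoid(i), sigmoid(f), sigmoid(o), np.tanh(g)
        c = f * c + i * g                      # cell state update
        h = o * np.tanh(c)                     # hidden state
    return h

# Illustrative usage (all sizes are assumptions, not from the paper):
rng = np.random.default_rng(0)
frames = rng.standard_normal((50, 40))         # 50 frames, 40 mel filterbank bins
patches = frequency_patches(frames, patch_size=8, stride=4)   # -> (9, 50, 8)

H = 16                                         # LSTM hidden size (illustrative)
W = 0.1 * rng.standard_normal((4 * H, 8))
U = 0.1 * rng.standard_normal((4 * H, H))
b = np.zeros(4 * H)

# One LSTM pass per frequency patch, each along the time axis.
feats = np.stack([lstm_over_time(p, W, U, b) for p in patches])  # -> (9, 16)
```

Note that, as in the paper's description, each frame contributes only its own frequency content (no adjacent context frames are stacked); temporal context comes entirely from the LSTM recurrence along the time axis.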
Language: English
Source: 16th Annual Conference of the International Speech Communication Association, INTERSPEECH 2015
Content type: Other
Source URL: [http://ir.pku.edu.cn/handle/20.500.11897/436962]
Collection: School of Electronics Engineering and Computer Science
Recommended citation (GB/T 7714):
Li, Xiangang, Wu, Xihong. Long short-term memory based convolutional recurrent neural networks for large vocabulary speech recognition. 2015-01-01.

Unless otherwise stated, all content in this system is protected by copyright, with all rights reserved.

© 2017 CSpace - Powered by CSpace