CORC
Research Institutions: Hunan University [5]; Institute of Computing Technology [3]
Content Type: Journal Article [8]
Publication Date: 2019 [8]
Browse/Search Results: 8 records total, showing 1-8
Filters: Publication Date: 2019; Content Type: Journal Article
moDNN: Memory Optimal Deep Neural Network Training on Graphics Processing Units
Journal Article
IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS, 2019, Volume: 30, Issue: 3, Pages: 646-661
Authors: Hu, Xiaobo Sharon; Han, Yinhe; Chen, Danny Ziyi; Chen, Xiaoming
Views/Downloads: 34/0 | Submitted: 2019/04/03
Keywords: Deep neural networks; graphics processing units; memory usage
A Bi-layered Parallel Training Architecture for Large-Scale Convolutional Neural Networks
Journal Article
IEEE Transactions on Parallel & Distributed Systems, 2019, Volume: 30, Issue: 5, Pages: 965-976
Authors: Chen, Jianguo; Li, Kenli; Bilal, Kashif; Zhou, Xu; Li, Keqin
Views/Downloads: 6/0 | Submitted: 2019/12/13
Keywords: Acceleration; bi-layered parallel computing; Big data; Computational modeling; Computer architecture; convolutional neural networks; deep learning; distributed computing; Parallel processing; Task analysis; Training
A Bi-layered Parallel Training Architecture for Large-Scale Convolutional Neural Networks
Journal Article
IEEE Transactions on Parallel and Distributed Systems, 2019, Volume: 30, Issue: 5, Pages: 965-976
Authors: Jianguo Chen; Kenli Li; Kashif Bilal; Xu Zhou; Keqin Li
Views/Downloads: 9/0 | Submitted: 2019/12/13
Keywords: Training; Computer architecture; Computational modeling; Parallel processing; Task analysis; Distributed computing; Acceleration; Big data; bi-layered parallel computing; convolutional neural networks; deep learning
A Bi-layered Parallel Training Architecture for Large-Scale Convolutional Neural Networks
Journal Article
IEEE Transactions on Parallel & Distributed Systems, 2019, Volume: 30, Issue: 5, Pages: 965-976
Authors: Chen, JG; Li, KL; Bilal, K; Zhou, X; Li, KQ
Views/Downloads: 7/0 | Submitted: 2019/12/13
Keywords: Training; Computer Architecture; Computational Modeling; Parallel Processing; Task Analysis; Distributed Computing; Acceleration; Big Data; Bi-Layered Parallel Computing; Convolutional Neural Networks; Deep Learning
A Bi-layered Parallel Training Architecture for Large-Scale Convolutional Neural Networks
Journal Article
IEEE Transactions on Parallel & Distributed Systems, 2019, Volume: 30, Issue: 5, Pages: 965-976
Authors: Chen, Jianguo; Li, Kenli; Bilal, Kashif; Zhou, Xu; Li, Keqin
Views/Downloads: 3/0 | Submitted: 2019/12/17
Keywords: Acceleration; bi-layered parallel computing; Big data; Computational modeling; Computer architecture; convolutional neural networks; deep learning; distributed computing; Parallel processing; Task analysis; Training
A Bi-layered Parallel Training Architecture for Large-Scale Convolutional Neural Networks
Journal Article
IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS, 2019, Volume: 30, Issue: 5, Pages: 965-976
Authors: Chen, JG; Li, KL; Bilal, K; Zhou, X; Li, KQ
Views/Downloads: 8/0 | Submitted: 2019/12/17
Keywords: Big data; bi-layered parallel computing; convolutional neural networks; deep learning; distributed computing
Parameter Communication Consistency Model for Large-Scale Security Monitoring Based on Mobile Computing
Journal Article
IEEE ACCESS, 2019, Volume: 7, Pages: 171884-171897
Authors: Yang, Rui; Zhang, Jilin; Wan, Jian; Zhou, Li; Shen, Jing
Views/Downloads: 2/0 | Submitted: 2020/12/10
Keywords: Mobile computing; security monitoring; distributed machine learning; limited synchronous parallel model; parameter server
Model Aggregation Method for Data Parallelism in Distributed Real-Time Machine Learning of Smart Sensing Equipment
Journal Article
IEEE ACCESS, 2019, Volume: 7, Pages: 172065-172073
Authors: Fan, Yuchen; Zhang, Jilin; Zhao, Nailiang; Ren, Yongjian; Wan, Jian
Views/Downloads: 1/0 | Submitted: 2020/12/10
Keywords: Distributed machine learning; stochastic gradient descent; model aggregation method; smart sensing equipment
© Copyright 2017 CSpace - Powered by CSpace