CORC
Research institutions
Hunan University [7]
Institute of Computing Technology [6]
Institute of Automation [3]
Computer Network Information Center [2]
Tsinghua University [1]
Xi'an Jiaotong University [1]
More...
Content type
Journal article [23]
Publication date
2023 [1]
2022 [1]
2021 [1]
2020 [2]
2019 [8]
2018 [2]
More...
Browse/Search results: 23 records in total, items 1-10
Filter conditions
Content type: Journal article
Parallel Learning for Legal Intelligence: A HANOI Approach Based on Unified Prompting
Journal article
IEEE TRANSACTIONS ON COMPUTATIONAL SOCIAL SYSTEMS, 2023, Pages: 11
Authors: Song, Zhuoyang; Huang, Min; Miao, Qinghai; Wang, Fei-Yue
Views/Downloads: 6/0  |  Submitted: 2023/11/17
Keywords: Natural language processing (NLP); parallel learning (PL); parallel systems; pretrained language model (PLM); prompt tuning
Accelerating temporal action proposal generation via high performance computing
Journal article
Frontiers of Computer Science, 2022, Volume: 16, Issue: 4, Pages: 10
Authors: T. Wang; S. Y. Lei; Y. Y. Jiang; C. Chang
Views/Downloads: 12/0  |  Submitted: 2022/06/13
Why Dataset Properties Bound the Scalability of Parallel Machine Learning Training Algorithms
Journal article
IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS, 2021, Volume: 32, Issue: 7, Pages: 1702-1712
Authors: Cheng, Daning; Li, Shigang; Zhang, Hanping; Xia, Fen; Zhang, Yunquan
Views/Downloads: 27/0  |  Submitted: 2021/12/01
Keywords: Training; Scalability; Machine learning; Machine learning algorithms; Stochastic processes; Task analysis; Upper bound; Parallel training algorithms; training dataset; scalability; stochastic optimization methods
WP-SGD: Weighted parallel SGD for distributed unbalanced-workload training system
Journal article
JOURNAL OF PARALLEL AND DISTRIBUTED COMPUTING, 2020, Volume: 145, Pages: 202-216
Authors: Cheng Daning; Li Shigang; Zhang Yunquan
Views/Downloads: 35/0  |  Submitted: 2020/12/10
Keywords: SGD; Unbalanced workload; SimuParallel SGD; Distributed system
Distributed machine learning load balancing strategy in cloud computing services
Journal article
WIRELESS NETWORKS, 2020, Volume: 26, Issue: 8, Pages: 5517-5533
Authors: Li, Mingwei; Zhang, Jilin; Wan, Jian; Ren, Yongjian; Zhou, Li
Views/Downloads: 40/0  |  Submitted: 2020/12/10
Keywords: Mobile service computing; Cloud service; Distributed machine learning; Load balancing; Adaptive fast reassignment
moDNN: Memory Optimal Deep Neural Network Training on Graphics Processing Units
Journal article
IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS, 2019, Volume: 30, Issue: 3, Pages: 646-661
Authors: Hu, Xiaobo Sharon; Han, Yinhe; Chen, Danny Ziyi; Chen, Xiaoming
Views/Downloads: 34/0  |  Submitted: 2019/04/03
Keywords: Deep neural networks; graphics processing units; memory usage
A Bi-layered Parallel Training Architecture for Large-Scale Convolutional Neural Networks.
Journal article
IEEE Transactions on Parallel & Distributed Systems, 2019, Volume: 30, Issue: 5, Pages: 965-976
Authors: Chen, Jianguo; Li, Kenli; Bilal, Kashif; Zhou, Xu; Li, Keqin
Views/Downloads: 6/0  |  Submitted: 2019/12/13
Keywords: Acceleration; bi-layered parallel computing; Big data; Computational modeling; Computer architecture; convolutional neural networks; deep learning; distributed computing; Parallel processing; Task analysis; Training
A Bi-layered Parallel Training Architecture for Large-Scale Convolutional Neural Networks
Journal article
IEEE Transactions on Parallel and Distributed Systems, 2019, Volume: 30, Issue: 5, Pages: 965-976
Authors: Jianguo Chen; Kenli Li; Kashif Bilal; Xu Zhou; Keqin Li
Views/Downloads: 9/0  |  Submitted: 2019/12/13
Keywords: Training; Computer architecture; Computational modeling; Parallel processing; Task analysis; Distributed computing; Acceleration; Big data; bi-layered parallel computing; convolutional neural networks; deep learning; distributed computing
A Bi-layered Parallel Training Architecture for Large-Scale Convolutional Neural Networks
Journal article
IEEE Transactions on Parallel & Distributed Systems, 2019, Volume: 30, Issue: 5, Pages: 965-976
Authors: Chen, JG; Li, KL; Bilal, K; Zhou, X; Li, KQ
Views/Downloads: 7/0  |  Submitted: 2019/12/13
Keywords: Training; Computer Architecture; Computational Modeling; Parallel Processing; Task Analysis; Distributed Computing; Acceleration; Big Data; Bi-Layered Parallel Computing; Convolutional Neural Networks; Deep Learning; Distributed Computing
A Bi-layered Parallel Training Architecture for Large-Scale Convolutional Neural Networks.
Journal article
IEEE Transactions on Parallel & Distributed Systems, 2019, Volume: 30, Issue: 5, Pages: 965-976
Authors: Chen, Jianguo; Li, Kenli; Bilal, Kashif; Zhou, Xu; Li, Keqin
Views/Downloads: 3/0  |  Submitted: 2019/12/17
Keywords: Acceleration; bi-layered parallel computing; Big data; Computational modeling; Computer architecture; convolutional neural networks; deep learning; distributed computing; Parallel processing; Task analysis; Training