ms_yan
36a8886ca2
Revert "[feat] [assistant] [I3T96T] add new Dataset operator CMUARCTICDataset"
This reverts commit b077aa1cab.
Revert "[feat] [assistant] [I3T96X] add new Dataset operator LibriSpeechDataset"
This reverts commit 4e6f7dc97d.
delete pass_registry_test.cc
comment hiai_nlu_model_multi.pb related line
4 years ago
djc
b077aa1cab
[feat] [assistant] [I3T96T] add new Dataset operator CMUARCTICDataset
4 years ago
djc
4e6f7dc97d
[feat] [assistant] [I3T96X] add new Dataset operator LibriSpeechDataset
4 years ago
i-robot
415275ae17
!21805 support adafactor model parallel
Merge pull request !21805 from yangzhenzhang/auto-parallel-support-adafactor-opt
4 years ago
yangzhenzhang
7ca64d2235
auto parallel support adafactor opt
4 years ago
yao_yf
5277b229be
add cell comm recompute interface
4 years ago
yao_yf
1203413d32
pangu train and eval
4 years ago
yao_yf
dc7dc7d3fa
dataset strategy set
4 years ago
i-robot
a633405df1
!19448 strategy_ckpt_add_opt_param
Merge pull request !19448 from yao_yf/strategy_ckpt_add_opt_param
4 years ago
lichenever
108967ff7d
fix_pipeline_split_bug
4 years ago
yao_yf
d02a0d7914
strategy ckpt add opt param
4 years ago
lichenever
db5d508356
pipeline_split_adapt_master
4 years ago
Ziyan
2a752f24bf
enable not fully use opt shard
5 years ago
Xiaoda Zhang
5fecfe92a6
code style warnings fixing
4 years ago
yao_yf
21276408b8
parallel virtual_out_ops
5 years ago
yao_yf
17354e3c4e
fix find nodes with param
5 years ago
yangzhenzhang
bcd2ecc403
check layouts for shared parameter
5 years ago
yangzhenzhang
a70d616841
mini step grad accumulation
5 years ago
yangzhenzhang
7303c3d3b8
add group ckpt
5 years ago
yangzhenzhang
9da3f9bec9
mini step grad accumulation
5 years ago
yao_yf
19fe28cb9b
change strategies of last nodes in eval/predict at auto parallel mode
5 years ago
Xiaoda Zhang
14d4926cf0
simplifying step-auto-parallel
5 years ago
lichenever
78e131cf15
pipeline_split adapt parallel
5 years ago
Yi Huaijie
d7faa77b5e
support int64 shape
5 years ago
lichenever
7c7006f347
fix bug if input not used
5 years ago
Ziyan
c33f2cd796
fix auto optimizer weight shard
5 years ago
yangzhenzhang
92d02b7aff
add recursion limit
5 years ago
yao_yf
65d8e63580
set last node data parallel or repeat calculate in eval/predict
5 years ago
Ziyan
069318899a
refactor get cnode strategy
5 years ago
Xiaoda Zhang
fba2bfeb54
overwrite strategies for star graph structure
5 years ago
Ziyan
ddc0113058
enable parallel optimizer in auto parallel
5 years ago
lichenever
d4bba3f1d2
fix_auto_parallel_find_loss_bug
5 years ago
lichenever
6b2a9de09f
fix auto parallel multigraph bug
5 years ago
yao_yf
05c003ae6b
origin/semi_auto_parallel_reshape_parameter_has_another_user
5 years ago
yangzhenzhang
fbda03bbcc
check parameter split
5 years ago
yao_yf
eeede168fa
wide_and_deep merge ckpt in eval
5 years ago
zhousiyi
d0e58dd765
remove ccsrc/common.h
replace frontend/operator/ops.h in backend with base/core_ops.h as
backend should not use any frontend-only primitive
5 years ago
yangzhenzhang
f4bb43bbaf
add concat op
5 years ago
yao_yf
60a9fb0001
add_tensor_layout_in_stra_ckpt
5 years ago
liubuyu
76dc80e7b7
Unified code style
5 years ago
liubuyu
43c79eb853
mindspore path adjust
5 years ago