ms_yan
36a8886ca2
Revert "[feat] [assistant] [I3T96T] add new Dataset operator CMUARCTICDataset"
This reverts commit b077aa1cab.
Revert "[feat] [assistant] [I3T96X] add new Dataset operator LibriSpeechDataset"
This reverts commit 4e6f7dc97d.
delete pass_registry_test.cc
comment out hiai_nlu_model_multi.pb-related line
4 years ago
djc
b077aa1cab
[feat] [assistant] [I3T96T] add new Dataset operator CMUARCTICDataset
4 years ago
djc
4e6f7dc97d
[feat] [assistant] [I3T96X] add new Dataset operator LibriSpeechDataset
4 years ago
i-robot
415275ae17
!21805 support adafactor model parallel
Merge pull request !21805 from yangzhenzhang/auto-parallel-support-adafactor-opt
4 years ago
i-robot
a77a0b968d
!21761 comm_recompute_interface
Merge pull request !21761 from yao_yf/comm_recompute_interface
4 years ago
yangzhenzhang
7ca64d2235
auto parallel support adafactor opt
4 years ago
yao_yf
5277b229be
add cell comm recompute interface
4 years ago
yao_yf
e233880e41
fix reshape depend reshape in auto parallel
4 years ago
yao_yf
1203413d32
PanGu train and eval
4 years ago
yao_yf
dc7dc7d3fa
dataset strategy set
4 years ago
Xiaoda Zhang
bb5d4212f7
enable All2All in inferring redistribution ops
4 years ago
lichenever
3c7cfb7c08
auto_parallel_support_control_flow
4 years ago
ZPaC
a9a0f590e6
Fix master static check
4 years ago
lichenever
f797190fb5
fix_Pipeline_parallel_bug
4 years ago
yao_yf
4aae231a8a
fix_pipeline_opt_shard
4 years ago
lichenever
cb91e606ac
fix_auto_parallel_bug
4 years ago
i-robot
a633405df1
!19448 strategy_ckpt_add_opt_param
Merge pull request !19448 from yao_yf/strategy_ckpt_add_opt_param
4 years ago
lichenever
108967ff7d
fix_pipeline_split_bug
4 years ago
yao_yf
d02a0d7914
strategy ckpt add opt param
4 years ago
lichenever
db8850a4a3
pipeline_support_predict_master
4 years ago
Ziyan
be1f5a43d7
opt shard fit micro batch
4 years ago
yangzhenzhang
69acf757d0
add parallel op for conv2d backprop input
4 years ago
lichenever
744e4cbab5
change_pipeline_shared_param
4 years ago
yangzhenzhang
af0d28de48
add parallel op for batchnorm
4 years ago
lichenever
cb438ce350
rectification_log
4 years ago
lichenever
db5d508356
pipeline_split_adapt_master
4 years ago
Ziyan
88be613cdc
optimizer weight shard mixed precision optimization and finetune
4 years ago
yao_yf
66da2588db
parallel_replace_operator_set_attrs_fix
4 years ago
mindspore-ci-bot
1c8fda25ef
!16478 handle load op in step parallel
From: @gong_zi_yan
Reviewed-by: @yangzhenzhang,@stsuteng
Signed-off-by: @stsuteng
4 years ago
Ziyan
4b17493e52
handle load in step parallel
4 years ago
Ziyan
6b74db118d
fix param tensor layout when running opt shard with accu grad
4 years ago
Ziyan
2a752f24bf
enable not fully using opt shard
5 years ago
ZPaC
12f95b51f4
Add server code part 2
4 years ago
mindspore-ci-bot
a2a24f7833
!15810 fix gather_p_info judgement
From: @yao_yf
Reviewed-by: @stsuteng,@yangzhenzhang
Signed-off-by: @stsuteng
4 years ago
Ziyan
3a11b8b39c
fix accu grads shape when enabling opt shard
4 years ago
yao_yf
61ef56a26c
fix gather_p_info judgements
4 years ago
Xiaoda Zhang
5fecfe92a6
code style warnings fixing
4 years ago
yao_yf
093ef784de
don't insert virtual output for scalar
4 years ago
mindspore-ci-bot
3cfd58e8e0
!15643 insert virtual div only for first input of dropout do mask
From: @yangzhenzhang
Reviewed-by: @stsuteng,@kisnwang
Signed-off-by: @stsuteng
4 years ago
yangzhenzhang
5828973978
fix bug for dropout do mask
4 years ago
yao_yf
21276408b8
parallel virtual_out_ops
5 years ago
yao_yf
17354e3c4e
fix finding nodes with param
4 years ago
yangzhenzhang
bcd2ecc403
check layouts for shared parameter
5 years ago
yao_yf
d7641123bb
strategy_ckpt_file_adapt_optimizer_shard
5 years ago
yangzhenzhang
689e50a3d0
fix grad accu bug for unused parameter
5 years ago
mindspore-ci-bot
29bf2909b2
!13105 insert mirror before load
From: @yangzhenzhang
5 years ago
yangzhenzhang
6eadd241a0
insert mirror before load
5 years ago
Ziyan
4109308e34
insert parallel optimizer once
5 years ago
Ziyan
ec9793861f
fix grad accu
5 years ago
mindspore-ci-bot
7fcce73c51
!12700 add grad accumulation combined with optimizer parallel
From: @yangzhenzhang
5 years ago