Xiaoda Zhang
04db51a528
In a previous PR ( https://gitee.com/mindspore/mindspore/pulls/26807/ ), we replaced 'auto_parallel_search_mode' with 'search_mode' directly.
However, to remain backward compatible, it is better to keep 'auto_parallel_search_mode' available. This PR restores the 'auto_parallel_search_mode' interface and adds a warning when the old interface is used.
This PR also addresses other code-style issues.
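The deprecation warning this commit describes can be sketched as a config-key alias map that rewrites the old key and warns; the names below are illustrative stand-ins, not MindSpore's actual implementation.

```python
import warnings

# Hypothetical sketch: a deprecated config key aliased to its new name.
# '_KEY_ALIASES' and 'set_parallel_config' are illustrative names only.
_KEY_ALIASES = {"auto_parallel_search_mode": "search_mode"}

def set_parallel_config(**kwargs):
    config = {}
    for key, value in kwargs.items():
        if key in _KEY_ALIASES:
            new_key = _KEY_ALIASES[key]
            # Warn, then store under the new key so both spellings behave alike.
            warnings.warn(
                f"'{key}' is deprecated; use '{new_key}' instead.",
                DeprecationWarning,
            )
            key = new_key
        config[key] = value
    return config
```

With this pattern the old interface keeps working, while callers see a one-time nudge toward the new name.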
4 years ago
Xiaoda Zhang
ad5ac77ae8
1) 'auto_parallel_search_mode' changes to 'search_mode';
2) 'sharding_propagation' moves to 'search_mode';
4 years ago
huangxinjing
f354ab22a3
Add pipeline shard interface
Add support for gradient accumulation without pipeline
Add delay tag for fusion op
Optimize the visit order
Add mirror for mini-step control
Move the group to attributes
Add gradient_shard control for the mini step
Fix code style
Fix UT description
Add interface
4 years ago
b00518648
ea50695cae
pclint
4 years ago
yao_yf
dc7dc7d3fa
set dataset strategy
4 years ago
Xiaoda Zhang
bb5d4212f7
enable All2All in inferring redistribution ops
4 years ago
Xiaoda Zhang
04381273b3
Add the sharding propagation function:
1) users configure sharding strategies for operators;
2) the framework propagates the strategies from configured ops to
non-configured ops using BFS;
3) the propagation goal is to minimize the redistribution communication
cost;
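The three steps above can be sketched as a BFS over the operator graph; this is an illustrative simplification (graph and cost model are stand-ins), not MindSpore's actual propagation pass.

```python
from collections import deque

# Illustrative sketch of sharding-strategy propagation via BFS.
# 'graph' maps each op to its neighboring ops; 'configured' maps
# user-configured ops to their sharding strategies.
def propagate_strategies(graph, configured):
    strategies = dict(configured)
    queue = deque(configured)
    while queue:
        op = queue.popleft()
        for nbr in graph.get(op, []):
            if nbr not in strategies:
                # Inherit the nearest configured neighbor's strategy, which
                # tends to avoid redistribution between adjacent ops.
                strategies[nbr] = strategies[op]
                queue.append(nbr)
    return strategies
```

Since BFS reaches each unconfigured op from its closest configured op first, adjacent ops usually end up with matching strategies, keeping redistribution communication low.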
5 years ago
yao_yf
d2dc22ff71
parallel_weight_init_adapt_pipeline_increment_predict
4 years ago
kswang
3fbd8ce783
fix get allreduce fusion bug
4 years ago
i-robot
ada0d2ac96
!17721 optimize hccl group
Merge pull request !17721 from kisnwang/optimize-hccl-group
4 years ago
i-robot
b79400296a
!17959 fix optimizer weight shard config
Merge pull request !17959 from gziyan/add_optimizer_weight_shard_config
4 years ago
kswang
1d55f72bea
optimize exec order only for one hccl group
4 years ago
kswang
8aa0450b8d
set dtype for allreduce fusion
4 years ago
Ziyan
95ac0f6d58
fix optimizer weight shard config
4 years ago
Ziyan
2a752f24bf
enable partial use of optimizer weight sharding
5 years ago
mindspore-ci-bot
7ba21f8d8c
!12900 Add communication parallel mode.
From: @liujunzhu
Reviewed-by: @zhoufeng54,@guoqi1024
Signed-off-by: @guoqi1024
5 years ago
liujunzhu
6541b96c40
Add communication parallel mode.
5 years ago
Ziyan
ec9793861f
fix grad accu
5 years ago
yangzhenzhang
a70d616841
mini step grad accumulation
5 years ago
yangzhenzhang
7303c3d3b8
add group ckpt
5 years ago
yangzhenzhang
9da3f9bec9
mini step grad accumulation
5 years ago
yangzhenzhang
278e82a849
update pipeline parallel
5 years ago
Yi Huaijie
d7faa77b5e
support int64 shape
5 years ago
Xiaoda Zhang
fba2bfeb54
overwrite strategies for star graph structure
5 years ago
huangxinjing
4ef439e27b
Add stage information for ops and strategy
5 years ago
huangxinjing
8ba1503135
Add default value for auto search parallel mode
5 years ago
yao_yf
d4cfe55c04
rename mirror_mean to gradients_mean
5 years ago
yao_yf
8f7aa5bd5a
auto parallel context modify
5 years ago
Yi Huaijie
394be43492
raise RuntimeError when setting a different mode after an Initializer is created
5 years ago
Yi Huaijie
89a4ebf8a1
parallel mode must be set before creating an initializer
5 years ago
zhoufeng
663278112f
optimize code compile performance
Signed-off-by: zhoufeng <zhoufeng54@huawei.com>
5 years ago
liubuyu
d81862a916
decoupling core and context
5 years ago
Yi Huaijie
80bdcab982
temporarily cast between int64 and int32 to wait ME support int64
5 years ago
Yi Huaijie
518cb80133
change type of Shape from int32 to int64
5 years ago
suteng
19e45ccdb1
Revert 'Pull Request !3103 : change type of Shape from int32 to int64'
5 years ago
Yi Huaijie
15d5cc396d
change type of Shape from int32 to int64
5 years ago
Ziyan
98e2ee90de
fix optimizer parallel problems
5 years ago
lichenever
e712c6cfe5
auto parallel: support dataset on GPU
5 years ago
liubuyu
43c79eb853
mindspore path adjust
5 years ago