mindspore-ci-bot | a8738e6038 | !8997 close strategy ckpt stream (From: @yao_yf; Reviewed-by: @stsuteng,@kisnwang; Signed-off-by: @stsuteng) | 5 years ago
yao_yf | e5efdb9571 | close_strategy_ckpt_stream | 5 years ago
ZPaC | db3a2d60cb | GPU supports p2p nccl interfaces | 5 years ago
mindspore-ci-bot | 1ee1916828 | !8900 [AutoParallel]Add receive eliminate pass (From: @lichen666; Reviewed-by: @yangzhenzhang,@chenfei52,@stsuteng; Signed-off-by: @stsuteng) | 5 years ago
mindspore-ci-bot | 729801f8fa | !8851 [AutoParallel] restrain the parallelizable dimension of the Softmax operator (From: @ch-l; Reviewed-by: @stsuteng; Signed-off-by: @stsuteng) | 5 years ago
lichenever | ee34ae9259 | add_receive_eliminate_pass | 5 years ago
sheng | 02a08227f6 | add axis-related op strategy control | 5 years ago
lichenever | ee2478c05d | change send_recv to inner | 5 years ago
huangxinjing | 89e7778497 | Add UnsortedSegmentMax Operation | 5 years ago
huangxinjing | 8129806475 | Add slice parallel op | 5 years ago
mindspore-ci-bot | ee72de1db2 | !8441 Add parallel implementation of UniformCandidateSampler (From: @huangxinjing) | 5 years ago
yangzhenzhang | 278e82a849 | update pipeline parallel | 5 years ago
huangxinjing | 2730cef047 | Uniform Sampler Base Update | 5 years ago
mindspore-ci-bot | e1cfeeb1dd | !8644 update getting device list in parallel ops (From: @yangzhenzhang; Reviewed-by: @stsuteng,@kisnwang; Signed-off-by: @stsuteng) | 5 years ago
mindspore-ci-bot | 56a731e48a | !8406 [AutoParallel]Add auto parallel pipeline (From: @lichen666) | 5 years ago
lichenever | 2e1c43483e | add auto parallel pipeline | 5 years ago
mindspore-ci-bot | 610f06b92d | !8642 Keep debug info. and trace info. after Grad Operation. (From: @zh_qh) | 5 years ago
Zhang Qinghua | bcfa1f72b1 | Add debug and trace info for grad operation. | 5 years ago
yangzhenzhang | d4d6c4beae | update get device list in parallel ops | 5 years ago
mindspore-ci-bot | 2bf165c0b4 | !8557 run cast before allgather in parallel optimizer (From: @gong_zi_yan) | 5 years ago
yangzhenzhang | 0c2c76d037 | update get rank in parallel ops | 5 years ago
mindspore-ci-bot | 2c7eb58752 | !8505 adapt implicit conversion to the change that python int is resolved as int64 instead of int32 (From: @zhangbuxue; Reviewed-by: @chenfei52,@zh_qh; Signed-off-by: @zh_qh) | 5 years ago
Ziyan | 0ddb754edb | run cast before parallel optimizer | 5 years ago
mindspore-ci-bot | 7aa7ece5ce | !8497 add stage id for pipeline parallel (From: @yangzhenzhang; Reviewed-by: @stsuteng,@kisnwang; Signed-off-by: @stsuteng) | 5 years ago
buxue | ecf0ec46fa | adapt implicit conversion to the change that python int is resolved as int64 instead of int32 | 5 years ago
yangzhenzhang | 41d925b68a | set stage id | 5 years ago
huangxinjing | 3e9fac7f59 | Fix repeat error | 5 years ago
Ziyan | 17f2e6e756 | fix api error and get next info | 5 years ago
mindspore-ci-bot | 4e07f43dff | !8323 [Auto parallel] Supporting for-loop in strategy-searching (From: @xiaoda_zh) | 5 years ago
mindspore-ci-bot | 5a4af56c15 | !8322 add parallel op for Range (From: @yangzhenzhang; Reviewed-by: @xiaoda_zh,@cristoval) | 5 years ago
Xiaoda Zhang | aa13d6b1cd | support for-loop in auto-parallel | 5 years ago
mindspore-ci-bot | 244b7034e8 | !7926 [ME][OptPass]fix bug when eliminating unused parameter in 'inline' pass (From: @chenfei52) | 5 years ago
yangzhenzhang | 9747bde861 | add range op | 5 years ago
chenfei | e41c304b3e | dump ir of subpass of every substitution; add cache for transformed graph in inline | 5 years ago
mindspore-ci-bot | dbcdda18ed | !8293 Resolve specialize error during transforming after block in PyNative mode. (Merge pull request !8293 from 张清华/grad_opt2) | 5 years ago
Zhang Qinghua | 4e6e68f187 | Resolve specialize error during transforming after block in PyNative mode. | 5 years ago
yangzhenzhang | 0a79ab82ae | add parallel ops | 5 years ago
Yi Huaijie | d7faa77b5e | support int64 shape | 5 years ago
Xiaoda Zhang | aa84484049 | enabling approximation in DP algorithms | 5 years ago
mindspore-ci-bot | 9f4ea4bbb0 | !8223 [AutoParallel]fix bug if input is not used (Merge pull request !8223 from lichen/fix_bug_if_input_not_used) | 5 years ago
lichenever | 7c7006f347 | fix bug if input not used | 5 years ago
mindspore-ci-bot | e62c116277 | !8163 Add irpass related to PrimTile. (Merge pull request !8163 from huangbingjian/add_irpass_tile) | 5 years ago
mindspore-ci-bot | a07250e8b0 | !8049 [ME][OptPass]fix bug when eliminating tuple getitem after switch (Merge pull request !8049 from chenfei_mindspore/fix-bug-of-corporate-switch-pass) | 5 years ago
HuangBingjian | 0951cd6230 | add IR pass {PrimTile, X, Empty} -> X | 5 years ago
mindspore-ci-bot | e909b9077c | !7946 fix auto parallel optimizer weight shard (Merge pull request !7946 from gziyan/fix_auto_optim_shard) | 5 years ago
mindspore-ci-bot | 33aa2ae16b | !8063 fix ReLUV2 mask error (Merge pull request !8063 from yihuaijie/dev) | 5 years ago
Ziyan | c33f2cd796 | fix auto optimizer weight shard | 5 years ago
yao_yf | e76c8b708d | auto_parallel_dynamic_shape_supplements | 5 years ago
Yi Huaijie | 94f9f0ef64 | fix reluv2 mask error | 5 years ago
chenfei | 5d0b3597ff | incorporate switch pass should handle multiple getitem | 5 years ago