yangzhenzhang | 6b54a6417d | ckpt and restore parameter shape | 6 years ago
kswang | 362bbacf19 | add group for allreduce fusion | 6 years ago
mindspore-ci-bot | e42631c127 | !1172 [AutoParallel] Elementwise operators implicit semantics handling by rec's parser (Merge pull request !1172 from Chong/support_squeeze_and_reduce) | 6 years ago
mindspore-ci-bot | d11dc8276d | !1181 fix gatherv2 replace graph in auto parallel (Merge pull request !1181 from yao_yf/fix_gather_v2_replace_graph) | 6 years ago
hongxing | 8f04adf1c3 | feature : eliminate graph | 6 years ago
yao_yf | 06d35d8d18 | fix gatherv2 replace graph in auto parallel | 6 years ago
mindspore-ci-bot | b124bf38a1 | !1152 [AutoParallel] dynamic output shape handling for Reduce series & Squeeze (Merge pull request !1152 from Chong/support_squeeze_and_reduce) | 6 years ago
hongxing | dc290d7959 | support squeeze and reduce op | 6 years ago
mindspore-ci-bot | a6546b80ba | !1147 INFO user when set_strategy not under [semi_]auto_parallel mode (Merge pull request !1147 from yihuaijie/master) | 6 years ago
Yi Huaijie | 75ca84d260 | INFO user when set_strategy not under [semi_]auto_parallel mode | 6 years ago
lichenever | debfd38b75 | fix gatherv2 and dataset bug | 6 years ago
Xiaoda Zhang | a05aa21cc2 | calculating PEAK memory cost in the inference phase | 6 years ago
mindspore-ci-bot | 7dc31684b6 | !1073 fix reshape tensor redistribution bug (Merge pull request !1073 from yao_yf/reshape_bug_fix) | 6 years ago
yao_yf | b0921c15e9 | xreshape tensor_redistrinution bug fix | 6 years ago
mindspore-ci-bot | 2af6ee2482 | !1054 Add slice shape for param info (Merge pull request !1054 from yangzhenzhang/add-slice-shape-for-param-info) | 6 years ago
fary86 | 3f323e48e2 | Add submodule id for log interface | 6 years ago
yangzhenzhang | 05fde3d23d | add slice shape for param info | 6 years ago
mindspore-ci-bot | 95d4665db9 | !1051 auto parallel reshape reconstruct (Merge pull request !1051 from yao_yf/auto_parallel_reshape_reconstruct) | 6 years ago
mindspore-ci-bot | 2dd97a0780 | !1028 [AutoParallel] refined memory check & added odd num tensor shape support (Merge pull request !1028 from Chong/memctl) | 6 years ago
yao_yf | 716def7c0a | move InferStraByTensorInfo to tensor_info.h | 6 years ago
mindspore-ci-bot | dd2062bf8d | !1023 add_gatherv2_distributed_op (Merge pull request !1023 from lichen/add_gatherv2_distributed_op) | 6 years ago
lichenever | 19a24b86ac | add gatherv2 distributed op | 6 years ago
yao_yf | f0bf438a55 | reshape strategy search | 6 years ago
sheng | 59bb014144 | refine mem ctl & odd num ctl & fuse str writeback | 6 years ago
yangzhenzhang | 8c9730b3c5 | add parallel mode for cell | 6 years ago
Xiaoda Zhang | def8573275 | implementing-searching-strategy-for-inference | 6 years ago
mindspore-ci-bot | eb76956dcb | !897 [AutoParallel] Adjustements w.r.t distributed execution (Merge pull request !897 from Chong/avril) | 6 years ago
ch-l | caac6bce5c | adjustements w.r.t. distributed execution | 6 years ago
yao_yf | 5a6540450e | use param name as the key of strategy checkpoint | 6 years ago
mindspore-ci-bot | 69ab46e624 | !727 [AutoParallel] complete cost for recursive programming (Merge pull request !727 from Chong/cost) | 6 years ago
mindspore-ci-bot | 21d936e656 | !728 auto parallel strategy checkpoint full (Merge pull request !728 from yao_yf/strategy_checkpoint_extend) | 6 years ago
ch-l | 309060b1c2 | complet cost models | 6 years ago
yao_yf | 6cde5f6d91 | auto parallel strategy checkpoint | 6 years ago
mindspore-ci-bot | 5b3327d103 | !746 reducescatter backforward operator (Merge pull request !746 from lirongzhen1/bp_reducescatter) | 6 years ago
lirongzhen1 | 0b4648881b | add reducescatter bprop | 6 years ago
mindspore-ci-bot | 63712848e2 | !494 Split ccsrc cmake to individual sub-directories (Merge pull request !494 from zhoufeng/cmake-sub) | 6 years ago
Xiaoda Zhang | e227415673 | support-the-multiple-subgraphs-in-the-ANF | 6 years ago
zhoufeng | b681cec8f2 | cmake refactor | 6 years ago
yangzhenzhang | 4750861054 | fix layernorm bug | 6 years ago
yangzhenzhang | 36a62576e8 | support forward graph | 6 years ago
mindspore-ci-bot | ce71c17933 | !645 auto parallel prelu operator support broadcast (Merge pull request !645 from yao_yf/auto_parallel_prelu_support_broadcast) | 6 years ago
mindspore-ci-bot | 84d5e4f923 | !643 [AutoParallel]Support reshape parameter (Merge pull request !643 from lichen/support_reshape_parameter) | 6 years ago
mindspore-ci-bot | 00859ae119 | !586 enable/disable allreduce_fusion (Merge pull request !586 from lirongzhen1/allreduce_fusion) | 6 years ago
lichenever | 2ab211ae04 | support reshape parameter | 6 years ago
yao_yf | 425276d43d | auto parallel prelu support prelu | 6 years ago
lirongzhen | 4ff418084c | enable/disable allreduce_fusion | 6 years ago
mindspore-ci-bot | 53d2da5fe4 | !264 static_analysis: remove useless cache in TrivialPrimEvaluator and add cache for PythonPrimEvaluator (Merge pull request !264 from xychow/remove-unnecessary-cache-and-add-cache) | 6 years ago
mindspore-ci-bot | ebc3f12b21 | !620 [Auto parallel] Fix the code-style warnings in parallel-mode (Merge pull request !620 from Xiaoda/fix-codex-bot-warnings) | 6 years ago
mindspore-ci-bot | 54e0fa5c09 | !556 [Auto Parallel] use DeviceMemory instead of fixed-size memory check (Merge pull request !556 from Chong/partition) | 6 years ago
Xiaoda Zhang | ec043fcd56 | fix the codex and bot warnings | 6 years ago