Xiaoda Zhang | 3ff6e336c6 | check cast from optimizer in auto-parallel | 5 years ago
mindspore-ci-bot | a663f2066c | !2285 [Code Review] code review fix; Merge pull request !2285 from jjfeing/master | 5 years ago
lichenever | 563622874a | update | 5 years ago
jjfeing | c26274f324 | fix code review bug | 5 years ago
Yi Huaijie | 7857d59c82 | dropout do mask only replace first input of dropout_gen_mask of the subgraph instead of the whole sub graph | 5 years ago
suteng | da586a6177 | Revert 'Pull Request !2078 : replace first input of dropout_gen_mask of the subgraph instead of the whole sub graph' | 5 years ago
mindspore-ci-bot | b1ff4c15c2 | !2078 replace first input of dropout_gen_mask of the subgraph instead of the whole sub graph; Merge pull request !2078 from yihuaijie/dev | 5 years ago
Yi Huaijie | 6c85fc9f9f | dropout do mask only replace first input of dropout_gen_mask of the subgraph instead of the whole sub graph | 5 years ago
jjfeing | caab25e09b | tbe select broadcast reduce dynamic | 5 years ago
lichenever | e0e055a0b8 | add sparse gatherv2 | 5 years ago
Yi Huaijie | e5c351690b | support load full dataset on each device | 5 years ago
kingfo | 38436f929f | move hook function to primtivePy class | 5 years ago
Xiaoda Zhang | 1cfb52bc0e | add the reshape part of the embeddinglookup backward operator | 5 years ago
mindspore-ci-bot | d5f55f0820 | !1769 [AutoParallel]GatherV2_support_host_device; Merge pull request !1769 from lichen/gatherv2_support_host_and_device | 5 years ago
BowenK | 96379faa3a | Remove ZerosLikeTensor and sub with ZerosLike | 5 years ago
lichenever | 1437966c98 | gatherv2_support_host_and_device | 5 years ago
yangzhenzhang | 19bd830539 | support forward reduce scatter for matmul | 5 years ago
mindspore-ci-bot | f523a0f83c | !1600 [AutoParallel]Fix GatherV2 bug; Merge pull request !1600 from lichen/fix_auto_parallel_gatherv2_bug | 5 years ago
lichenever | c223fde566 | fix auto parallel gatherv2 bug | 5 years ago
leopz | 4508134ceb | add tensor_minnie and separate py from ir | 5 years ago
yangzhenzhang | 7c237620ba | add sigmoid op | 5 years ago
mindspore-ci-bot | e87e6b38b0 | !1355 [AutoParallel]Fix GatherV2 distributed op; Merge pull request !1355 from lichen/fix_gatherv2 | 5 years ago
lichenever | 390a86effb | fix gatherv2 | 5 years ago
Xiaoda Zhang | 9f4b8a3cd1 | changing the successive edges order in GetAliveSuccEdges() so that Triangle and Star Elimination can be merged into particular node; adding some check information | 6 years ago
mindspore-ci-bot | d11dc8276d | !1181 fix gatherv2 replace graph in auto parallel; Merge pull request !1181 from yao_yf/fix_gather_v2_replace_graph | 6 years ago
yao_yf | 06d35d8d18 | fix gatherv2 replace graph in auto parallel | 6 years ago
mindspore-ci-bot | b124bf38a1 | !1152 [AutoParallel] dynamic output shape handling for Reduce series & Squeeze; Merge pull request !1152 from Chong/support_squeeze_and_reduce | 6 years ago
hongxing | dc290d7959 | support squeeze and reduce op | 6 years ago
Yi Huaijie | 75ca84d260 | INFO user when set_strategy not under [semi_]auto_parallel mode | 6 years ago
Xiaoda Zhang | a05aa21cc2 | calculating PEAK memory cost in the inference phase | 6 years ago
yao_yf | b0921c15e9 | reshape tensor_redistribution bug fix | 6 years ago
yao_yf | 716def7c0a | move InferStraByTensorInfo to tensor_info.h | 6 years ago
mindspore-ci-bot | dd2062bf8d | !1023 add_gatherv2_distributed_op; Merge pull request !1023 from lichen/add_gatherv2_distributed_op | 6 years ago
lichenever | 19a24b86ac | add gatherv2 distributed op | 6 years ago
yao_yf | f0bf438a55 | reshape strategy search | 6 years ago
Xiaoda Zhang | def8573275 | implementing-searching-strategy-for-inference | 6 years ago
yao_yf | 5a6540450e | use param name as the key of strategy checkpoint | 6 years ago
yangzhenzhang | 4750861054 | fix layernorm bug | 6 years ago
yao_yf | 425276d43d | auto parallel prelu support prelu | 6 years ago
ch-l | f806b72447 | use DeviceMemory for memory control | 6 years ago
zhoufeng | c2b3360d69 | update clang format rule | 6 years ago
ougongchang | 0ed6d9178e | add Histogram summary operator; clean clang format errors and cpplint errors; add some test cases for histogram summary operator | 6 years ago
mindspore-ci-bot | 6e183fcc0f | !385 [Auto parallel] Adjusting backward phase communication cost of some operators; Merge pull request !385 from Xiaoda/modify-communicaiton-cost-of-operators-and-redistribution | 6 years ago
Xiaoda Zhang | 79de8f4bdf | Adjusting backward communication cost of some operators | 6 years ago
yangzhenzhang | 36ffb66782 | add parallel op for square | 6 years ago
yangzhenzhang | 57cd9f8188 | add parallel op for sigmoidloss | 6 years ago
yangzhenzhang | 6d522f0a4f | add parallel op for layernorm | 6 years ago
mindspore-ci-bot | 2961c6bc59 | !349 fix coding style check warning for auto parallel; Merge pull request !349 from chentingting/fix_coding_style_check_warning | 6 years ago
c00425699 | 8765810528 | fix_coding_style_check_warning | 6 years ago
Xiaoda Zhang | ffb2cb03a4 | Change 'NOT_FULLY_USE_DEVICES' to 'FULLY_USE_DEVICES' and make ALL-1 user-specified-strategy valid in auto-parallel | 6 years ago