lichenever
3398c2d74b
remove_32byte_limit_for_cpu_gatherv2
5 years ago
lirongzhen1
40251d9578
configure auto parallel tensors shape
5 years ago
mindspore-ci-bot
512d8e8510
!2687 [CT][MS][Auto-Parallel]Double recursion does not support the gatherv2 operator
Merge pull request !2687 from Chong/zc
5 years ago
hongxing
9febf7fdf5
support GatherV2P
5 years ago
mindspore-ci-bot
ca1d2436b8
!2286 enable optimizer parallel with broadcast
Merge pull request !2286 from gziyan/optimizer_parallel
5 years ago
Ziyan
0925e35252
enable optimizer parallel with broadcast
5 years ago
yao_yf
65760a2f3c
reshape cost computing doesn't create hccl group
5 years ago
mindspore-ci-bot
a32d674cc5
!2569 [CT][ME][parallel] One-hot runs failed in RP-search mode
Merge pull request !2569 from Chong/zc
5 years ago
hongxing
1767c6386d
fix cppcheck error
5 years ago
mindspore-ci-bot
7345d7471b
!2530 [CT][ME][parallel] One-hot runs failed in RP-search mode
Merge pull request !2530 from Chong/zc
5 years ago
mindspore-ci-bot
a4fbdc3f69
!2245 Decouple ir::Tensor from python.
Merge pull request !2245 from hewei/decouple_tensor
5 years ago
mindspore-ci-bot
0a368494db
!2499 HostAllGather and HostReduceScatter change to internal interface
Merge pull request !2499 from yihuaijie/master
5 years ago
He Wei
43e0967024
Decouple ir::Tensor class from python
5 years ago
hongxing
7029bc5dd3
fix onehot axis
5 years ago
mindspore-ci-bot
890e4cbf4a
!2458 don't create hccl group in auto parallel strategy search
Merge pull request !2458 from yao_yf/log_fix
5 years ago
yao_yf
ad06ab4049
don't create hccl group in auto parallel strategy search
5 years ago
Yi Huaijie
2eb739de6e
change HostAllGather and HostReduceScatter to internal interface
5 years ago
mindspore-ci-bot
5b14292f69
!2140 Implementation of mindspore debugger
Merge pull request !2140 from ShidaHe/debugger_dev
5 years ago
mindspore-ci-bot
4c269d0702
!2474 [CT][ME] L2-norm runs failed in RP-search mode
Merge pull request !2474 from Chong/zc
5 years ago
Shida He
4c056855e0
Implementation for mindspore debugger
5 years ago
hongxing
8e20d4d84e
fix l2normalize/prelu/softmax cost
5 years ago
mindspore-ci-bot
f975963a58
!2376 [CT][ME][parallel] fixed One-hot runs failed in RP-search mode.
Merge pull request !2376 from Chong/zc
5 years ago
mindspore-ci-bot
932b7649e7
!2241 Adapting operator named AccumulateNV2
Merge pull request !2241 from zhangzheng/accumulate
5 years ago
zhangz0911gm
4ac1876237
Adapting AccumulateNV2
6 years ago
Xiaoda Zhang
3ff6e336c6
check cast from optimizer in auto-parallel
5 years ago
gong chen
a6dfa281ea
Init GraphKernel.
- It provides a unified style to express graph and kernel for user.
- It provides a unified IR to represent graph and kernel for developer.
- It breaks the boundary between graph and kernel.
- It provides more opportunities to do compile optimization.
5 years ago
hongxing
948ea950af
add BatchParallel cost
5 years ago
mindspore-ci-bot
a663f2066c
!2285 [Code Review] code review fix
Merge pull request !2285 from jjfeing/master
5 years ago
lichenever
563622874a
update
5 years ago
jjfeing
c26274f324
fix code review bug
5 years ago
mindspore-ci-bot
ff0590315c
!2030 [AutoParallel] use replacement instead of recreation for edges in rec prog parse
Merge pull request !2030 from Chong/ReID
5 years ago
hongxing
39790ccf66
Optimize code
5 years ago
Yi Huaijie
7857d59c82
dropout do mask: only replace the first input of dropout_gen_mask of the subgraph instead of the whole subgraph
5 years ago
suteng
da586a6177
Revert 'Pull Request !2078 : replace first input of dropout_gen_mask of the subgraph instead of the whole sub graph'
5 years ago
mindspore-ci-bot
b1ff4c15c2
!2078 replace the first input of dropout_gen_mask of the subgraph instead of the whole subgraph
Merge pull request !2078 from yihuaijie/dev
5 years ago
Yi Huaijie
6c85fc9f9f
dropout do mask: only replace the first input of dropout_gen_mask of the subgraph instead of the whole subgraph
5 years ago
jjfeing
caab25e09b
tbe select broadcast reduce dynamic
5 years ago
lichenever
e0e055a0b8
add sparse gatherv2
5 years ago
Yi Huaijie
e5c351690b
support load full dataset on each device
5 years ago
mindspore-ci-bot
41198aa21c
!1954 [Auto parallel] Fix some codestyle warnings in parallel module
Merge pull request !1954 from Xiaoda/1-fix-some-codestyle-warnings-in-parallel-module
5 years ago
mindspore-ci-bot
7b7932ced9
!1944 move hook function to PrimitivePy class
Merge pull request !1944 from wangqiuliang/move-hook-function-to-primitivepy
5 years ago
mindspore-ci-bot
c1c683eea8
!1938 [AutoParallel] limit GatherV2, BN and Softmax to data parallel
Merge pull request !1938 from Chong/ReID
5 years ago
hongxing
bee57fda66
support GatherV2 + Depend
5 years ago
Xiaoda Zhang
9af82ace42
fix some codestyle warnings
5 years ago
kingfo
38436f929f
move hook function to PrimitivePy class
5 years ago
mindspore-ci-bot
f6b5b2732f
!1892 [AutoParallel] limit partition dimension to adapt new HCCL's constraint
Merge pull request !1892 from Chong/ReID
5 years ago
hongxing
158495d43a
hccl patch + update ConstructNodes + support Softmax
5 years ago
mindspore-ci-bot
4a29f2733b
!1743 [AutoParallel] take account of tuple input to prevent sub-graph isolation
Merge pull request !1743 from Chong/ReID
5 years ago
hongxing
2031710d95
fix bug and optimize code
5 years ago
Xiaoda Zhang
1cfb52bc0e
add the reshape part of the embeddinglookup backward operator
5 years ago