Author | Commit | Message | Date
yangzhenzhang | 4750861054 | fix layernorm bug | 6 years ago
yao_yf | 425276d43d | auto parallel prelu support prelu | 6 years ago
ch-l | f806b72447 | use DeviceMemory for memory control | 6 years ago
zhoufeng | c2b3360d69 | update clang format rule | 6 years ago
ougongchang | 0ed6d9178e | add Histogram summary operator; clean clang format errors and cpplint errors; add some test cases for histogram summary operator | 6 years ago
mindspore-ci-bot | 6e183fcc0f | !385 [Auto parallel] Adjusting backward phase communication cost of some operators; Merge pull request !385 from Xiaoda/modify-communicaiton-cost-of-operators-and-redistribution | 6 years ago
Xiaoda Zhang | 79de8f4bdf | Adjusting backward communication cost of some operators | 6 years ago
yangzhenzhang | 36ffb66782 | add parallel op for square | 6 years ago
yangzhenzhang | 57cd9f8188 | add parallel op for sigmoidloss | 6 years ago
yangzhenzhang | 6d522f0a4f | add parallel op for layernorm | 6 years ago
mindspore-ci-bot | 2961c6bc59 | !349 fix coding style check warning for auto parallel; Merge pull request !349 from chentingting/fix_coding_style_check_warning | 6 years ago
c00425699 | 8765810528 | fix_coding_style_check_warning | 6 years ago
Xiaoda Zhang | ffb2cb03a4 | Change 'NOT_FULLY_USE_DEVICES' to 'FULLY_USE_DEVICES' and make ALL-1 user-specified-strategy valid in auto-parallel | 6 years ago
Xiaoda Zhang | 0ac50a19f5 | Model the memory cost in auto-parallel. It is calculated by the output of operators, plus the parameters. Additionally, modify the graph-operations in auto_parallel to include memory_cost. | 6 years ago
mindspore-ci-bot | 39a46b9342 | !245 Add bool type check in communication operator; Merge pull request !245 from chentingting/add_bool_type_check_in_comm_op | 6 years ago
c00425699 | d62f560b50 | add_bool_type_check_in_comm_op | 6 years ago
lichenever | b81cc6ea4f | add minimum distributed op | 6 years ago
mindspore-ci-bot | 7bc2cee318 | !167 add_squeeze_distributed_op; Merge pull request !167 from lichen/add_squeeze_distributed_op | 6 years ago
c00425699 | c8cdb6b331 | support distributed GatherV2 operator | 6 years ago
buxue | 5841fe010e | Support pow's second input could be tensor and fix bug in bprop of pow | 6 years ago
lichenever | 32cd280c1a | add squeeze distributed op | 6 years ago
mindspore-ci-bot | 5141054ecd | !220 Add parallel operator for DropoutDoMask; Merge pull request !220 from yangzhenzhang/dropoutdomask | 6 years ago
chenzomi | e09f220f17 | fix compile bug in clang | 6 years ago
yangzhenzhang | b34c0e7a17 | add parallel op for dropoutdomask | 6 years ago
c00425699 | b413638f23 | refactor OperatorCostPtr in OperatorInfo | 6 years ago
mindspore-ci-bot | 2e6e94b2b6 | !177 prelu operator support parallel on the channel; Merge pull request !177 from yao_yf/fix_auto_parallel_prelu | 6 years ago
yao_yf | b5e3fa9593 | fix auto parallel prelu | 6 years ago
Xiaoda Zhang | a153fad874 | This commit is to separate the computation cost and memory cost in auto_parallel. Some related memory correction is removed. | 6 years ago
yangzhenzhang | dd0d4e6b84 | add parallel ops for expand dims | 6 years ago
lichenever | ff808021c7 | register not equal distributed op | 6 years ago
Su Teng | 60b68a1470 | sort include file in parallel dir | 6 years ago
yangzhenzhang | 110640e2ad | add parallel ops for neg and batchmatmul | 6 years ago
c00425699 | 3bb48ffee1 | use std::vector instead of std::list to promote performance for parallel module | 6 years ago
zhunaipan | 930a1fb0a8 | initial version; Signed-off-by: leonwanghui <leon.wanghui@huawei.com> | 6 years ago