Commit      Author            Age          Message
----------  ----------------  -----------  -----------------------------------------------------------
ebc3f12b21  mindspore-ci-bot  5 years ago  !620 [Auto parallel] Fix the code-style warnings in parallel-mode (Merge pull request !620 from Xiaoda/fix-codex-bot-warnings)
54e0fa5c09  mindspore-ci-bot  5 years ago  !556 [Auto Parallel] use DeviceMemory instead of fixed-size memory check (Merge pull request !556 from Chong/partition)
ec043fcd56  Xiaoda Zhang      5 years ago  fix the codex and bot warnings
f806b72447  ch-l              5 years ago  use DeviceMemory for memory control
c78630d737  lichenever        6 years ago  support multiple subgraphs
c2b3360d69  zhoufeng          5 years ago  update clang format rule
46acf23825  mindspore-ci-bot  5 years ago  !405 [AutoParallel] Adapte rec-prog generator to new parser (Merge pull request !405 from Chong/str-gen)
5b6b1ad727  mindspore-ci-bot  5 years ago  !394 [AutoParallel] Simplify rec-prog parser mechanism (Merge pull request !394 from Chong/parser)
c71234f383  ch-l              6 years ago  improve rec-prog str generator
0cdb1171d5  mindspore-ci-bot  6 years ago  !87 Take AllToAll as a virtual operator in cost model (Merge pull request !87 from gziyan/dev_alltoall)
0d208e00bd  Ziyan             6 years ago  Model ALLTOALL as a single operator in cost model; scale the ALLTOALL, ALLGATHER, and REDUCESCATTER with different factors; change the BETA and GAMMA value in cost model.
b1f5e44cd4  Chong             6 years ago  improve parser
0ed6d9178e  ougongchang       6 years ago  add Histogram summary operator; clean clang format errors and cpplint errors; add some test cases for histogram summary operator
6e183fcc0f  mindspore-ci-bot  6 years ago  !385 [Auto parallel] Adjusting backward phase communication cost of some operators (Merge pull request !385 from Xiaoda/modify-communicaiton-cost-of-operators-and-redistribution)
79de8f4bdf  Xiaoda Zhang      6 years ago  Adjusting backward communication cost of some operators
36ffb66782  yangzhenzhang     6 years ago  add parallel op for square
57cd9f8188  yangzhenzhang     6 years ago  add parallel op for sigmoidloss
6d522f0a4f  yangzhenzhang     6 years ago  add parallel op for layernorm
b2b3e24a8e  mindspore-ci-bot  6 years ago  !329 [MS]support building on windows 10 (Merge pull request !329 from chenjianping/building-on-windows-2)
1286767d0e  chenjianping      6 years ago  support building on windows
2961c6bc59  mindspore-ci-bot  6 years ago  !349 fix coding style check warning for auto parallel (Merge pull request !349 from chentingting/fix_coding_style_check_warning)
8765810528  c00425699         6 years ago  fix_coding_style_check_warning
ffb2cb03a4  Xiaoda Zhang      6 years ago  Change 'NOT_FULLY_USE_DEVICES' to 'FULLY_USE_DEVICES' and make ALL-1 user-specified-strategy valid in auto-parallel
0ac50a19f5  Xiaoda Zhang      6 years ago  Model the memory cost in auto-parallel. It is calculated by the output of operators, plus the parameters. Additionally, modify the graph-operations in auto_parallel to include memory_cost.
39a46b9342  mindspore-ci-bot  6 years ago  !245 Add bool type check in communication operator (Merge pull request !245 from chentingting/add_bool_type_check_in_comm_op)
77725e81a4  mindspore-ci-bot  6 years ago  !258 add_minimum_distributed_op (Merge pull request !258 from lichen/add_minimum_distributed_op)
2fecdede6b  Wei Luning        6 years ago  support amp when model eval, fix example of UnsortSegmentsSum
d62f560b50  c00425699         6 years ago  add_bool_type_check_in_comm_op
b81cc6ea4f  lichenever        6 years ago  add minimum distributed op
7bc2cee318  mindspore-ci-bot  6 years ago  !167 add_squeeze_distributed_op (Merge pull request !167 from lichen/add_squeeze_distributed_op)
c8cdb6b331  c00425699         6 years ago  support distributed GatherV2 operator
5841fe010e  buxue             6 years ago  Support pow's second input could be tensor and fix bug in bprop of pow
32cd280c1a  lichenever        6 years ago  add squeeze distributed op
5141054ecd  mindspore-ci-bot  6 years ago  !220 Add parallel operator for DropoutDoMask (Merge pull request !220 from yangzhenzhang/dropoutdomask)
e09f220f17  chenzomi          6 years ago  fix complite bug in clang
b34c0e7a17  yangzhenzhang     6 years ago  add parallel op for dropoutdomask
b413638f23  c00425699         6 years ago  refactor OperatorCostPtr in OperatorInfo
2e6e94b2b6  mindspore-ci-bot  6 years ago  !177 prelu operator support parallel on the channel (Merge pull request !177 from yao_yf/fix_auto_parallel_prelu)
b5e3fa9593  yao_yf            6 years ago  fix auto parallel prelu
a153fad874  Xiaoda Zhang      6 years ago  This commit is to separate the computation cost and memory cost in auto_parallel. Some related memory correction is removed.
dd0d4e6b84  yangzhenzhang     6 years ago  add parallel ops for expand dims
a5a904fbdf  mindspore-ci-bot  6 years ago  !91 fix bug for allreduce fusion and add resnet unit test (Merge pull request !91 from chentingting/allreduce_fusion_resnet)
ff808021c7  lichenever        6 years ago  register not equal distributed op
ab917a734d  c00425699         6 years ago  fix bug for allreduce fusion and add resnet unit test
d8b460c780  mindspore-ci-bot  6 years ago  !96 fix refkey bug for auto parallel (Merge pull request !96 from lichen/fix_ref_key_bug_for_auto_parallel)
5240b1f603  lichenever        6 years ago  fix refkey bug for auto parallel
60b68a1470  Su Teng           6 years ago  sort include file in parallel dir
3d35792877  Xiaoda Zhang      6 years ago  change_star_elimination: make the non-identity triangle_eliminatin exact
22a9c00bcd  mindspore-ci-bot  6 years ago  !57 Add parallel operators for Neg and BatchMatMul (Merge pull request !57 from yangzhenzhang/master)
569e5b056e  mindspore-ci-bot  6 years ago  !53 [Auto parallel] Remove the redundant work in star elimination (Merge pull request !53 from Xiaoda/modify_star_and_triangle_elimination)