Xiaoda Zhang
364858cbc9
In sharding propagation, to keep the strategy of a parameter shared by multiple operators consistent, we check edges whose one endpoint is a TmpIdentityInfo node
4 years ago
Xiaoda Zhang
6d7eaea884
1) fix the int64_t/size_t mixup; 2) avoid using <= as a comparator in std::sort
4 years ago
i-robot
46e53a51c9
!26570 [Auto-par][d-rec] Change Onehot OP type to increase partitioning quality
Merge pull request !26570 from petitquentin/Gather_version_update
4 years ago
haoran.wang
2127c6411e
Modify OneHot as eliminated op in D-Rec
4 years ago
i-robot
7559d5b798
!26494 [Auto parallel] Adjusting sharding propagation
Merge pull request !26494 from Xiaoda/102-adjusting-sharding-propagation
4 years ago
i-robot
181b084955
!26397 [Auto-Par][D-rec]reduce cyclomatic complexity
Merge pull request !26397 from Hongxing/Gather_version_update
4 years ago
i-robot
8931e1806e
!26462 [Auto parallel] Fix some codestyle warnings
Merge pull request !26462 from Xiaoda/104-fix-some-codestyles-on-r1.5
4 years ago
Xiaoda Zhang
df67e74eaf
Make sharding_propagation smooth and add a reshape justification:
1) when propagating a sharding strategy from one op to another, try to find a strategy with zero communication cost;
2) if no such strategy exists, pick the strategy with minimum communication cost and raise a warning;
4 years ago
Xiaoda Zhang
9b834e1502
fix some codestyle warnings
4 years ago
hongxing
24cd398e22
reduce cyclomatic complexity
4 years ago
i-robot
1233febba3
!26163 [AUTO-PAR][D-REC]Treat Softmax OP at strategy generation step
Merge pull request !26163 from Arthur/Gather_version_update
4 years ago
haoran.wang
9b13da25cc
Modify Softmax as eliminated op in D-Rec
4 years ago
i-robot
7a73bae5c3
!26036 add output strategy for matmul operator
Merge pull request !26036 from yangzhenzhang/add-output-strategy-for-op-init
4 years ago
Xiaoda Zhang
a772767265
support reshape in sharding propagation:
1) use the 'swc index' of strategy_cost_ as Reshape's selected strategy;
2) when encountering a Reshape during BFS, select the 'swc index' with zero communication cost;
3) when encountering an already-visited Reshape, check whether communication occurs between the Reshape and the current operator; communication between two configured operators is acceptable;
4) two consecutive Reshapes are currently not supported;
5) adjust the BFS structure in graph_costmodel.cc;
6) adjust some code in step_auto_parallel.cc to reduce cyclomatic complexity.
4 years ago
yangzhenzhang
8431ba616c
add output strategy for op init
4 years ago
haoran.wang
418fdedbc8
remove MatMul HCCL restriction
4 years ago
yangzhenzhang
6ad6304b77
add output strategy
4 years ago
haoran.wang
e4eafa8d7c
Fix a Gather bug and delete the unused PrepareGatherV2 function
4 years ago
b00518648
ef715a54a0
clean code
4 years ago
b00518648
ea50695cae
pclint
4 years ago
Bert0108
2d3d0b673e
parallel operators for ResizeBilinear and ResizeNearestNeighbor
4 years ago
i-robot
75a7cf3bee
!23351 add uniformreal parallel ops info
Merge pull request !23351 from wangjun/uniformreal_parallel_ops
4 years ago
wangjun
05443d4e74
add uniform_real parallel op_info
4 years ago
Xiaoda Zhang
d3c18e2e35
fix code style warnings
4 years ago
i-robot
79b6d06cbf
!23239 [AutoPar - DRec] fix Codedex warning
Merge pull request !23239 from Chong/xtan
4 years ago
i-robot
f112c42027
!23434 [CT][MS][parallel] fix VirtualData outgoing op strategy copying bug
Merge pull request !23434 from Chong/PanGu-VirtualDataset
4 years ago
linqingke
6af059dac3
fastgelu update.
4 years ago
haoran.wang
01b8d93f6f
Fix the strategy-copying bug for ops following VirtualDataset, and protect StridedSlice strategy generation
4 years ago
Kelun
d31b98e72c
Modify max_cut for floating-point comparison
4 years ago
i-robot
46a46f649d
!22970 [AutoPar-DRec] avoid expensive data movement when copying strategies to VirtualDataset's outgoing operators
Merge pull request !22970 from Chong/PanGu-Daniel_code
4 years ago
haoran.wang
56b7ff7e27
Strategy copy of the operators following VirtualDataset
4 years ago
klchai
eee211cc24
fix floating-point equality comparison
4 years ago
yao_yf
68dd138462
add parallel sparse attention ops: dsd_matmul
4 years ago
yao_yf
b8a9cbe2a3
add cus_matmul_dds parallel ops
4 years ago
djc
b077aa1cab
[feat] [assistant] [I3T96T] add new Dataset operator CMUARCTICDataset
4 years ago
djc
4e6f7dc97d
[feat] [assistant] [I3T96X] add new Dataset operator LibriSpeechDataset
4 years ago
haoran.wang
f65bfe1b00
VirtualDataset strategy follows full_batch instruction in D-Rec
4 years ago
yangzhenzhang
80e5cc0e52
add parallel op for gatherd
4 years ago
Xiaoda Zhang
04381273b3
Add the sharding propagation function:
1) users configure sharding strategies for operators;
2) the framework propagates the strategies from configured ops to non-configured ops using BFS;
3) the propagation goal is to minimize redistribution communication cost.
5 years ago
lichenever
108967ff7d
fix_pipeline_split_bug
4 years ago
yangzhenzhang
24370b5613
add parallel op for maxpool
4 years ago
Xiaoda Zhang
2e6b9c8e15
fix some codestyle warnings
4 years ago
Xiaoda Zhang
07e1e39a82
fix some codestyle warnings
4 years ago
Xiaoda Zhang
b587969763
Fix some codestyle warnings:
1. Add some const variables to represent the input lengths of operators,
and use them to eliminate magic numbers;
2. Rename some local variables to distinguish them from variables in
different scopes;
3. Add some blank lines in Python files.
5 years ago
Xiaoda Zhang
5fecfe92a6
code style warnings fixing
5 years ago
yao_yf
21276408b8
parallel virtual_out_ops
5 years ago
yangzhenzhang
c2ca2232c5
add select op
5 years ago
yangzhenzhang
9cdd70433f
add scatterupdate op
5 years ago
yangzhenzhang
d070af122f
add topk op
5 years ago
yangzhenzhang
f9f5df368e
add gathernd op
5 years ago