mindspore-ci-bot
1e13cbb8da
!11241 optimize the dynamic memory alloc
From: @limingqi107
Reviewed-by: @cristoval,@kisnwang
Signed-off-by: @kisnwang
5 years ago
lilei
9a45c4419c
modify batch_normal
5 years ago
limingqi107
fd9f91b6c9
optimize the dynamic memory alloc
5 years ago
mindspore-ci-bot
f87b5e0cc8
!11144 Parallel Conflict Computing in SOMAS
From: @laiyongqiang
Reviewed-by:
Signed-off-by:
5 years ago
laiyongqiang
af3c98e6ad
support parallel computing for conflicts
5 years ago
liangzelang
a97ac180ba
optimize get_func by unifying the format
5 years ago
mindspore-ci-bot
5b751cfa4a
!11093 [GraphKernel] Add a restriction for getitem in basic_ops_fusion
From: @dayschan
Reviewed-by: @gaoxiong1,@ckey_dou
Signed-off-by: @ckey_dou
5 years ago
laiyongqiang
56d7dd294b
adapt to None type input
5 years ago
mindspore-ci-bot
bb2870fe71
!10922 optimize somas logs
From: @laiyongqiang
Reviewed-by:
Signed-off-by:
5 years ago
mindspore-ci-bot
92a85d1061
!11075 dynamic op re primitive when infer
From: @liubuyu
Reviewed-by: @kisnwang,@zhoufeng54
Signed-off-by: @zhoufeng54
5 years ago
mindspore-ci-bot
f466776bde
!11052 fix bug of sparse unify mindir
From: @hwjiaorui
Reviewed-by:
Signed-off-by:
5 years ago
dayschan
b9b4a5e5f7
Add a restriction for getitem in basic_ops_fusion.
This commit reverts the modification to basic_ops_fusion.cc in 8af78cd5c;
the getitem should be fused with all of its users.
(Not a bug, but when the network is large it runs very slowly; this is a temporary solution.)
5 years ago
laiyongqiang
b3aa620ab9
optimize somas logs:
1. add parameter info
2. update atomic clean inputs
3. remove somas_mem_pool.ir
5 years ago
liubuyu
39cc9e70cd
dynamic op re primitive when infer
5 years ago
hwjiaorui
908e9a526b
tile's input multiples
new mul node
convert const to attr
5 years ago
yuchaojie
54d21ab1a3
fix dropout pynative unify_ir pattern
5 years ago
hwjiaorui
eb973172d2
unify mindir sparse
sparse loss
format
add string head
context
5 years ago
jjfeing
389da54525
fix dropout unify_mindir pass
5 years ago
dayschan
8af78cd5ce
Add ExpandDims into the GPU fusion list.
Additionally:
remove one restriction on getitem in ops fusion.
add a while loop for the ShapeOpsSplitter pass.
add ExpandDims into the shape_ops list.
5 years ago
mindspore-ci-bot
275da50b32
!10474 Add recomputed pass
From: @ginfung
Reviewed-by:
Signed-off-by:
5 years ago
yujianfeng
7b412d7cb2
add recompute nodes
5 years ago
mindspore-ci-bot
8331550e30
!10738 add reorder_ops pass in graph kernel
From: @looop5
Reviewed-by: @lingyunli63,@ckey_dou,@gaoxiong1
Signed-off-by: @ckey_dou
5 years ago
mindspore-ci-bot
0212e19bc9
!10806 insert reformat op
From: @liubuyu
Reviewed-by: @zhoufeng54,@kisnwang
Signed-off-by: @kisnwang
5 years ago
mindspore-ci-bot
bbaf1f8bc1
!10762 remove gap tensor in somas model build
From: @laiyongqiang
Reviewed-by: @kisnwang,@zhoufeng54
Signed-off-by: @zhoufeng54
5 years ago
liubuyu
119c7010a4
insert reformat op
5 years ago
laiyongqiang
c686a30cc8
remove gap tensor in somas model build
5 years ago
yanghaoran
b1ee7d9926
Synchronize latest Ascend software suite 29 Dec 2020
5 years ago
mindspore-ci-bot
7e0c727ace
!10432 keep nop node in execution order if it's the graph's output
From: @liubuyu
Reviewed-by: @zhoufeng54,@kisnwang
Signed-off-by: @kisnwang
5 years ago
looop5
0a62d42d65
add reorder_ops pass in graph kernel
5 years ago
mindspore-ci-bot
b35435cc02
!10676 do not merge cast to send and receive node
From: @lianliguang
Reviewed-by: @zhoufeng54,@kisnwang
Signed-off-by: @kisnwang
5 years ago
LianLiguang
da09b8e515
do not merge cast to receive & send op
5 years ago
mindspore-ci-bot
3bdd91f5be
!10154 Add a ReduceScatter fusion
From: @alouhahahahaha
Reviewed-by:
Signed-off-by:
5 years ago
alouhahaha
3da59427bc
ReduceScatter Fusion
5 years ago
laiyongqiang
fc732c8212
change summary input tensor to kLifeLongGraphAll
5 years ago
mindspore-ci-bot
a98ae4129a
!10236 Support if_by_if case by replacing labelgoto with labelswitch
From: @liangzelang
Reviewed-by:
Signed-off-by:
5 years ago
liubuyu
d52a06a089
keep nop node in execution order if the node is the graph's output
5 years ago
mindspore-ci-bot
dafd26196e
!10431 Split unsupported transdata when doing ref
From: @lianliguang
Reviewed-by: @zhoufeng54,@chujinjin
Signed-off-by: @chujinjin
5 years ago
liangzelang
2ab20c1b6e
fix if_by_if bug by replacing labelgoto with labelswitch
5 years ago
LianLiguang
7c7da0cb77
split unsupported transdata after ref pass
5 years ago
mindspore-ci-bot
edc85f3c8d
!10367 handle no input hccl op and single input/output hccl op connect
From: @laiyongqiang
Reviewed-by: @kisnwang,@chujinjin
Signed-off-by: @chujinjin
5 years ago
mindspore-ci-bot
2484ca9f4a
!10228 fix bug of insert transdata in pynative mode
From: @lianliguang
Reviewed-by: @chujinjin,@kisnwang
Signed-off-by: @chujinjin
5 years ago
laiyongqiang
6bb011d69d
handle no input hccl op and single input/output hccl op connect
5 years ago
LianLiguang
393f22ff41
change insert transop of pynative
5 years ago
mindspore-ci-bot
639e0c5fbd
!10257 [GraphKernel] Enhance the fusion capacity for getitem nodes
From: @dayschan
Reviewed-by: @gaoxiong1,@ckey_dou
Signed-off-by: @ckey_dou
5 years ago
dayschan
26ac9167f8
Enhance the fusion capacity for getitem nodes.
fix bug in ReplaceNewFuseCNode
add a pass to eliminate repeated output after cse
fix bug in graph_kernel_splitter
do not fuse reshape op as output in costmodel.
5 years ago
LianLiguang
414f38df8d
do not merge tensor move ops into one in the cse pass
5 years ago
mindspore-ci-bot
230f819fd0
!10230 fix codex warning in somas
From: @laiyongqiang
Reviewed-by: @zhoufeng54
Signed-off-by:
5 years ago
laiyongqiang
c1d2dbe991
fix codex and review bot warning
5 years ago
mindspore-ci-bot
88155de042
!10159 momentum weight fusion
From: @wilfchen
Reviewed-by: @cristoval,@limingqi107
Signed-off-by: @cristoval
5 years ago
wilfChen
09e10e18bb
momentum weight decay fusion
5 years ago