mindspore-ci-bot
4e741f8aa6
!16701 gpu matmul and biasadd fusion
From: @wilfchen
Reviewed-by: @cristoval,@limingqi107
Signed-off-by: @limingqi107
4 years ago
changzherui
d9e2da299d
Revert "!16599 c++ infer for conv2dbackpropfilter and conv2dbackpropinput"
This reverts commit 3be79efd808f54b6e66fa45f26e601e5a143ae76, reversing
changes made to cf4479756a.
4 years ago
wangnan39@huawei.com
937acff29b
dropout support dynamic shape
4 years ago
changzherui
ea04c4304f
Revert "!16693 add Conv2dTranspose"
This reverts commit a2f50fb7db2fe1ae588b11e08fc51fe2398114ef, reversing
changes made to 4735ec3296.
4 years ago
changzherui
2c41833cfa
add Conv2dTranspose
4 years ago
wilfChen
b2242d13c4
gpu matmul biasadd fusion
4 years ago
caifubi
0928682655
Profiling support Control Flow
4 years ago
l00591931
befc7a9dea
Add primal_attr to link between forward and backward node attr
5 years ago
zuochuanyong
e7ea343738
add format transform pass on cpu
5 years ago
jjfeing
88c92cd263
clear parameter when param_info clone
5 years ago
mindspore-ci-bot
78469f6083
!15356 Support mem reuse in control flow and multi-call subgraphs
From: @liangzelang
Reviewed-by: @zhoufeng54,@kisnwang
Signed-off-by: @kisnwang
5 years ago
liangzelang
052a803c63
adapt to mem reuse
5 years ago
limingqi107
c937a22bda
add the actor link by auto monad
5 years ago
luopengting
727dc08bfa
fix TAINTED_SCALAR and capitalize constants
1. fix the tainted variable 'path'
2. capitalize constants in env_config_parser
3. move constants about attribute to common utils.h
5 years ago
liubuyu
40f34b0d90
3d graph reconstruct
5 years ago
mindspore-ci-bot
7f4994af7c
!14186 Support while bprop
From: @liangzelang
Reviewed-by: @kisnwang,@jjfeing
Signed-off-by: @jjfeing
5 years ago
liangzelang
ba65fb9f3c
Support non-tail recursive graphs
5 years ago
lingyunli63
4b966ed40d
support matmul on D
5 years ago
liuxiao93
723bbac438
revert nn.BatchNorm3d.
5 years ago
mindspore-ci-bot
efe95ebbce
!13724 optimize execute order for commops
From: @kisnwang
Reviewed-by: @zhoufeng54,@jjfeing
Signed-off-by: @jjfeing
5 years ago
kswang
dc543f3f1e
optimize execute order for commops
5 years ago
mindspore-ci-bot
cf5eaf8590
!13050 Don't insert UpdateState for HyperMap func graph call, move auto monad eliminator out from CSE, and eliminate auto monad nodes for output node.
From: @zh_qh
Reviewed-by:
Signed-off-by:
5 years ago
Zhang Qinghua
e853df4ecd
Don't insert UpdateState for HyperMap func graph call.
Move auto monad eliminator out from CSE.
Eliminate auto monad nodes for output node.
5 years ago
dingpeifei
87e41aaeee
IR operators of GPU and CPU are unified as batchnorm
5 years ago
mindspore-ci-bot
2013e3f370
!13216 If data_format is NCDHW, BatchNorm to BatchNorm3D.
From: @liu_xiao_93
Reviewed-by: @liangchenghui,@wuxuejian
Signed-off-by: @liangchenghui
5 years ago
mindspore-ci-bot
654771df13
!13080 fix embeddinglookup infer
From: @fangzehua
Reviewed-by:
Signed-off-by:
5 years ago
liuxiao93
d44c706baf
batchnorm to batchnorm3d.
5 years ago
fangzehua
dadbd54f0e
add embedding infer
5 years ago
l00591931
bbdb050fc7
Change switch to Switch
5 years ago
mindspore-ci-bot
54c37bcd61
!12947 Add MaxPool3D,MaxPool3DGrad,MaxPool3DGradGrad ops for Ascend.
From: @liu_xiao_93
Reviewed-by: @liangchenghui
Signed-off-by: @liangchenghui
5 years ago
mindspore-ci-bot
c69142fdc1
!12968 update reshape type for 3d nodes
From: @liubuyu
Reviewed-by:
Signed-off-by:
5 years ago
mindspore-ci-bot
54fc5e0d2b
!12234 [GraphKernel] Support pipeline optimization for parallel fusion.
From: @tronzhang
Reviewed-by:
Signed-off-by:
5 years ago
liuxiao93
35f6ba9011
Add MaxPool3D,MaxPool3DGrad,MaxPool3DGradGrad ops for Ascend.
5 years ago
liubuyu
518818fbef
reshape type for 3d nodes
5 years ago
mindspore-ci-bot
e00d8cd1d6
!13020 Not save InitDatasetQueue and GetNext op in PyNative Mode
From: @HulkTang
Reviewed-by: @zhoufeng54,@chujinjin
Signed-off-by: @chujinjin
5 years ago
tronzhang
7252ffb66b
pipeline optimization for parallel fusion
5 years ago
tanghuikang
6102202abd
Not save InitDatasetQueue and GetNext op in PyNative Mode
5 years ago
mindspore-ci-bot
fa4c19f938
!13002 3d format bug fix
From: @liubuyu
Reviewed-by: @zhoufeng54,@kisnwang
Signed-off-by: @kisnwang
5 years ago
liubuyu
62aa7d0e87
bug fix for 3d format
5 years ago
He Wei
891fd7df92
[auto-monad] Refactor ascend_auto_monad
1. Remove output parameter pool for ascend control flow;
2. Remove duplicate code for Switch and SwitchLayer;
3. Add 'return' attribute to label_goto that used for return;
4. Disable tail call optimize for graphs with 'recursive' flag.
5 years ago
Zhang Qinghua
df866f7248
Add TopoSort Rhs First attribute for special CNode, such as Depend CNode with isolated nodes.
5 years ago
mindspore-ci-bot
423dcfc917
!12836 Change return in core_ops
From: @liangzhibo
Reviewed-by: @kingxian,@jpc_chenjianping
Signed-off-by: @kingxian
5 years ago
mindspore-ci-bot
2f312dac66
!12091 Performance optimization for PyNative AllReduce
From: @jojobugfree
Reviewed-by:
Signed-off-by:
5 years ago
mindspore-ci-bot
4365c332e6
!12813 unify AvgPoolGrad's MindIR
From: @yuchaojie
Reviewed-by: @kisnwang
Signed-off-by:
5 years ago
mindspore-ci-bot
c529cfa427
!12754 auto tune step one construct json
From: @liubuyu
Reviewed-by:
Signed-off-by:
5 years ago
l00591931
cf7c5840e3
Change return
5 years ago
yuchaojie
d2cb3aa1c2
unify AvgPoolGrad
5 years ago
caifubi
171b468bb3
PyNative AllReduce Bucket
5 years ago
liubuyu
2d97244741
auto tune stage one: construct json
5 years ago
simson
c29d8f66d8
fix precision error after cache modification
5 years ago