mindspore-ci-bot
5d96d0f7e9
!14583 3d graph format select reconstruct
From: @liubuyu
Reviewed-by: @kisnwang,@jjfeing
Signed-off-by: @jjfeing
5 years ago
yepei6
ca03a24083
correct the grammar error
5 years ago
liubuyu
40f34b0d90
3d graph reconstruct
5 years ago
mindspore-ci-bot
0ef2d78411
!14133 tensorprint_debug
From: @yepei6
Reviewed-by:
Signed-off-by:
5 years ago
mindspore-ci-bot
75fdaaa6aa
!14304 [GraphKernel] Dump GraphKernel split info as text; dump akg kernel launch fail message
From: @dayschan
Reviewed-by: @gaoxiong1,@anyrenwei
Signed-off-by: @anyrenwei
5 years ago
yepei6
5da7fb36c5
modify the tensorprint handle create process
5 years ago
dayschan
3c6c30024c
dump graph_kernel_split info
5 years ago
mindspore-ci-bot
7f4994af7c
!14186 Support while bprop
From: @liangzelang
Reviewed-by: @kisnwang,@jjfeing
Signed-off-by: @jjfeing
5 years ago
mindspore-ci-bot
e9ada9fd1d
!14192 add the force transform to avoid the utf8 error
From: @yepei6
Reviewed-by: @kingxian,@kisnwang
Signed-off-by: @kingxian
5 years ago
mindspore-ci-bot
ad140a8bf4
!14084 [GraphKernel] support matmul on D
From: @lingyunli63
Reviewed-by:
Signed-off-by:
5 years ago
yepei6
ce3597b727
add the force transform to avoid utf8 error
5 years ago
liangzelang
ba65fb9f3c
Support non-tail recursive graphs
5 years ago
lingyunli63
4b966ed40d
support matmul on D
5 years ago
mindspore-ci-bot
2d73a35793
!14056 tensorprint segmentation
From: @yepei6
Reviewed-by: @kingxian,@kisnwang
Signed-off-by: @kingxian
5 years ago
yepei6
0f28c1aa19
add the force cast to avoid the segmentation
5 years ago
mindspore-ci-bot
18e98c6a0b
!13720 [GraphKernel] Add context graph_kernel_flags
From: @dayschan
Reviewed-by: @gaoxiong1
Signed-off-by:
5 years ago
dayschan
11ee3b1624
add context graph_kernel_flags
use the flag "opt_level" to control GraphKernel:
0 means disabled, while a non-zero value means enabled.
The default value is controlled by the context "enable_graph_kernel",
but if it is also set in "graph_kernel_flags", the flag prevails.
Support whitelist and blacklist operators for GraphKernelExpander:
"enable_expand_ops", "enable_expand_ops_only", "disable_expand_ops".
5 years ago
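The precedence rule described in the commit above can be sketched in plain Python (an illustrative sketch only, not the MindSpore implementation; the function name and the flag-string format are assumptions):

```python
def graph_kernel_enabled(enable_graph_kernel: bool, graph_kernel_flags: str) -> bool:
    """Decide whether GraphKernel is enabled.

    Mirrors the rule in the commit message: if "opt_level" appears in
    graph_kernel_flags, it prevails (0 disables, non-zero enables);
    otherwise fall back to the enable_graph_kernel context switch.
    The "--opt_level=N" token syntax here is an assumption.
    """
    for token in graph_kernel_flags.split():
        if token.startswith("--opt_level="):
            return int(token.split("=", 1)[1]) != 0
    return enable_graph_kernel
```

For example, `graph_kernel_enabled(True, "--opt_level=0")` returns `False`, because an explicit opt_level overrides the context switch.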
liuxiao93
723bbac438
revert nn.BatchNorm3d.
5 years ago
mindspore-ci-bot
efe95ebbce
!13724 optimize execute order for commops
From: @kisnwang
Reviewed-by: @zhoufeng54,@jjfeing
Signed-off-by: @jjfeing
5 years ago
kswang
dc543f3f1e
optimize execute order for commops
5 years ago
mindspore-ci-bot
cf5eaf8590
!13050 Don't insert UpdateState for HyperMap func graph call, move auto monad eliminator out from CSE, and eliminate auto monad nodes for output node.
From: @zh_qh
Reviewed-by:
Signed-off-by:
5 years ago
Zhang Qinghua
e853df4ecd
Don't insert UpdateState for HyperMap func graph call.
Move auto monad eliminator out from CSE.
Eliminate auto monad nodes for output node.
5 years ago
dingpeifei
87e41aaeee
IR operators of GPU and CPU are unified as batchnorm
5 years ago
ms_yan
92e86804e1
init add acltdt handle create and destroy
add hostpush part modify
optimize previous code
provide aclhandle access method
modify CMakeList format
add device_id parameter into TransferNode
update acltdt api
5 years ago
mindspore-ci-bot
2013e3f370
!13216 If data_format is NCDHW, BatchNorm to BatchNorm3D.
From: @liu_xiao_93
Reviewed-by: @liangchenghui,@wuxuejian
Signed-off-by: @liangchenghui
5 years ago
mindspore-ci-bot
654771df13
!13080 fix embeddinglookup infer
From: @fangzehua
Reviewed-by:
Signed-off-by:
5 years ago
liuxiao93
d44c706baf
batchnorm to batchnorm3d.
5 years ago
fangzehua
dadbd54f0e
add embedding infer
5 years ago
l00591931
bbdb050fc7
Change switch to Switch
5 years ago
mindspore-ci-bot
54c37bcd61
!12947 Add MaxPool3D,MaxPool3DGrad,MaxPool3DGradGrad ops for Ascend.
From: @liu_xiao_93
Reviewed-by: @liangchenghui
Signed-off-by: @liangchenghui
5 years ago
mindspore-ci-bot
c69142fdc1
!12968 update reshape type for 3d nodes
From: @liubuyu
Reviewed-by:
Signed-off-by:
5 years ago
mindspore-ci-bot
54fc5e0d2b
!12234 [GraphKernel] Support pipeline optimization for parallel fusion.
From: @tronzhang
Reviewed-by:
Signed-off-by:
5 years ago
liuxiao93
35f6ba9011
Add MaxPool3D,MaxPool3DGrad,MaxPool3DGradGrad ops for Ascend.
5 years ago
liubuyu
518818fbef
reshape type for 3d nodes
5 years ago
mindspore-ci-bot
e00d8cd1d6
!13020 Not save InitDatasetQueue and GetNext op in PyNative Mode
From: @HulkTang
Reviewed-by: @zhoufeng54,@chujinjin
Signed-off-by: @chujinjin
5 years ago
tronzhang
7252ffb66b
pipeline optimization for parallel fusion
5 years ago
tanghuikang
6102202abd
Not save InitDatasetQueue and GetNext op in PyNative Mode
5 years ago
mindspore-ci-bot
fa4c19f938
!13002 3d format bug fix
From: @liubuyu
Reviewed-by: @zhoufeng54,@kisnwang
Signed-off-by: @kisnwang
5 years ago
liubuyu
62aa7d0e87
bug fix for 3d format
5 years ago
He Wei
891fd7df92
[auto-monad] Refactor ascend_auto_monad
1. Remove output parameter pool for ascend control flow;
2. Remove duplicate code for Switch and SwitchLayer;
3. Add 'return' attribute to label_goto that used for return;
4. Disable tail call optimize for graphs with 'recursive' flag.
5 years ago
Zhang Qinghua
df866f7248
Add TopoSort Rhs First attribute for special CNode, such as Depend CNode with isolated nodes.
5 years ago
mindspore-ci-bot
423dcfc917
!12836 Change return in core_ops
From: @liangzhibo
Reviewed-by: @kingxian,@jpc_chenjianping
Signed-off-by: @kingxian
5 years ago
mindspore-ci-bot
2f312dac66
!12091 Performance optimization for PyNative AllReduce
From: @jojobugfree
Reviewed-by:
Signed-off-by:
5 years ago
mindspore-ci-bot
4365c332e6
!12813 unify AvgPoolGrad's MindIR
From: @yuchaojie
Reviewed-by: @kisnwang
Signed-off-by:
5 years ago
mindspore-ci-bot
c529cfa427
!12754 auto tune step one construct json
From: @liubuyu
Reviewed-by:
Signed-off-by:
5 years ago
l00591931
cf7c5840e3
Change return
5 years ago
yuchaojie
d2cb3aa1c2
unify AvgPoolGrad
5 years ago
caifubi
171b468bb3
PyNative AllReduce Bucket
5 years ago
liubuyu
2d97244741
auto tune stage one: construct json
5 years ago
simson
c29d8f66d8
fix precision error after cache modification
5 years ago