mindspore-ci-bot
f3aaf9b20f
!16103 [MS][LITE] move mindir proto to core
From: @jianghui58
Reviewed-by: @zhoufeng54,@jpc_chenjianping
Signed-off-by: @jpc_chenjianping
4 years ago
jianghui58
d5ecee383a
move mindir proto to core
4 years ago
Ziyan
2a752f24bf
enable not fully use opt shard
5 years ago
zuochuanyong
e7ea343738
add format transform pass on cpu
4 years ago
mindspore-ci-bot
ac912672bf
!15871 Support passing args and/or kwargs for OP CreateInstance.
From: @zh_qh
Reviewed-by: @ginfung,@hwhewei
Signed-off-by: @hwhewei
4 years ago
mindspore-ci-bot
1e7866eb14
!15852 [GraphKernel]Disable GraphKernel in PynativeMode
From: @dayschan
Reviewed-by: @gaoxiong1,@ckey_dou
Signed-off-by: @ckey_dou
4 years ago
Zhang Qinghua
cd7f7d40fb
Support passing args and/or kwargs for OP CreateInstance.
4 years ago
dayschan
490c0521ac
Disable GraphKernel in PynativeMode
4 years ago
jjfeing
88c92cd263
clear parameter when param_info clone
4 years ago
huangbingjian
302746d6a3
fix code check
4 years ago
mindspore-ci-bot
78469f6083
!15356 Support mem reuse in control flow and multi-call subgraphs
From: @liangzelang
Reviewed-by: @zhoufeng54,@kisnwang
Signed-off-by: @kisnwang
4 years ago
liangzelang
052a803c63
adapt to mem reuse
5 years ago
mindspore-ci-bot
ade65bac93
!15312 add the actor link by auto monad
From: @limingqi107
Reviewed-by: @cristoval,@wilfchen
Signed-off-by: @wilfchen
4 years ago
limingqi107
c937a22bda
add the actor link by auto monad
4 years ago
tronzhang
8ff3c16778
add switch for parallel fusion, default is off
4 years ago
mindspore-ci-bot
cd002cb7f7
!14893 enable stitch fusion on bert
From: @r1chardf1d0
Reviewed-by: @gaoxiong1,@ckey_dou
Signed-off-by: @ckey_dou
5 years ago
r1chardf1d0
3b32995936
enable stitch fusion on bert
5 years ago
zhoufeng
f8248d61b9
init plog when opentsd
Signed-off-by: zhoufeng <zhoufeng54@huawei.com>
5 years ago
luopengting
727dc08bfa
fix TAINTED_SCALAR and capitalize constants
1. fix the tainted variable 'path'
2. capitalize constants in env_config_parser
3. move constants about attribute to common utils.h
5 years ago
mindspore-ci-bot
5d96d0f7e9
!14583 3d graph format select reconstruct
From: @liubuyu
Reviewed-by: @kisnwang,@jjfeing
Signed-off-by: @jjfeing
5 years ago
yepei6
ca03a24083
correct the grammar error
5 years ago
liubuyu
40f34b0d90
3d graph reconstruct
5 years ago
mindspore-ci-bot
0ef2d78411
!14133 tensorprint_debug
From: @yepei6
Reviewed-by:
Signed-off-by:
5 years ago
mindspore-ci-bot
75fdaaa6aa
!14304 [GraphKernel] Dump GraphKernel split info as text; dump akg kernel launch fail message
From: @dayschan
Reviewed-by: @gaoxiong1,@anyrenwei
Signed-off-by: @anyrenwei
5 years ago
yepei6
5da7fb36c5
modify the tensorprint handle create process
5 years ago
dayschan
3c6c30024c
dump graph_kernel_split info
5 years ago
mindspore-ci-bot
7f4994af7c
!14186 Support while bprop
From: @liangzelang
Reviewed-by: @kisnwang,@jjfeing
Signed-off-by: @jjfeing
5 years ago
mindspore-ci-bot
e9ada9fd1d
!14192 add the force transform to avoid the utf8 error
From: @yepei6
Reviewed-by: @kingxian,@kisnwang
Signed-off-by: @kingxian
5 years ago
mindspore-ci-bot
ad140a8bf4
!14084 [GraphKernel] support matmul on D
From: @lingyunli63
Reviewed-by:
Signed-off-by:
5 years ago
yepei6
ce3597b727
add the force transform to avoid utf8 error
5 years ago
liangzelang
ba65fb9f3c
Support non-tail recursive graphs
5 years ago
lingyunli63
4b966ed40d
support matmul on D
5 years ago
mindspore-ci-bot
2d73a35793
!14056 tensorprint segmentation
From: @yepei6
Reviewed-by: @kingxian,@kisnwang
Signed-off-by: @kingxian
5 years ago
yepei6
0f28c1aa19
add the force cast to avoid the segmentation fault
5 years ago
mindspore-ci-bot
18e98c6a0b
!13720 [GraphKernel] Add context graph_kernel_flags
From: @dayschan
Reviewed-by: @gaoxiong1
Signed-off-by:
5 years ago
dayschan
11ee3b1624
add context graph_kernel_flags
Used the flag "opt_level" to control GraphKernel:
0 means disabled, while a non-zero value means enabled.
The default value is controlled by the context "enable_graph_kernel",
but if it is also set in "graph_kernel_flags", the flag prevails.
Supported whitelist and blacklist operators for GraphKernelExpander:
"enable_expand_ops", "enable_expand_ops_only", "disable_expand_ops".
5 years ago
liuxiao93
723bbac438
revert nn.BatchNorm3d.
5 years ago
mindspore-ci-bot
efe95ebbce
!13724 optimize execute order for commops
From: @kisnwang
Reviewed-by: @zhoufeng54,@jjfeing
Signed-off-by: @jjfeing
5 years ago
kswang
dc543f3f1e
optimize execute order for commops
5 years ago
mindspore-ci-bot
cf5eaf8590
!13050 Don't insert UpdateState for HyperMap func graph call, move auto monad eliminator out from CSE, and eliminate auto monad nodes for output node.
From: @zh_qh
Reviewed-by:
Signed-off-by:
5 years ago
Zhang Qinghua
e853df4ecd
Don't insert UpdateState for HyperMap func graph call.
Move auto monad eliminator out from CSE.
Eliminate auto monad nodes for output node.
5 years ago
dingpeifei
87e41aaeee
IR operators of GPU and CPU are unified as batchnorm
5 years ago
ms_yan
92e86804e1
init add acltdt handle create and destroy
modify hostpush part
optimize previous code
provide aclhandle access method
modify CMakeList format
add device_id parameter into TransferNode
update acltdt api
5 years ago
mindspore-ci-bot
2013e3f370
!13216 If data_format is NCDHW, BatchNorm to BatchNorm3D.
From: @liu_xiao_93
Reviewed-by: @liangchenghui,@wuxuejian
Signed-off-by: @liangchenghui
5 years ago
mindspore-ci-bot
654771df13
!13080 fix embeddinglookup infer
From: @fangzehua
Reviewed-by:
Signed-off-by:
5 years ago
liuxiao93
d44c706baf
batchnorm to batchnorm3d.
5 years ago
fangzehua
dadbd54f0e
add embedding infer
5 years ago
l00591931
bbdb050fc7
Change switch to Switch
5 years ago
mindspore-ci-bot
54c37bcd61
!12947 Add MaxPool3D,MaxPool3DGrad,MaxPool3DGradGrad ops for Ascend.
From: @liu_xiao_93
Reviewed-by: @liangchenghui
Signed-off-by: @liangchenghui
5 years ago
mindspore-ci-bot
c69142fdc1
!12968 update reshape type for 3d nodes
From: @liubuyu
Reviewed-by:
Signed-off-by:
5 years ago