mindspore-ci-bot
cd337ddb7f
!16809 fix LeNet model coredump when matmul shapes do not match
From: @zhangzhaoju
Reviewed-by: @zh_qh,@ginfung
Signed-off-by: @zh_qh
4 years ago
mindspore-ci-bot
4e741f8aa6
!16701 gpu matmul and biasadd fusion
From: @wilfchen
Reviewed-by: @cristoval,@limingqi107
Signed-off-by: @limingqi107
4 years ago
zhangzhaoju
007013b5af
issue#I3A1EG coredump when input shape does not match for LeNet
4 years ago
dayschan
2ac8c65327
Add GraphKernelPassManager to manage the passes of GraphKernel
Refactor the original "PassManager" class and derive "GraphKernelPassManager" from it.
GraphKernel's IR files are dumped into a new sub-directory "graph_kernel" under the original "verbose_ir_files".
All GraphKernel passes are divided into 3 levels and controlled by the flag "opt_level" by default:
when the opt_level is greater than or equal to a pass's level, that pass will run.
The default "opt_level" is 2 when GraphKernel is enabled.
Levels:
1. Basic features, like cluster, splitter, and some preprocessing and postprocessing passes.
2. All stable features, mainly the optimization passes.
3. Experimental features, like stitch-fusion and parallel-fusion.
The two flags "enable_pass" and "disable_pass" are available in this commit.
Users can manually enable passes that are disabled by "opt_level", or disable enabled passes,
by specifying each pass in the format "stage_id.pass_id" or "stage_name.pass_name"; multiple passes are separated by commas (",").
The stage/pass index and stage/pass name can be found in the IR filename,
e.g. "--enable_pass=cluster.graph_kernel_expander,1.1,1.2"
Others:
1. Remove the pass "tensor_promotion", which is not useful.
2. Move the pass "InsertPadOps" before "ArithmeticSimplify".
4 years ago
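The "stage.pass" flag format described in the commit above can be illustrated with a minimal sketch. Note this is an assumption-laden illustration, not MindSpore's actual parser: `parse_pass_flag` is a hypothetical helper name, and the only behavior taken from the commit message is the comma-separated list of "stage_id.pass_id" / "stage_name.pass_name" items (ids and names may be mixed).

```python
def parse_pass_flag(flag: str) -> list[tuple[str, str]]:
    """Parse a flag value like "cluster.graph_kernel_expander,1.1,1.2"
    into (stage, pass) pairs. Hypothetical helper, not MindSpore code.

    Each comma-separated item must be "stage.pass", where stage and
    pass are each either a numeric index or a name.
    """
    specs = []
    for item in flag.split(","):
        item = item.strip()
        if not item:
            continue  # tolerate stray commas
        stage, sep, pass_part = item.partition(".")
        if not sep or not stage or not pass_part:
            raise ValueError(f"expected 'stage.pass', got: {item!r}")
        specs.append((stage, pass_part))
    return specs
```

With the example from the commit message, `parse_pass_flag("cluster.graph_kernel_expander,1.1,1.2")` yields one named spec and two index-based specs.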
changzherui
d9e2da299d
Revert "!16599 c++ infer for conv2dbackpropfilter and conv2dbackpropinput"
This reverts commit 3be79efd808f54b6e66fa45f26e601e5a143ae76, reversing
changes made to cf4479756a .
4 years ago
mindspore-ci-bot
11177f028a
!16530 dropout support dynamic shape
From: @wangnan39
Reviewed-by: @ginfung,@liangchenghui
Signed-off-by: @liangchenghui
4 years ago
wangnan39@huawei.com
937acff29b
dropout support dynamic shape
4 years ago
lizhenyu
2df0e89a6a
clean pc-lint warnings
4 years ago
changzherui
ea04c4304f
Revert "!16693 add Conv2dTranspose"
This reverts commit a2f50fb7db2fe1ae588b11e08fc51fe2398114ef, reversing
changes made to 4735ec3296 .
4 years ago
changzherui
2c41833cfa
add Conv2dTranspose
4 years ago
wilfChen
b2242d13c4
gpu matmul biasadd fusion
4 years ago
mindspore-ci-bot
6ea8a3ed22
!16551 profiling control flow
From: @jojobugfree
Reviewed-by: @kisnwang,@zhoufeng54
Signed-off-by: @zhoufeng54
4 years ago
lvliang
32da2cea42
create multiple make_tuple nodes for nested tuple output
5 years ago
lvliang
b576106623
add cache for elim GradNetInner in non-first step
5 years ago
caifubi
0928682655
Profiling support Control Flow
4 years ago
mindspore-ci-bot
1c07ab866b
!15817 Add primal_attr to transfer attributes between forward and backward operators.
From: @liangzhibo
Reviewed-by:
Signed-off-by:
4 years ago
l00591931
befc7a9dea
Add primal_attr to link between forward and backward node attr
5 years ago
zhoufeng
a76d58b52d
hccl decouple
Signed-off-by: zhoufeng <zhoufeng54@huawei.com>
5 years ago
mindspore-ci-bot
f3aaf9b20f
!16103 [MS][LITE] move mindir proto to core
From: @jianghui58
Reviewed-by: @zhoufeng54,@jpc_chenjianping
Signed-off-by: @jpc_chenjianping
5 years ago
jianghui58
d5ecee383a
move mindir proto to core
5 years ago
Ziyan
2a752f24bf
enable not fully using opt shard
5 years ago
zuochuanyong
e7ea343738
add format transform pass on cpu
5 years ago
mindspore-ci-bot
ac912672bf
!15871 Support passing args and/or kwargs for OP CreateInstance.
From: @zh_qh
Reviewed-by: @ginfung,@hwhewei
Signed-off-by: @hwhewei
5 years ago
mindspore-ci-bot
1e7866eb14
!15852 [GraphKernel]Disable GraphKernel in PynativeMode
From: @dayschan
Reviewed-by: @gaoxiong1,@ckey_dou
Signed-off-by: @ckey_dou
5 years ago
Zhang Qinghua
cd7f7d40fb
Support passing args and/or kwargs for OP CreateInstance.
5 years ago
dayschan
490c0521ac
Disable GraphKernel in PynativeMode
5 years ago
jjfeing
88c92cd263
clear parameter when param_info clone
5 years ago
huangbingjian
302746d6a3
fix code check
5 years ago
mindspore-ci-bot
78469f6083
!15356 Support mem reuse in control flow and multi-call subgraphs
From: @liangzelang
Reviewed-by: @zhoufeng54,@kisnwang
Signed-off-by: @kisnwang
5 years ago
liangzelang
052a803c63
adapt to mem reuse
5 years ago
mindspore-ci-bot
ade65bac93
!15312 add the actor link by auto monad
From: @limingqi107
Reviewed-by: @cristoval,@wilfchen
Signed-off-by: @wilfchen
5 years ago
limingqi107
c937a22bda
add the actor link by auto monad
5 years ago
tronzhang
8ff3c16778
add switch for parallel fusion; default is off
5 years ago
mindspore-ci-bot
cd002cb7f7
!14893 enable stitch fusion on bert
From: @r1chardf1d0
Reviewed-by: @gaoxiong1,@ckey_dou
Signed-off-by: @ckey_dou
5 years ago
r1chardf1d0
3b32995936
enable stitch fusion on bert
5 years ago
zhoufeng
f8248d61b9
init plog when opentsd
Signed-off-by: zhoufeng <zhoufeng54@huawei.com>
5 years ago
luopengting
727dc08bfa
fix TAINTED_SCALAR and capitalize constants
1. fix the tainted variable 'path'
2. capitalize constants in env_config_parser
3. move attribute-related constants to common utils.h
5 years ago
mindspore-ci-bot
5d96d0f7e9
!14583 3d graph format select reconstruct
From: @liubuyu
Reviewed-by: @kisnwang,@jjfeing
Signed-off-by: @jjfeing
5 years ago
yepei6
ca03a24083
correct the grammar error
5 years ago
liubuyu
40f34b0d90
3d graph reconstruct
5 years ago
mindspore-ci-bot
0ef2d78411
!14133 tensorprint_debug
From: @yepei6
Reviewed-by:
Signed-off-by:
5 years ago
mindspore-ci-bot
75fdaaa6aa
!14304 [GraphKernel] Dump GraphKernel split info as text; dump akg kernel launch fail message
From: @dayschan
Reviewed-by: @gaoxiong1,@anyrenwei
Signed-off-by: @anyrenwei
5 years ago
yepei6
5da7fb36c5
modify the tensorprint handle create process
5 years ago
dayschan
3c6c30024c
dump graph_kernel_split info
5 years ago
mindspore-ci-bot
7f4994af7c
!14186 Support while bprop
From: @liangzelang
Reviewed-by: @kisnwang,@jjfeing
Signed-off-by: @jjfeing
5 years ago
mindspore-ci-bot
e9ada9fd1d
!14192 add the force transform to avoid the utf8 error
From: @yepei6
Reviewed-by: @kingxian,@kisnwang
Signed-off-by: @kingxian
5 years ago
mindspore-ci-bot
ad140a8bf4
!14084 [GraphKernel] support matmul on D
From: @lingyunli63
Reviewed-by:
Signed-off-by:
5 years ago
yepei6
ce3597b727
add the force transform to avoid utf8 error
5 years ago
liangzelang
ba65fb9f3c
Support non-tail recursive graphs
5 years ago
lingyunli63
4b966ed40d
support matmul on D
5 years ago