Zichun Ye
a7d89f6686
add graph kernel user-defined op support
fix code check
4 years ago
dayschan
9add26ad99
Add expanders in c++ code
transplant the op expander code from python to c++, based on LiteGraph.
the c++ expander will be called in priority if it is registered in OpExpanderFactory.
add two examples, BiasAdd and ExpandDims.
remove BiasAdd from the python expanders.
since ExpandDims is still imported by other ops (e.g. BatchNorm), we don't remove it for now.
4 years ago
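The lookup order described in this commit (a C++ expander is preferred over the legacy Python one when both exist) can be sketched as a small registry. This is an illustrative sketch with hypothetical method names, not MindSpore's actual `OpExpanderFactory` API:

```python
# Hypothetical sketch: an expander factory where an expander registered
# in the "C++" registry takes priority over the Python fallback.

class OpExpanderFactory:
    """Maps op names to expander callables; the C++ registry wins."""

    def __init__(self):
        self._cpp_registry = {}     # expanders transplanted to C++
        self._python_registry = {}  # legacy Python expanders

    def register_cpp(self, op_name, expander):
        self._cpp_registry[op_name] = expander

    def register_python(self, op_name, expander):
        self._python_registry[op_name] = expander

    def get(self, op_name):
        # The C++ expander is called in priority if it was registered.
        if op_name in self._cpp_registry:
            return self._cpp_registry[op_name]
        return self._python_registry.get(op_name)


factory = OpExpanderFactory()
factory.register_python("ExpandDims", lambda graph: "python-expanded")
factory.register_cpp("BiasAdd", lambda graph: "cpp-expanded")

print(factory.get("BiasAdd")(None))     # cpp registry takes priority
print(factory.get("ExpandDims")(None))  # falls back to python registry
```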
chenlei_autodiff
0271535429
[GraphKernel] fix bert and add graph kernel ops.
4 years ago
i-robot
9629b4ebd9
!21071 [Graph Kernel] Add Low Precision Optimization
Merge pull request !21071 from cyun/master_729
4 years ago
cy
4105a247b7
add low precision
4 years ago
Yang Jiao
fd7ab25fc2
reorganize empty graph
4 years ago
Zichun Ye
22172f18bc
update graph kernel support for argmax/argmin
fix pylint problem
fix conflict
fix op list
fix check warning
fix code based on review comments
update akg commit
fix check warning
4 years ago
Yang Jiao
e2cfc516eb
refactor arithmetic simplify
4 years ago
lingyunli63
50a66ae476
isnan isfinite isinf squaresumall identity oneslike
4 years ago
i-robot
26027b89fe
!19806 add graph kernel erf, erfc, floor, floordiv, mod, floormod, div
Merge pull request !19806 from 杨林枫/basic_graph_kernels
4 years ago
yanglf1121
c30b1e6d06
add graph kernel div, floordiv, mod, floormod, floor
4 years ago
i-robot
7d2a07a2bd
!20480 Add LiteGraph for graphkernel
Merge pull request !20480 from DeshiChen/0618_litegraph
4 years ago
dayschan
137608b518
Add LiteGraph for graphkernel
Add a subdirectory "model" in the "backend/optimizer/graph_kernel" for litegraph.
Implement two interfaces "AnfGraph2LiteGraph" and "LiteGraph2AnfGraph".
The litegraph will be the base data structure when we migrate the GraphKernel code
from python("mindspore/_extends/graph_kernel") to c++.
4 years ago
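The two interfaces this commit names, "AnfGraph2LiteGraph" and "LiteGraph2AnfGraph", amount to lowering a backend graph into a flat intermediate structure and rebuilding it. A minimal, hypothetical round-trip sketch (the real MindSpore data structures are far richer than this):

```python
# Hypothetical sketch of the LiteGraph round-trip: a graph is flattened
# into a list of (op, inputs) nodes and can be rebuilt losslessly.

class LiteNode:
    def __init__(self, op, inputs):
        self.op = op            # primitive name, e.g. "Add"
        self.inputs = inputs    # indices of producer nodes

class LiteGraph:
    def __init__(self, nodes):
        self.nodes = nodes

def anf_graph_to_lite_graph(anf_nodes):
    """AnfGraph2LiteGraph: flatten (op, inputs) pairs into a LiteGraph."""
    return LiteGraph([LiteNode(op, list(ins)) for op, ins in anf_nodes])

def lite_graph_to_anf_graph(lite_graph):
    """LiteGraph2AnfGraph: rebuild the (op, inputs) pair list."""
    return [(n.op, list(n.inputs)) for n in lite_graph.nodes]

# Round-trip check: Add(p0, p1) survives both conversions unchanged.
anf = [("Parameter", []), ("Parameter", []), ("Add", [0, 1])]
print(lite_graph_to_anf_graph(anf_graph_to_lite_graph(anf)))
```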
wangrao124
7cddde47b0
!215 add ops: LogicalNot, LogicalAnd, LogicalOr, NotEqual, EqualCount, Asinh, Acosh
* add ops: LogicalNot, LogicalAnd, LogicalOr, NotEqual, EqualCount, Asinh, Acosh
4 years ago
chenlei_autodiff
7d55cef106
[GraphKernel] add sponge ops.
4 years ago
dayschan
149dab39c5
Add expander for AddN; update akg submodule
4 years ago
lingyunli63
a995bea507
recompute_fuse
4 years ago
zengzitao
8064de7931
fix bug where maximum_grad and minimum_grad input_shape is not equal to output_shape
4 years ago
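The shape mismatch behind this fix arises from broadcasting: when an input of Maximum/Minimum was broadcast, the gradient flowing back has the output shape and must be reduce-summed over the broadcast axes to recover the input shape. A hedged, plain-list illustration for a (1, n) input broadcast against an (m, n) output; the helper name is hypothetical:

```python
# Hypothetical sketch: reduce an (m x n) gradient back to a (1 x n)
# input shape by summing over the broadcast axis, so that the gradient
# shape matches the input shape again.

def reduce_grad_to_input_shape(grad, input_shape):
    """Sum grad (m x n nested lists) over axis 0 if input was (1, n)."""
    m, n = len(grad), len(grad[0])
    if input_shape == (1, n):
        return [[sum(grad[i][j] for i in range(m)) for j in range(n)]]
    return grad  # shapes already match; nothing to reduce

grad_out = [[1.0, 2.0], [3.0, 4.0]]   # gradient with output shape (2, 2)
grad_in = reduce_grad_to_input_shape(grad_out, (1, 2))
print(grad_in)  # [[4.0, 6.0]] -- now matches the (1, 2) input shape
```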
i-robot
07f58b0b46
!17626 matmul to mul
Merge pull request !17626 from lingyunli63/matmul_to_mul
4 years ago
looop5
a5cb14261d
recover several graph kernel ci testcases
4 years ago
lingyunli63
4f34e537a0
replace matmul/batchmatmul by mul when k is 1
4 years ago
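The rewrite in this commit is sound because when the reduction dimension k is 1, a matmul of A (m x 1) with B (1 x n) contains no summation: each output element is just A[i][0] * B[0][j], i.e. an elementwise broadcast Mul. A minimal nested-list check (function names are illustrative, not the pass's API):

```python
# Illustrative check: matmul with k == 1 equals a broadcast Mul.

def matmul_k1(a_col, b_row):
    """Matmul of an (m x 1) column with a (1 x n) row, k == 1."""
    return [[a_col[i][0] * b_row[0][j] for j in range(len(b_row[0]))]
            for i in range(len(a_col))]

def broadcast_mul(a_col, b_row):
    """The equivalent broadcast Mul that the pass substitutes."""
    return [[a * b for b in b_row[0]] for [a] in a_col]

A = [[2.0], [3.0]]       # shape (2, 1)
B = [[4.0, 5.0, 6.0]]    # shape (1, 3)
print(matmul_k1(A, B))   # [[8.0, 10.0, 12.0], [12.0, 15.0, 18.0]]
print(matmul_k1(A, B) == broadcast_mul(A, B))  # True
```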
wenfangpei
fd84c20a6a
bug fix in lamb
4 years ago
mindspore-ci-bot
f91a365564
!16322 [GraphKernel] Enable matmul for gpu
From: @lingyunli63
Reviewed-by:
Signed-off-by:
4 years ago
lingyunli63
afc69b16f7
enable gpu gk MatMul and insert pad/unpad
5 years ago
r1chardf1d0
6609f706fe
update akg && set random seed for stitch case
4 years ago
r1chardf1d0
b1be842d2b
add stitch fusion case in ci
4 years ago
wenfangpei
db8256e61f
adapt for logsoftmax in ascend
4 years ago
mindspore-ci-bot
92355554ed
!15412 [GraphKernel]adapt expanders of some ops from gpu to ascend
From: @wenfangpei
Reviewed-by: @gaoxiong1,@ckey_dou
Signed-off-by: @ckey_dou
4 years ago
wenfangpei
c41875b318
adapt expanders of some ops from gpu to ascend
4 years ago
dingpeifei
d27ad7e797
upgrade_ascend_0428
4 years ago
looop5
e88cdc84ec
enhancement reorder_ops pass to support reordering cast and type insensitive operators
support reordering (castup, type-insensitive) to (type-insensitive, castup)
refactor reorder_ops
fix compiling
move reorder_ops pass to later
fix abstract
refactor
fix node input num bug
4 years ago
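The reordering idea above can be sketched as a pass over a linear op sequence: a cast-up followed by a type-insensitive op (one whose result depends only on values, not on compute precision, e.g. Reshape, Transpose, Neg) can be swapped so the op runs before the cast and stays in the cheaper low precision. The op names and the single linear pass are simplifying assumptions:

```python
# Simplified sketch of the reorder_ops idea: swap (CastUp, op) pairs
# when the op is type-insensitive, so the op executes in low precision.

TYPE_INSENSITIVE = {"Reshape", "Transpose", "Neg"}

def reorder_castup(ops):
    """One pass over a linear op sequence, swapping (CastUp, op) pairs."""
    ops = list(ops)
    i = 0
    while i + 1 < len(ops):
        if ops[i] == "CastUp" and ops[i + 1] in TYPE_INSENSITIVE:
            ops[i], ops[i + 1] = ops[i + 1], ops[i]
        i += 1
    return ops

print(reorder_castup(["CastUp", "Reshape", "Add"]))
# ['Reshape', 'CastUp', 'Add'] -- Reshape now runs before the cast-up
```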
lingyunli63
c48c2430f0
fuse matmul and elementwise in graphkernel
5 years ago
chenlei_autodiff
13fbfca6b9
[graph kernel] add expander ops.
5 years ago
wenfangpei
66d28af79e
adapt for layernorm in ascend
5 years ago
mindspore-ci-bot
b5bc938deb
!12914 [GraphKernel]expander lamb_apply_weight_assign
From: @wenfangpei
Reviewed-by: @anyrenwei,@gaoxiong1,@gaoxiong1
Signed-off-by: @anyrenwei
5 years ago
mindspore-ci-bot
f324a9a760
!14553 [GraphKernel] refine cast matmul fusion
From: @lingyunli63
Reviewed-by: @gaoxiong1,@ckey_dou
Signed-off-by: @ckey_dou
5 years ago
lingyunli63
56390330ac
cast_Matmul_fusion, when cast cannot fuse forward
5 years ago
wenfangpei
a4ad6066b1
expander lamb_apply_weight_assign
5 years ago
mindspore-ci-bot
8634675e2d
!14499 [GraphKernel]split UMonad in inputs of op
From: @wenfangpei
Reviewed-by: @dayschan,@ckey_dou,@gaoxiong1
Signed-off-by: @gaoxiong1
5 years ago
wenfangpei
0085a273e7
split UMonad in inputs of op
5 years ago
lingyunli63
8b3823b22c
optimizeMatmul
5 years ago
mindspore-ci-bot
69526df01e
!14314 [GraphKernel] unify graph kernel pass add_atomic_clean on Ascend and GPU back-end
From: @looop5
Reviewed-by: @gaoxiong1,@gaoxiong1,@dylangeng
Signed-off-by: @dylangeng
5 years ago
mindspore-ci-bot
ddf75da542
!14085 [GraphKernel] add some expander ops
From: @chenlei_autodiff
Reviewed-by:
Signed-off-by:
5 years ago
looop5
76d322464d
unify graph kernel pass add_atomic_clean on Ascend and GPU back-end
refactor CanActivateAtomicAdd
use smart pointer
5 years ago
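The reason a reduction lowered to atomic adds needs a "clean" pass at all: several blocks accumulate partial sums into the same output element, so the buffer must be zeroed first or stale values leak into the result. A hedged simulation (names are illustrative, not MindSpore's pass API):

```python
# Illustrative simulation of add_atomic_clean: zero the output buffer
# before blocks accumulate partial sums into it with atomic adds.

def atomic_reduce(partials, out_buffer, do_clean):
    if do_clean:                      # the atomic-clean step
        for i in range(len(out_buffer)):
            out_buffer[i] = 0.0
    for chunk in partials:            # each "block" atomically adds
        for i, v in enumerate(chunk):
            out_buffer[i] += v
    return out_buffer

stale = [100.0, 100.0]                # buffer with stale contents
print(atomic_reduce([[1.0, 2.0], [3.0, 4.0]], stale, do_clean=True))
# [4.0, 6.0] -- without the clean it would be [104.0, 106.0]
```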
chenlei_autodiff
f4289d40f3
add graph kernel expander ops.
5 years ago
mindspore-ci-bot
7149e8c2c9
!14045 [Graph Kernel] add compare test case
From: @zengzitao
Reviewed-by: @gaoxiong1
Signed-off-by:
5 years ago
zengzitao
72c6dad4ba
add compare_test case in gpu ci and update akg submodule
5 years ago
lingyunli63
4b966ed40d
support matmul on D
5 years ago
mindspore-ci-bot
5b95409022
!13512 add some expander ops
From: @zengzitao
Reviewed-by:
Signed-off-by:
5 years ago
wenfangpei
043a558ae2
expander lamb_apply_optimizer_assign
5 years ago