mindspore-ci-bot
3f82405d17
!9639 Add Numpy Native to MindSpore
From: @yanglf1121
Reviewed-by:
Signed-off-by:
5 years ago
yanglf1121
59e5afd2bb
Add mindnumpy to mindspore
5 years ago
mindspore-ci-bot
037a121e05
!9691 expand ClipByNormNoDivSum in graph kernel
From: @looop5
Reviewed-by: @gaoxiong1,@ckey_dou
Signed-off-by: @ckey_dou
5 years ago
dayschan
85b69bf91f
Add a float16 restriction in the solution of reduction op's precision problem in graph splitter.
5 years ago
looop5
fa519433ef
expand ClipByNormNoDivSum
5 years ago
dayschan
297f075dca
Fix precision problem
5 years ago
zhupuxu
4f569677b7
redundant_codes
Signed-off-by: zhupuxu <zhupuxu@huawei.com>
5 years ago
mindspore-ci-bot
7b311f7d2a
!9570 Modifications for GraphKernel
From: @dayschan
Reviewed-by: @gaoxiong1,@ckey_dou
Signed-off-by: @ckey_dou
5 years ago
dayschan
6be3cc6f0d
consider atomic_add strategy in graph splitter; fix bugs; fuse and inline single op
5 years ago
looop5
848be9b07c
add tile to expand list
add tile expander
add BroadcastTo in model
fix BroadcastTo op calling error and infer shape
rewrite tile expander
not split broadcast_to
add SqrtGrad expander
5 years ago
dayschan
e5306b913d
GraphKernel Fuser
Refactor the BasicOpsFusion and CompositeOpsFusion to one pass.
Add a pass to eliminate the redundant output.
TODO: rename the file basic_ops_fusion and delete the file composite_ops_fusion
5 years ago
lvliang
28e3121fbc
fix-bug-of-null-output-addr-of-tensor-in-valuenode
5 years ago
tronzhang
2190da9946
support atomic clean and change package for akg.
5 years ago
mindspore-ci-bot
45b705bca0
!9129 support abs() and mean() methods for Tensor object
From: @bairongz
Reviewed-by:
Signed-off-by:
5 years ago
Bairong
623b2e3f99
support abs and mean of Tensor
5 years ago
lvliang
4515e6d92d
fix-bug-of-moments-precision-error-in-pynative
5 years ago
mindspore-ci-bot
ebef1df00b
!8994 split dropout op and expand dropout
From: @zengzitao
Reviewed-by:
Signed-off-by:
5 years ago
zengzitao
3ef0e9f053
substitute dropout with cudnnuniformreal and dropout
5 years ago
Gaoxiong
e4c3d3e0e9
update graph kernel split model
5 years ago
chenfei
369ee9ef9f
add float64 of mixed_precision_cast
5 years ago
mindspore-ci-bot
232dff3598
!8685 [GraphKernel] For fp16 value, declare fp32 first and then cast to fp16 in expander
From: @tronzhang
Reviewed-by:
Signed-off-by:
5 years ago
mindspore-ci-bot
3b946d4eb2
!8678 expand logsoftmax and grad, delete cast in softmax and fix layernorm compute dsl
From: @zengzitao
Reviewed-by: @gaoxiong1,@ryanww
Signed-off-by: @ryanww
5 years ago
tronzhang
80f071e9fa
declare fp32 and then cast to fp16 in expander
5 years ago
tronzhang
9d7494f4df
split shape ops for more fusion opportunity.
5 years ago
zengzitao
266bfa50bf
expand logsoftmax and logsoftmax_grad, delete softmax's cast and fix layernorm op
5 years ago
mindspore-ci-bot
6bdb46c705
!8629 print error info when the function does not meet the AST indentation standard
From: @zhangbuxue
Reviewed-by: @zh_qh,@chenfei52
Signed-off-by: @zh_qh
5 years ago
mindspore-ci-bot
a321f402c8
!8579 Add view function for tensor
From: @liangzhibo
Reviewed-by: @zh_qh,@chenfei52
Signed-off-by: @zh_qh
5 years ago
buxue
a3937c2863
print error info when the function does not meet the AST indentation standard
5 years ago
mindspore-ci-bot
f885f6636f
!8564 expand layernorm_grad
From: @zengzitao
Reviewed-by:
Signed-off-by:
5 years ago
l00591931
7a192973ff
Add view function for tensor
5 years ago
zengzitao
326540cbbd
expand layernorm_grad op
5 years ago
mindspore-ci-bot
c11c79170e
!8554 Add expand_as function to tensor
From: @liangzhibo
Reviewed-by: @zh_qh,@chenfei52
Signed-off-by: @zh_qh
5 years ago
l00591931
ba7ee2aa13
add expand_as function
5 years ago
zengzitao
28f1db74dd
expand maximum_grad minimum_grad dropout_grad op
5 years ago
dayschan
195b1fe8d5
Add Transpose into fusible list.
5 years ago
zengzitao
db27783d54
expand tanh_grad and reduce_mean, fix bug and add test_case in ci
5 years ago
zengzitao
53043ae18f
support expand fused_adam and fused_adam_weight_decay op
5 years ago
mindspore-ci-bot
b3855530e3
!7838 Enumerate function enable tensor as input
Merge pull request !7838 from LiangZhibo/master
5 years ago
l00591931
6f165ee5e3
enumerate function and enumerate test case added
5 years ago
zengzitao
5cfa172720
expand gelu and gelugrad op
5 years ago
mindspore-ci-bot
5c4940cdcc
!7892 Convert non-scalar tensor to parameter
Merge pull request !7892 from DeshiChen/1028_nonscalar_tensor_to_input
5 years ago
zengzitao
febdb1850c
expand bias_add and bias_add_grad op
5 years ago
dayschan
b6c2812a29
Convert non-scalar tensor to parameter
Add a pass `tensor_promotion`.
Fix a bug in CreateKernelInfoFromNewParameter, which reset the KernelInfo by mistake.
What's more:
Update akg
Fix bug in model_builder when reduce axis is an integer.
5 years ago
chenzomi
44bf4c3e37
[ME] format code
5 years ago
jjfeing
b9f97b60b0
fix special tbe op compile
5 years ago
caifubi
d3b978147f
Ascend Dynamic Shape
5 years ago
dayschan
7599686a72
GraphKernel supports multi-output kernels
5 years ago
jjfeing
7dda95d247
set soc version
5 years ago
lingyunli63
dd48f10c3d
add assign ops in composite_topi
5 years ago
wenchunjiang
1a407db3fd
increase tbe compile max process number from 16 to 24
5 years ago