| Author | Commit | Message | Date |
| --- | --- | --- | --- |
| huanghui | b4c0ed4b36 | add signle batchnorm fission pass | 6 years ago |
| huanghui | 1d65ae598a | extract const_to_attr_strided_slice_grad pass | 6 years ago |
| zhaozhenlong | 30b93ecbf8 | use reshape as flatten grad | 6 years ago |
| mindspore-ci-bot | 2fef359c4d | !1396 Cancel NoOp optimizer on gpu backend until memory reuse ready (Merge pull request !1396 from chenweifeng/NoOp) | 6 years ago |
| wilfChen | 56d751b5d2 | Cancel NoOp optimizer in GPU until memory reuse ready | 6 years ago |
| huanghui | 5a68eba585 | Refactor LambNextMVWithDecayRule fusion pass | 6 years ago |
| mindspore-ci-bot | a915cc3bd9 | !1225 gpu NoOp optimizer (Merge pull request !1225 from chenweifeng/NoOp) | 6 years ago |
| wilfChen | 23b4b4d106 | Gpu NoOp optimizer | 6 years ago |
| huanghui | d4a82951ed | fix confusionmulgrad fusion pass may create a loop | 6 years ago |
| yujianfeng | 4a0ddaef59 | Support specifying reshape type for batchnorm fused op | 6 years ago |
| changzherui | b323199dc1 | syn code 430 | 6 years ago |
| YuJianfeng | dfa66e4d0c | Check the empty value tuple when converting it to tuple tensor | 6 years ago |
| YuJianfeng | ce2a13fcda | Check topk supported before converting input to attr | 6 years ago |
| yanzhenxiang2020 | 691337a6e1 | add aicpu ops of Reshape/Flatten/Squeeze/ExpandDims/IsFinite | 6 years ago |
| lianliguang | 00e4306518 | convert all tuple output to maketuple | 6 years ago |
| chenfei | e017fd8916 | share mem of paramter between child graph | 6 years ago |
| lvliang | ffe8b5d3ec | pynative-add-op-supported | 6 years ago |
| zhunaipan | 930a1fb0a8 | initial version (Signed-off-by: leonwanghui <leon.wanghui@huawei.com>) | 6 years ago |