Author | Commit | Message | Date
mindspore-ci-bot | ce89cc5e8b | !11761 Change GatherV2 to Gather (merge from r1.1 to master) (From: @liangzhibo) | 5 years ago
liuxiao93 | 68e9be725e | split optimizer | 5 years ago
mindspore-ci-bot | 9fa0499fa0 | Change GatherV2 to Gather r1.1 to master | 5 years ago
lizhenyu | f17534af08 | ps cache support sparse | 5 years ago
mindspore-ci-bot | ca675c0521 | !11665 [GraphKernel] Add parallel fusion support to master. (From: @tronzhang; Reviewed-by: @gaoxiong1, @ckey_dou; Signed-off-by: @ckey_dou) | 5 years ago
TFBunny | 6cd7dc42e9 | add testcases and dynamic shape to reduce ops | 5 years ago
tronzhang | d078cbfa99 | support parallel fusion | 5 years ago
yujianfeng | 266e960acb | Not do cse for the nodes set recomputed before recompute pass | 5 years ago
mindspore-ci-bot | f8f6421459 | !10968 Add dynamic shape support for the operator Concat (From: @david-he91) | 5 years ago
weiyang | 4029b411c9 | for switch layer | 5 years ago
hedongdong | 8241dfa443 | Add dynamic shape support for the operator Concat | 5 years ago
mindspore-ci-bot | 2ea8527de3 | !11314 add cache embedding for wide&deep model (From: @fangzehua) | 5 years ago
fangzehua | f97e19f23f | add cache pass | 5 years ago
yuchaojie | 1932d87a26 | update some op's attr name | 5 years ago
yuchaojie | b51b3a6764 | update Pool's attr kernel_size, pad_mode | 5 years ago
zhouyuanshen | 26f6daa850 | add new op instancenorm2d | 5 years ago
mindspore-ci-bot | 92a85d1061 | !11075 dynamic op re primitive when infer (From: @liubuyu; Reviewed-by: @kisnwang, @zhoufeng54; Signed-off-by: @zhoufeng54) | 5 years ago
liubuyu | 39cc9e70cd | dynamic op re primitive when infer | 5 years ago
yanzhenxiang2020 | b8b608f672 | fix shape of CTCGreedyDecoder | 5 years ago
liubuyu | 119c7010a4 | insert reformat op | 5 years ago
mindspore-ci-bot | 6d51fc558f | !10391 enable loop sink when no getnext in execution orders (From: @laiyongqiang) | 5 years ago
laiyongqiang | d417dddb24 | enable loop sink when no getnext in execution orders | 5 years ago
fangzehua | 4da4c0fc55 | add dynamic assign, pad_and_shift kernel | 5 years ago
lizhenyu | 4269dcece5 | ps cache support save checkpoint | 5 years ago
mindspore-ci-bot | ffe61081d3 | !10189 fix shape type error when dynamic_kernel shape type is compute_depend (From: @liubuyu; Reviewed-by: @zhoufeng54, @kisnwang; Signed-off-by: @kisnwang) | 5 years ago
wilfChen | 09e10e18bb | momentum weightdecay fusion | 5 years ago
liubuyu | 4d75d7b992 | fix shape type error | 5 years ago
mindspore-ci-bot | d8a64b4ac4 | !9796 Add SpaceToDepth fission pass to fix bug when data type is float16. (From: @liu_xiao_93; Reviewed-by: @liangchenghui, @wuxuejian; Signed-off-by: @liangchenghui) | 5 years ago
liuxiao93 | 2bbd97d334 | Add SpaceToDepth fission pass. | 5 years ago
jjfeing | 1984cf8e20 | unify mindir | 5 years ago
mindspore-ci-bot | be4e91339f | !9661 gpu relu optimize (From: @wilfchen; Reviewed-by: @cristoval, @limingqi107; Signed-off-by: @limingqi107) | 5 years ago
wilfChen | c1d3bd2160 | relu optimize | 5 years ago
zhouyuanshen | e9aca01620 | add support to reduceAny and reduceAll on gpu | 5 years ago
mindspore-ci-bot | 32444fbbd5 | !8870 hccl send receive op (From: @huaweib; Reviewed-by: @jjfeing, @kisnwang; Signed-off-by: @kisnwang) | 5 years ago
liubuyu | e3fa342d72 | support 3d format | 5 years ago
baihuawei | 7d09dff880 | add hccl send recv | 5 years ago
TFbunny | 5e19a642f9 | fix and add testcase for dynamic shape scatteradd/update transpose | 5 years ago
mindspore-ci-bot | c78683a411 | !8981 gatherv2 pad optimizer in dynamic shape scene (From: @yao_yf; Reviewed-by: @stsuteng, @kisnwang; Signed-off-by: @stsuteng) | 5 years ago
yao_yf | 444cb99b40 | gather_v2 pad optimizer pass | 5 years ago
liuxiao93 | 584e241e29 | Adapt ops LinSpace for Ascend. | 5 years ago
lizhenyu | 094f0b2a07 | bugfix: fused batch norm op's input channel nums should be a multiple of 4 | 5 years ago
fangzehua | 69ce58425d | fix reshape dynamic and emb | 5 years ago
LianLiguang | bb6148661f | change mixedprecision of pynative | 5 years ago
liuxiao93 | d471ac491e | Adapt DynamicGRUV2 forward for Ascend new backend. | 5 years ago
jjfeing | 3feffc7d62 | fix ubfusion bug | 5 years ago
mindspore-ci-bot | a5b0d13141 | !8079 support GNMT net, fix dynamic rnn grad fission pass (Merge pull request !8079 from liubuyu/op_support) | 5 years ago
liubuyu | 662976a75d | dynamic rnn fission pass v2 | 5 years ago
liuxiao93 | 45d343257b | Add DynamicGRU. | 5 years ago
hwjiaorui | 3698b9fd54 | register proximal adagrad ds; import proximal adagrad ds; tiling map; style check bug; add op set | 5 years ago
VectorSL | 509b25ef1e | gpu nhwc | 5 years ago