Author | Commit | Message | Date
mindspore-ci-bot | dda51309c2 | !13022 add parallel for some CPU ops (From: @zhao_ting_v; Reviewed-by: @wuxuejian, @liangchenghui; Signed-off-by: @wuxuejian, @liangchenghui) | 5 years ago
mindspore-ci-bot | 33f4c464b7 | !13097 update sponge module and fix some bugs (From: @zhangxinfeng3; Reviewed-by: @wang_zi_dong, @ljl0711; Signed-off-by: @ljl0711) | 5 years ago
mindspore-ci-bot | 54c37bcd61 | !12947 Add MaxPool3D, MaxPool3DGrad, MaxPool3DGradGrad ops for Ascend. (From: @liu_xiao_93; Reviewed-by: @liangchenghui; Signed-off-by: @liangchenghui) | 5 years ago
mindspore-ci-bot | 9add98350e | !13111 Fix conv2d_grad_input ops infer strides. (From: @linqingke; Reviewed-by: @xu-yfei, @jjfeing; Signed-off-by: @xu-yfei, @jjfeing) | 5 years ago
mindspore-ci-bot | c69142fdc1 | !12968 update reshape type for 3d nodes (From: @liubuyu) | 5 years ago
linqingke | 07b50f76ab | fix conv2d_grad_input infer strides. | 5 years ago
mindspore-ci-bot | 54fc5e0d2b | !12234 [GraphKernel] Support pipeline optimization for parallel fusion. (From: @tronzhang) | 5 years ago
zhaoting | c62baec9a4 | add parallel for some CPU ops | 5 years ago
liuxiao93 | 35f6ba9011 | Add MaxPool3D, MaxPool3DGrad, MaxPool3DGradGrad ops for Ascend. | 5 years ago
zhangxinfeng3 | cdbe50af9e | update sponge | 5 years ago
mindspore-ci-bot | 4fcf7a50fa | !13072 fix assign cpu output (From: @fangzehua; Reviewed-by: @kisnwang, @zhoufeng54; Signed-off-by: @zhoufeng54) | 5 years ago
fangzehua | 14a01d327f | fix assign cpu output | 5 years ago
mindspore-ci-bot | 7583b258df | !13032 Add modules of Sponge (From: @zhangxinfeng3; Reviewed-by: @ljl0711, @wang_zi_dong; Signed-off-by: @wang_zi_dong) | 5 years ago
liubuyu | 518818fbef | reshape type for 3d nodes | 5 years ago
mindspore-ci-bot | a85990cbf2 | !13037 remove static worker (From: @anancds; Reviewed-by: @limingqi107, @cristoval; Signed-off-by: @cristoval) | 5 years ago
mindspore-ci-bot | 8de7fbccd7 | !13001 Auto_tune add sync fusion env. (From: @linqingke; Reviewed-by: @jjfeing, @xu-yfei; Signed-off-by: @xu-yfei) | 5 years ago
tronzhang | 7252ffb66b | pipeline optimization for parallel fusion | 5 years ago
chendongsheng | d29f2b2634 | remove static worker | 5 years ago
zhangxinfeng3 | 769243673a | Add some modules of Sponge | 5 years ago
linqingke | 977295162e | auto_tune add sync fusion env. | 5 years ago
mindspore-ci-bot | 70024d3ab1 | !12189 Add CPU Pad op (From: @wanyiming) | 5 years ago
mindspore-ci-bot | a855cb2d24 | !12706 graph kernel parallel building in gpu (From: @wenfangpei) | 5 years ago
mindspore-ci-bot | fa4c19f938 | !13002 3d format bug fix (From: @liubuyu; Reviewed-by: @zhoufeng54, @kisnwang; Signed-off-by: @kisnwang) | 5 years ago
liubuyu | 62aa7d0e87 | bug fix for 3d format | 5 years ago
mindspore-ci-bot | 802e756c9b | !12897 Add float64 support to reducemax grad gpu op (From: @peilin-wang; Reviewed-by: @liangchenghui, @wuxuejian; Signed-off-by: @liangchenghui) | 5 years ago
mindspore-ci-bot | 52cbab202a | !12946 Using cache at the same time with fusion ops. (From: @linqingke) | 5 years ago
mindspore-ci-bot | 8b733dccaa | !12922 biasadd report error when input 5d with nchw format at cpu (From: @wangyanling10) | 5 years ago
wanyiming | dbc0ad13db | cpu_pad_op | 5 years ago
linqingke | 91727124ef | Set same ops cache. | 5 years ago
wangyanling | e6693ea89d | biasadd operator report error when input is not 2d or 4d | 5 years ago
Peilin Wang | dd72f44b27 | tensor_scatter_update new op quick initial commit; fix ci | 5 years ago
mindspore-ci-bot | 9ed9d950e2 | !11446 fix argmax op for cpu and gpu (From: @xcnick) | 5 years ago
mindspore-ci-bot | 6a028cd863 | !12869 auto tune step two tune process (From: @laiyongqiang) | 5 years ago
laiyongqiang | f16cea00a4 | auto tune step two tune process | 5 years ago
mindspore-ci-bot | 9c1e73a5b9 | !12845 Add float64 type support to Greater and Less (From: @peilin-wang) | 5 years ago
liubuyu | 701b181015 | bug fix graph id | 5 years ago
Peilin Wang | 783c57c209 | add float64 support to reducemax grad; fix ci | 5 years ago
mindspore-ci-bot | 87b71c1831 | !12767 Add float64 support to gpu gather* grad ops (From: @peilin-wang) | 5 years ago
mindspore-ci-bot | 504f45566b | !12844 Add float64 support to Absgrad and SqrtGrad (From: @peilin-wang; Reviewed-by: @tom__chen, @robingrosman; Signed-off-by: @robingrosman) | 5 years ago
wenfangpei | d6b3a07b4a | parallel build gpu ops about graph kernel | 5 years ago
mindspore-ci-bot | ab8b7b8bef | !12769 gpu support tensor-rt kernel (From: @wilfchen; Reviewed-by: @cristoval, @limingqi107; Signed-off-by: @limingqi107) | 5 years ago
xcnick | d65a5affba | fix cpu/gpu argmax op | 5 years ago
wilfChen | d7f5e7a571 | gpu support tensor-rt kernel | 5 years ago
mindspore-ci-bot | 4365c332e6 | !12813 unify AvgPoolGrad's MindIR (From: @yuchaojie; Reviewed-by: @kisnwang) | 5 years ago
mindspore-ci-bot | c529cfa427 | !12754 auto tune step one construct json (From: @liubuyu) | 5 years ago
yuchaojie | d2cb3aa1c2 | unify AvgPoolGrad | 5 years ago
Peilin Wang | 98d2c94700 | add float64 support to greater and less | 5 years ago
Peilin Wang | a0645c41fe | add float64 support to absgrad and sqrtgrad | 5 years ago
Peilin Wang | 1e93aaceeb | add float64 support to gather grad and gatherd grad; add float64 support to scatterNd for GatherNd grad; fix typo; left out a file | 5 years ago
mindspore-ci-bot | e58be0de71 | !12737 quantum state evolution operator (From: @donghufeng) | 5 years ago