mindspore-ci-bot | 72c3317c4f | 5 years ago
!13787 Add input/output passes for the BatchNorm backward operator on the Ascend platform in PyNative mode
From: @ding_fei_fei
Reviewed-by: @kingxian, @lilongfei15
Signed-off-by: @kingxian

mindspore-ci-bot | 74f258b3bf | 5 years ago
!14074 r1.2 Fix float16 support problem and add raises for Batch Dot ops
From: @anrui-wang
Reviewed-by: @c_34, @wuxuejian
Signed-off-by: @wuxuejian

zlq2020 | 40deecd153 | 5 years ago
add test_lenet_quant.py testcase

mindspore-ci-bot | 496ba58ba3 | 5 years ago
!14104 reduce ut time of cutmix_batch and ConcatOp
From: @luoyang42
Reviewed-by: @liucunwei, @pandoublefeng
Signed-off-by: @liucunwei

YangLuo | 5b8fae497e | 5 years ago
reduce ut time of cutmix_batch

mindspore-ci-bot | 071884fc3f | 5 years ago
!14032 add atomic clean to clear allreduce input addr
From: @lvchangquan
Reviewed-by: @kisnwang, @chujinjin
Signed-off-by: @chujinjin

mindspore-ci-bot | 2af439efe8 | 5 years ago
!13959 Enable zero dimension when it comes out of an operator
From: @liangzhibo
Reviewed-by: @zh_qh
Signed-off-by: @zh_qh

w00535372 | 1ae00a01da | 5 years ago
Bug fix for ISSUE #I3CN9Q

l00591931 | 88e5658587 | 5 years ago
Tensor zero dimension

lvchangquan | 1531271623 | 5 years ago
use op atomic clean to clear input addr in launch allreduce

dingpeifei | 8602e97923 | 5 years ago
Add input/output passes for the BatchNorm backward operator on the Ascend platform in PyNative mode

xsmq | 1e3a20c5c9 | 5 years ago
offline test_lenet_quant

mindspore-ci-bot | fe52fb4668 | 5 years ago
!13902 update applyAdagrad: code sync from master to r1.2
From: @zyx5256
Reviewed-by: @wuxuejian, @liangchenghui
Signed-off-by: @wuxuejian

mindspore-ci-bot | d7bd79244a | 5 years ago
!13752 Add check of ResizePreserveARWithFiller operation to r1.2
From: @shenwei41

mindspore-ci-bot | f0bb74c432 | 5 years ago
!13904 check whether the communication unit has been initialized and fix auto parallel weight init seed setting
From: @yao_yf
Reviewed-by: @stsuteng, @kisnwang
Signed-off-by: @stsuteng

zhuyuxiao | fd93ca0dcd | 5 years ago
adagrad: support output on gpu

mindspore-ci-bot | 24a0bfb6a3 | 5 years ago
!13793 remove control_depend from py file
From: @huangbingjian

yao_yf | 6523c69b37 | 5 years ago
check whether initialization is done in the distributed scenario
check_communication_init_and_fix_auto_parallel_set_seed

shenwei41 | a32a483e97 | 5 years ago
Add check of ResizePreserveARWithFiller on r1.2

huangbingjian | 5a73a26fee | 5 years ago
remove control_depend from py file

mindspore-ci-bot | 12db983888 | 5 years ago
!13759 fix ST failure of resnet thor of r1.2
From: @wangmin0104
Reviewed-by: @kisnwang, @wang_zi_dong
Signed-off-by: @wang_zi_dong

changzherui | cef469b6e2 | 5 years ago
modify python ut for r1.2

mindspore-ci-bot | efdf9638e5 | 5 years ago
!13778 add glibcxx param
From: @yepei6
Reviewed-by: @zhoufeng54, @kingxian
Signed-off-by: @kingxian

yepei6 | 48d50d97e6 | 5 years ago
add ENABLE_GLIBCXX param

mindspore-ci-bot | 4cba978b05 | 5 years ago
!13716 fix numpy native cpu ci error on branch 1.2
From: @yanglf1121
Reviewed-by: @guoqi1024, @liangchenghui
Signed-off-by: @liangchenghui

mwang | 34156d24d5 | 5 years ago
fix thor

mindspore-ci-bot | b12b2248e8 | 5 years ago
!13696 show the accurate code line when using an uninitialized variable in for and while loops
From: @zhangbuxue
Reviewed-by: @zh_qh, @ginfung
Signed-off-by: @zh_qh

yanglf1121 | f41687bde9 | 5 years ago
fix numpy_native ci error on cpu

mindspore-ci-bot | d6fb43e148 | 5 years ago
!13653 Add check to rgbtogray in r1.2
From: @shenwei41
Reviewed-by: @tiancixiao, @liucunwei, @heleiwang
Signed-off-by: @liucunwei

buxue | de343a0e00 | 5 years ago
show the accurate code line when using an uninitialized variable in for loops
(cherry picked from commit e3056ed9b2)

mindspore-ci-bot | df4a3cdf22 | 5 years ago
!13619 cpp api modify
From: @zhoufeng54

lixian | 9722a014c0 | 5 years ago
refactor cpp context, add string tensor, add get tensor by name

mindspore-ci-bot | b0f5781477 | 5 years ago
!13613 add CPU LogSoftMax
From: @zhao_ting_v
Reviewed-by: @wuxuejian, @liangchenghui
Signed-off-by: @wuxuejian

shenwei41 | 8b5871ef44 | 5 years ago
Add check to rgbtogray in r1.2

mindspore-ci-bot | e557d81c4f | 5 years ago
!13640 [MD] Fix Canny when the ksize is set to 7
From: @tiancixiao
Reviewed-by: @heleiwang, @liucunwei
Signed-off-by: @liucunwei

zhaoting | 8754aaeb74 | 5 years ago
add CPU LogSoftMax

mindspore-ci-bot | 64d9b5169a | 5 years ago
!13617 dynamic shape bugfix
From: @liubuyu
Reviewed-by: @zhoufeng54, @jjfeing
Signed-off-by: @jjfeing

Xiao Tianci | 16ee8891ab | 5 years ago
fix canny when ksize is set to 7

xsmq | f98109aa5a | 5 years ago
adjust performance of smoke bert_thor

liubuyu | f303f5ff6e | 5 years ago
dynamic shape bug fix

mindspore-ci-bot | 0451a800bc | 5 years ago
!13571 Added seed to prevent random failures in ut
From: @ezphlow

mindspore-ci-bot | 5b95409022 | 5 years ago
!13512 add some expander ops
From: @zengzitao

mindspore-ci-bot | 2fadad0875 | 5 years ago
!13121 expander lamb_apply_optimizer_assign
From: @wenfangpei

mindspore-ci-bot | 8e8f3043f9 | 5 years ago
!12115 IR operators of GPU and CPU are unified as batchnorm
From: @ding_fei_fei

mindspore-ci-bot | 1d505ebad3 | 5 years ago
!13528 Optimize the functions of the log module
From: @askmiao
Reviewed-by: @ginfung, @zh_qh
Signed-off-by: @zh_qh

askmiao | 4e39eab473 | 5 years ago
modify the log module

wenfangpei | 043a558ae2 | 5 years ago
expander lamb_apply_optimizer_assign

Eric | a3c98d9d59 | 5 years ago
Added fix for random core dump

yangwei | e34b2873fa | 5 years ago
set abstract for maketuple

zengzitao | d0a656f3cd | 5 years ago
add some expander ops