mindspore-ci-bot
|
74ca49529a
|
!14732 Add L2NormalizeGrad for CPU
From: @he-botao
Reviewed-by: @wuxuejian,@liangchenghui
Signed-off-by:
|
4 years ago |
mindspore-ci-bot
|
789c28d4d8
|
!14728 add l2normalize ops for cpu
From: @yuanwei66
Reviewed-by:
Signed-off-by:
|
4 years ago |
shibeiji
|
5cfab3b1a8
|
fix code review alarms
|
4 years ago |
yuanwei66
|
33870dc46d
|
add l2normalize ops for cpu
|
4 years ago |
hebotao
|
a7b445c50c
|
Add L2NormalizeGradCPUKernel
Add L2NormalizeGradCPUKernel: change the error-tolerance check in the test cases
Add L2NormalizeGradCPUKernel: comment out debug output
Add L2NormalizeGradCPUKernel: revise the computation formula of the backward operator
Add L2NormalizeGradCPUKernel: remove the float16 registration
Add L2NormalizeGradCPUKernel: use relative error
Add L2NormalizeGradCPUKernel: update the backward formula
Add L2NormalizeGradCPUKernel: remove debug output
Clear warnings
Add L2NormalizeGradCPUKernel: add test cases
Add L2NormalizeGradCPUKernel: remove redundant functions
Add L2NormalizeGradCPUKernel: fix the date in the comments
Add L2NormalizeGradCPUKernel: format code
Add L2NormalizeGradCPUKernel: format code, fix cpplint issues
Add L2NormalizeGradCPUKernel: fix cpplint and pylint issues
Add L2NormalizeGradCPUKernel: revise the gradient function to stay consistent with GPU and Ascend.
The revised formula is mathematically questionable, but this has been aligned with wuxuejian, who judged it has no impact, so there is no need to ask the GPU and Ascend sides to change their code.
Add L2NormalizeGradCPUKernel: simplify the test cases
|
4 years ago |
mindspore-ci-bot
|
7ae11f91e6
|
!14791 add ascend lstm test case
From: @woshixiaoli
Reviewed-by: @liangchenghui,@chujinjin
Signed-off-by: @liangchenghui
|
4 years ago |
mindspore-ci-bot
|
5b4685c5ea
|
!14604 [GraphKernel] add some expander ops
From: @chenlei_autodiff
Reviewed-by:
Signed-off-by:
|
4 years ago |
mindspore-ci-bot
|
c907c95da5
|
!14849 fix codedex and bot
From: @fangzehua
Reviewed-by: @wuxuejian,@liangchenghui
Signed-off-by: @wuxuejian
|
4 years ago |
chenlei_autodiff
|
13fbfca6b9
|
[graph kernel] add expander ops.
|
4 years ago |
mindspore-ci-bot
|
0920239699
|
!13475 [GraphKernel]adapt for layernorm in ascend
From: @wenfangpei
Reviewed-by: @gaoxiong1,@anyrenwei
Signed-off-by: @anyrenwei
|
4 years ago |
fangzehua
|
742aa799b0
|
fix codedex and bot
|
4 years ago |
wenfangpei
|
66d28af79e
|
adapt for layernorm in ascend
|
4 years ago |
woshixiaoli
|
3f633348e2
|
add lstm test case
|
4 years ago |
majianwei
|
16932e468e
|
Completion of test cases
|
4 years ago |
mindspore-ci-bot
|
b5bc938deb
|
!12914 [GraphKernel]expander lamb_apply_weight_assign
From: @wenfangpei
Reviewed-by: @anyrenwei,@gaoxiong1,@gaoxiong1
Signed-off-by: @anyrenwei
|
4 years ago |
mindspore-ci-bot
|
f324a9a760
|
!14553 [GraphKernel] refine cast matmul fusion
From: @lingyunli63
Reviewed-by: @gaoxiong1,@ckey_dou
Signed-off-by: @ckey_dou
|
4 years ago |
lingyunli63
|
56390330ac
|
cast_Matmul_fusion: handle the case where the cast cannot be fused forward
|
4 years ago |
wenfangpei
|
a4ad6066b1
|
expander lamb_apply_weight_assign
|
4 years ago |
mindspore-ci-bot
|
8634675e2d
|
!14499 [GraphKernel]split UMonad in inputs of op
From: @wenfangpei
Reviewed-by: @dayschan,@ckey_dou,@gaoxiong1
Signed-off-by: @gaoxiong1
|
4 years ago |
wenfangpei
|
0085a273e7
|
split UMonad in inputs of op
|
4 years ago |
mindspore-ci-bot
|
18d79d35b6
|
!14498 [GraphKernel]remove redundant cast bias for matmul
From: @lingyunli63
Reviewed-by: @gaoxiong1,@ckey_dou
Signed-off-by: @ckey_dou
|
4 years ago |
lingyunli63
|
8b3823b22c
|
optimizeMatmul
|
4 years ago |
mindspore-ci-bot
|
b8e35c663f
|
!14508 add float64 support to SigmoidCrossEntropyWithLogits gpu
From: @TFbunny
Reviewed-by: @tom__chen,@robingrosman
Signed-off-by: @robingrosman
|
4 years ago |
mindspore-ci-bot
|
efb53fb9c0
|
!14183 Support SparseTensorDenseMatmul for CPU
From: @xuguoyang5566
Reviewed-by:
Signed-off-by:
|
4 years ago |
TFBunny
|
4de6b25d23
|
add float64 support to SigmoidCrossEntropyWithLogits and Grad
|
4 years ago |
xuguoyang
|
7df6bfe7dd
|
support sparse tensor dense matmul for CPU
|
4 years ago |
mindspore-ci-bot
|
69526df01e
|
!14314 [GraphKernel] unify graph kernel pass add_atomic_clean on Ascend and GPU back-end
From: @looop5
Reviewed-by: @gaoxiong1,@gaoxiong1,@dylangeng
Signed-off-by: @dylangeng
|
4 years ago |
mindspore-ci-bot
|
ddf75da542
|
!14085 [GraphKernel] add some expander ops
From: @chenlei_autodiff
Reviewed-by:
Signed-off-by:
|
4 years ago |
looop5
|
76d322464d
|
unify graph kernel pass add_atomic_clean on Ascend and GPU back-end
refactor CanActivateAtomicAdd
use smart pointer
|
4 years ago |
chenlei_autodiff
|
f4289d40f3
|
add graph kernel expander ops.
|
4 years ago |
mindspore-ci-bot
|
7149e8c2c9
|
!14045 [Graph Kernel] add compare test case
From: @zengzitao
Reviewed-by: @gaoxiong1
Signed-off-by:
|
4 years ago |
zengzitao
|
72c6dad4ba
|
add compare_test case in gpu ci and update akg submodule
|
4 years ago |
mindspore-ci-bot
|
ad140a8bf4
|
!14084 [GraphKernel] support matmul on D
From: @lingyunli63
Reviewed-by:
Signed-off-by:
|
4 years ago |
lingyunli63
|
4b966ed40d
|
support matmul on D
|
4 years ago |
mindspore-ci-bot
|
f1fb0d9f3a
|
!13833 Add SparseToDense
From: @ZhengQihao3f3f3f
Reviewed-by:
Signed-off-by:
|
4 years ago |
mindspore-ci-bot
|
d2ecf71ace
|
!13693 Slice op only support input_x int32 and float32 at CPU backend
From: @wangyanling10
Reviewed-by:
Signed-off-by:
|
4 years ago |
zhengqihao
|
27f508760b
|
Add SparseToDense op
|
4 years ago |
w00535372
|
761a2e2127
|
Bug fix for ISSUE #I3CN9Q
|
4 years ago |
wangyanling
|
fb64e14265
|
fix slice op bug
|
4 years ago |
zhuyuxiao
|
a11287c332
|
adagrad: support output on gpu
|
4 years ago |
mindspore-ci-bot
|
38b48fb0e8
|
!13608 Add CPU LogSoftMax
From: @zhao_ting_v
Reviewed-by: @wuxuejian
Signed-off-by: @wuxuejian
|
4 years ago |
mindspore-ci-bot
|
5504116718
|
!13647 BroadcastTo add general -1 dim behavior
From: @peilin-wang
Reviewed-by: @liangchenghui,@wuxuejian
Signed-off-by: @liangchenghui
|
4 years ago |
zhaoting
|
8754aaeb74
|
add CPU LogSoftMax
|
4 years ago |
mindspore-ci-bot
|
7bb35d8ce4
|
!13635 add float64 support to Addn gpu
From: @TFbunny
Reviewed-by: @liangchenghui,@wuxuejian
Signed-off-by: @liangchenghui
|
4 years ago |
Peilin Wang
|
6cead43bdf
|
add general -1 dim behavior for BroadcastTo op
|
4 years ago |
wangyanling
|
6268f660fb
|
add cpu broadcast_to op
|
4 years ago |
TFBunny
|
b780e5737c
|
add float64 to Addn gpu
|
4 years ago |
mindspore-ci-bot
|
5b95409022
|
!13512 add some expander ops
From: @zengzitao
Reviewed-by:
Signed-off-by:
|
4 years ago |
mindspore-ci-bot
|
2fadad0875
|
!13121 expander lamb_apply_optimizer_assign
From: @wenfangpei
Reviewed-by:
Signed-off-by:
|
4 years ago |
mindspore-ci-bot
|
8e8f3043f9
|
!12115 IR operators of GPU and CPU are unified as batchnorm
From: @ding_fei_fei
Reviewed-by:
Signed-off-by:
|
4 years ago |