mindspore-ci-bot
d04f3b9a49
!2748 Change order params to exactly match the group params
Merge pull request !2748 from ghzl/change-order-params-only-equal-to-group-param
5 years ago
guohongzilong
652093642e
change order params to match the group params
5 years ago
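For context on the two commits above: MindSpore optimizers accept grouped parameters as a list of dicts, and an 'order_params' entry fixes the update order; these checks require that entry to contain exactly the grouped parameters. A minimal sketch, hedged against the 2020-era nn API (the network and the grouping are illustrative):

    import mindspore.nn as nn

    net = nn.Dense(3, 4)  # any Cell with trainable params works
    all_params = net.trainable_params()
    weights = [p for p in all_params if 'weight' in p.name]
    biases = [p for p in all_params if 'bias' in p.name]

    # Each group may override lr/weight_decay; the 'order_params'
    # entry must cover exactly the parameters listed in the groups,
    # which is what these checks enforce.
    group_params = [{'params': weights, 'weight_decay': 0.01},
                    {'params': biases, 'lr': 0.01},
                    {'order_params': all_params}]
    opt = nn.Momentum(group_params, learning_rate=0.1, momentum=0.9)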
wangnan39@huawei.com
68bd5cf6a1
add CPU sparse optimizer ops with no return value
5 years ago
wangnan39@huawei.com
172728a6a6
support weight decay for sparse optimizers
5 years ago
lilei
12e330cd20
fix group params not passed as a list
5 years ago
mindspore-ci-bot
ca1d2436b8
!2286 enable optimizer parallel with broadcast
Merge pull request !2286 from gziyan/optimizer_parallel
5 years ago
mindspore-ci-bot
4c4586ea6f
!2399 fix param KeyError in group params
Merge pull request !2399 from ghzl/fix-params-keyerror-in-group-params
5 years ago
Ziyan
0925e35252
enable optimizer parallel with broadcast
5 years ago
guohongzilong
90639a2a44
fix params KeyError in group params
5 years ago
mindspore-ci-bot
0478b7d191
!2303 optimize LARS interface
Merge pull request !2303 from gziyan/modify_lars_interface
5 years ago
Ziyan
41ddc153a6
modify LARS interface
5 years ago
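A hedged sketch of the LARS interface these two commits reshape: nn.LARS wraps a base optimizer and rescales gradients layer-wise before the inner update is applied (argument values are illustrative, not from the commit):

    import mindspore.nn as nn

    net = nn.Dense(3, 4)
    # LARS is a wrapper optimizer: it computes a per-layer trust
    # ratio and scales the gradients before Momentum applies them.
    base = nn.Momentum(net.trainable_params(), learning_rate=0.1, momentum=0.9)
    opt = nn.LARS(base, epsilon=1e-5, coefficient=0.001)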
liangzelang
76ba5a643f
fix type cast bug
5 years ago
gong chen
a6dfa281ea
Init GraphKernel.
- It provides a unified style for users to express graphs and kernels.
- It provides a unified IR for developers to represent graphs and kernels.
- It breaks the boundary between graph and kernel.
- It provides more opportunities for compile optimization.
5 years ago
mindspore-ci-bot
60cd188ab8
!2381 fix some type cast bugs
Merge pull request !2381 from liangzelang/dev
5 years ago
liangzelang
bbfab3ed7c
fix some type cast bugs
5 years ago
panyifeng
3c2057297e
support multiple params for tuple grad
5 years ago
lilei
497067d7b2
add sparse proximal_ada_grad optimizer
5 years ago
guohongzilong
1702bdfc21
change MultitypeFuncGraph to an internal interface
5 years ago
liuxiao
df63a3195d
fix input value check for SparseApplyFtrl and SparseApplyAdagrad
5 years ago
mindspore-ci-bot
2d84011504
!2071 optimizers support loss scale for sparse situations
Merge pull request !2071 from wangnan39/support_loss_scale_for_sparse_optimizer
5 years ago
mindspore-ci-bot
5aeba82af3
!2112 add warmup_steps param check in AdamWeightDecayDynamicLR optimizer
Merge pull request !2112 from yoonlee666/adam
5 years ago
yoonlee666
799f24b2d1
add warmup_steps param check
5 years ago
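The added check is presumably along these lines, validating warmup_steps when the optimizer is constructed (a sketch, not the literal source):

    # Sketch of the added validation: warmup_steps must be a
    # non-negative int, otherwise construction fails fast.
    def check_warmup_steps(warmup_steps):
        if not isinstance(warmup_steps, int) or warmup_steps < 0:
            raise ValueError("warmup_steps should be a non-negative integer, "
                             "but got {!r}".format(warmup_steps))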
mindspore-ci-bot
000e3672ba
!2110 update proximal_ada_grad optimizer learning_rate
Merge pull request !2110 from lilei/add_proximal_ada_grad_optimizer
5 years ago
wangnan39@huawei.com
d4e3d69f37
support loss scale for sparse situations
5 years ago
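The user-facing effect of the loss-scale commit above, sketched against the standard nn.Optimizer signature: the optimizer divides incoming gradients by loss_scale before updating (values illustrative):

    import mindspore.nn as nn

    net = nn.Dense(3, 4)
    # With loss_scale set, gradients (dense or, after this change,
    # row-sparse) are divided by 1024.0 inside the optimizer,
    # undoing the scaling that was applied to the loss.
    opt = nn.Adam(net.trainable_params(), learning_rate=1e-3, loss_scale=1024.0)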
lilei
7e4bdf6add
update proximal_ada_grad optimizer learning_rate
5 years ago
mindspore-ci-bot
3536185f5b
!2007 add lazy adam optimizer and support sparse adam & ftrl for cpu backend
Merge pull request !2007 from wangnan39/add_lazy_adam_optim_and_support_sparse_admm_for_cpu_backend
5 years ago
wangnan39@huawei.com
4042f16ce4
add lazy adam optim and support sparse adam & ftrl for cpu backend
5 years ago
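A minimal usage sketch of the LazyAdam optimizer this commit introduces: for sparse gradients it updates only the touched rows, which is the cheap path for embedding-style parameters (usage assumed from the public nn API):

    import mindspore.nn as nn

    net = nn.Dense(3, 4)
    # LazyAdam behaves like ordinary Adam for dense gradients and
    # applies row-wise updates when the gradient is sparse.
    opt = nn.LazyAdam(net.trainable_params(), learning_rate=1e-3)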
liuxiao
52790b74e6
Add some descriptions to the optimizer API.
5 years ago
lilei
36d9e353a5
add proximal_ada_grad optimizer
5 years ago
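A hedged usage sketch of the ProximalAdagrad optimizer added above; l1 and l2 are the proximal regularization strengths (values illustrative):

    import mindspore.nn as nn

    net = nn.Dense(3, 4)
    # Adagrad update followed by a proximal step for the l1/l2 terms.
    opt = nn.ProximalAdagrad(net.trainable_params(), learning_rate=1e-3,
                             l1=1e-4, l2=1e-4)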
mindspore-ci-bot
5499161531
!1862 fixed validators for ApplyRMSProp, CumProd, CumSum, ReduceProd, etc.
Merge pull request !1862 from jiangjinsheng/issue_doc
5 years ago
mindspore-ci-bot
10fd781b15
!1831 Add order parameter function in group params
Merge pull request !1831 from ghzl/add-oder-parameters-in-group-functions
5 years ago
mindspore-ci-bot
994b1d83c9
!1890 Fix some bugs for issues.
Merge pull request !1890 from liuxiao/fix-for-issuse
5 years ago
mindspore-ci-bot
eaaf824f18
fix LARS weight decay computation error
Merge pull request !1896 from gziyan/fix_lars_weight_decay
5 years ago
jiangjinsheng
51affc2f1b
fixed validator for CumProd, ReduceProd, ApplyRMSProp
5 years ago
Ziyan
94b78fdf1b
fix LARS weight decay computation error
5 years ago
liuxiao
26c231b734
fix some bugs for issues.
5 years ago
guohongzilong
85a06b00c6
add order parameter function in group params
5 years ago
mindspore-ci-bot
1c640face9
!1826 fix bug when check learning_rate in AdamWeightDecayDynamicLR
Merge pull request !1826 from wangnan39/fix_lr_check_bug_in_adamweightdecay_dynamic_lr
5 years ago
jiangjinsheng
eb4571a67f
fixed LeakyReLU, Optimizer
5 years ago
wangnan39@huawei.com
c9b7d95c2c
fix lr check bug in AdamWeightDecayDynamicLR
5 years ago
shibeiji
178952afbc
modify the Adam optimizer and the BERT script to match the fusion rule patterns
5 years ago
mindspore-ci-bot
0426ed057b
!1777 Remove ZerosLikeTensor and sub with ZerosTensor
Merge pull request !1777 from BowenK/master
5 years ago
BowenK
96379faa3a
Remove ZerosLikeTensor and sub with ZerosLike
5 years ago
yoonlee666
994b16c4cc
adjust warmup_steps in AdamWeightDecayDynamicLR
5 years ago
mindspore-ci-bot
e8b14d70c7
!1542 [pynative] fix `MultitypeFuncGraph` and `HyperMap` in pynative mode
Merge pull request !1542 from vlne-v1/I1GZ0B-multitype-funcgraph-bug
5 years ago
Wei Luning
ebf02dd528
fix `MultitypeFuncGraph` and `HyperMap` in pynative mode
5 years ago
mindspore-ci-bot
2a6a3e012c
!1555 fix bug in lamb warmup step check
Merge pull request !1555 from wangnan39/fix_bug_in_check_lamb_warmup_step
5 years ago
wangnan39@huawei.com
810ccf80d8
fix bug in lamb warmup step check
5 years ago
chenhaozhe
f65913d62a
fix performance of BERT
5 years ago
guohongzilong
8585b55a65
add check for group parameters
5 years ago