
!3422 modify annotation: wegith_decay -> weight_decay

Merge pull request !3422 from lilei/modify_annotation_0724
tags/v0.7.0-beta
mindspore-ci-bot committed on Gitee 5 years ago
commit 8d0be0ae1a
2 changed files with 2 additions and 2 deletions:
  1. mindspore/nn/optim/ftrl.py (+1, -1)
  2. mindspore/nn/optim/proximal_ada_grad.py (+1, -1)

mindspore/nn/optim/ftrl.py (+1, -1)

@@ -144,7 +144,7 @@ class FTRL(Optimizer):
     l2 (float): l2 regularization strength, must be greater than or equal to zero. Default: 0.0.
     use_locking (bool): If True use locks for update operation. Default: False.
     loss_scale (float): Value for the loss scale. It should be equal to or greater than 1.0. Default: 1.0.
-    wegith_decay (float): Weight decay value to multiply weight, should be in range [0.0, 1.0]. Default: 0.0.
+    weight_decay (float): Weight decay value to multiply weight, should be in range [0.0, 1.0]. Default: 0.0.

 Inputs:
     - **grads** (tuple[Tensor]) - The gradients of `params` in optimizer, the shape is as same as the `params`
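For context, a minimal usage sketch of the corrected keyword, assuming the v0.7-era FTRL signature; the toy network and the values shown are hypothetical (they mirror the documented defaults):

import mindspore.nn as nn

# Hypothetical toy network; any Cell with trainable parameters works.
net = nn.Dense(16, 4)

opt = nn.FTRL(
    params=net.trainable_params(),
    l1=0.0,             # l1 regularization strength, must be >= 0.0
    l2=0.0,             # l2 regularization strength, must be >= 0.0
    use_locking=False,  # whether to use locks for the update operation
    loss_scale=1.0,     # should be equal to or greater than 1.0
    weight_decay=0.0,   # the keyword this PR corrects in the docstring; in [0.0, 1.0]
)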


mindspore/nn/optim/proximal_ada_grad.py (+1, -1)

@@ -99,7 +99,7 @@ class ProximalAdagrad(Optimizer):
     l2 (float): l2 regularization strength, must be greater than or equal to zero. Default: 0.0.
     use_locking (bool): If True use locks for update operation. Default: False.
     loss_scale (float): Value for the loss scale. It should be not less than 1.0. Default: 1.0.
-    wegith_decay (float): Weight decay value to multiply weight, should be in range [0.0, 1.0]. Default: 0.0.
+    weight_decay (float): Weight decay value to multiply weight, should be in range [0.0, 1.0]. Default: 0.0.
 Inputs:
     - **grads** (tuple[Tensor]) - The gradients of `params` in optimizer, the shape is as same as the `params`
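Similarly, a minimal sketch for ProximalAdagrad, again assuming the v0.7-era signature; the network and values are hypothetical:

import mindspore.nn as nn

net = nn.Dense(16, 4)
opt = nn.ProximalAdagrad(
    params=net.trainable_params(),
    learning_rate=0.001,  # documented default fixed learning rate
    loss_scale=1.0,       # should be not less than 1.0
    weight_decay=0.0,     # corrected keyword; value in [0.0, 1.0]
)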

