
delete annotation of decay filter in optimizers

tags/v0.6.0-beta
wangnan39@huawei.com committed 5 years ago
commit 2b182633e9
2 changed files with 0 additions and 4 deletions
  1. +0 -2  mindspore/nn/optim/adam.py
  2. +0 -2  mindspore/nn/optim/lamb.py

+0 -2  mindspore/nn/optim/adam.py

@@ -398,8 +398,6 @@ class AdamWeightDecay(Optimizer):
         eps (float): Term added to the denominator to improve numerical stability. Default: 1e-6.
             Should be greater than 0.
         weight_decay (float): Weight decay (L2 penalty). It should be in range [0.0, 1.0]. Default: 0.0.
-        decay_filter (Function): A function to determine whether to apply weight decay on parameters. Default:
-            lambda x: 'LayerNorm' not in x.name and 'bias' not in x.name.
 
     Inputs:
         - **gradients** (tuple[Tensor]) - The gradients of `params`, the shape is the same as `params`.


+0 -2  mindspore/nn/optim/lamb.py

@@ -228,8 +228,6 @@ class Lamb(Optimizer):
         eps (float): Term added to the denominator to improve numerical stability. Default: 1e-6.
             Should be greater than 0.
         weight_decay (float): Weight decay (L2 penalty). Default: 0.0. Should be in range [0.0, 1.0].
-        decay_filter (Function): A function to determine whether to apply weight decay on parameters. Default:
-            lambda x: 'LayerNorm' not in x.name and 'bias' not in x.name.
 
     Inputs:
         - **gradients** (tuple[Tensor]) - The gradients of `params`, the shape is the same as `params`.

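The `decay_filter` documented in the removed lines can be illustrated with a plain-Python sketch (standalone, not using MindSpore; the `Param` class below is a hypothetical stand-in for a framework parameter with a `.name` attribute): the default predicate excludes LayerNorm parameters and biases from the L2 weight-decay penalty, which is the usual convention for transformer training.

```python
# Standalone sketch of the default decay_filter semantics.
# Param is a hypothetical stand-in for a framework parameter object.
class Param:
    def __init__(self, name):
        self.name = name

# Default from the docstring: apply weight decay to everything
# except LayerNorm parameters and biases.
decay_filter = lambda x: 'LayerNorm' not in x.name and 'bias' not in x.name

params = [Param('dense.weight'), Param('dense.bias'),
          Param('LayerNorm.gamma')]

# Partition parameters by whether decay applies.
decayed = [p.name for p in params if decay_filter(p)]
# Only 'dense.weight' receives the L2 penalty.
```

In practice such a filter lets the optimizer build two parameter groups, applying `weight_decay` only to the group that passes the predicate.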
