
Fix weight decay bug for `SGD` optimizer.

Weight decay is already accounted for and calculated inside the `SGD` operation, so
there is no need to apply weight decay again in the `SGD` optimizer.
tags/v0.3.0-alpha
seatea committed 6 years ago · commit 5a119d107f
1 changed file with 0 additions and 1 deletion
+0 -1  mindspore/nn/optim/sgd.py

@@ -136,7 +136,6 @@ class SGD(Optimizer):
         params = self.parameters
         accum = self.accum
         stat = self.stat
-        gradients = self.decay_weight(gradients)
         gradients = self.scale_grad(gradients)
         lr = self.get_lr()
         if self.is_group_lr:
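To see why the removed line was a bug: if the optimizer pre-decays the gradients with `decay_weight` and the `SGD` op then folds weight decay in again, the decay term is applied twice per step. The sketch below is plain Python (not MindSpore); `sgd_update` is a hypothetical stand-in for the `SGD` op's per-parameter update, shown only to illustrate the double application.

```python
# Minimal sketch of the double-weight-decay bug, assuming the SGD op
# computes: w <- w - lr * (grad + weight_decay * w).
# `sgd_update` is a hypothetical stand-in, not the MindSpore kernel.

def sgd_update(w, grad, lr, weight_decay):
    # The SGD op already folds weight decay into the gradient.
    return w - lr * (grad + weight_decay * w)

w, grad, lr, wd = 1.0, 0.5, 0.1, 0.01

# Buggy path: optimizer also pre-decays the gradient (the removed line).
pre_decayed = grad + wd * w          # optimizer-side decay_weight
buggy = sgd_update(w, pre_decayed, lr, wd)

# Fixed path: pass the raw gradient; the op applies decay exactly once.
fixed = sgd_update(w, grad, lr, wd)

print(buggy, fixed)  # the buggy update shrinks the weight more
```

With these numbers, the buggy path subtracts `lr * wd * w` twice (0.948 vs. 0.949), i.e. an effective weight decay of `2 * wd`, which is why the redundant `decay_weight` call had to go.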

