
!5940 Fix a bug in the handling of loss_scale_manager and the optimizer.

Merge pull request !5940 from liuyang/loss_scale_opt
tags/v1.0.0
mindspore-ci-bot committed 5 years ago
parent commit d7e2a31623
1 changed file with 3 additions and 1 deletion

mindspore/train/model.py  (+3, -1)

@@ -80,7 +80,7 @@ class Model:
             O2 is recommended on GPU, O3 is recommended on Ascend.

         loss_scale_manager (Union[None, LossScaleManager]): If it is None, the loss would not be scaled. Otherwise,
-            scale the loss by LossScaleManager. It is a key argument.
+            scale the loss by LossScaleManager and optimizer can not be None. It is a key argument.
             e.g. Use `loss_scale_manager=None` to set the value.
         keep_batchnorm_fp32 (bool): Keep Batchnorm running in `float32`. If it is set to true, the level setting before
             will be overwritten. Default: True.
@@ -148,6 +148,8 @@ class Model:
     def _build_train_network(self):
         """Build train network"""
         network = self._network
+        if self._loss_scale_manager_set and not self._optimizer:
+            raise ValueError("Optimizer can not be None when set loss_scale_manager.")
         if self._optimizer:
             if self._loss_scale_manager_set:
                 network = amp.build_train_network(network,
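
For reference, a minimal usage sketch of the behavior this commit enforces; it is not part of the change itself. It assumes the public Model and FixedLossScaleManager APIs of roughly this MindSpore release; the import paths and the exact point at which the error surfaces (Model construction versus building the train network before the first train() call) may differ between versions.

# Hypothetical sketch: setting a loss_scale_manager without an optimizer
# should now fail fast with the ValueError added in this commit.
import mindspore.nn as nn
from mindspore.train import Model
from mindspore.train.loss_scale_manager import FixedLossScaleManager

net = nn.Dense(10, 1)   # any trainable network
loss = nn.MSELoss()
scale_manager = FixedLossScaleManager(loss_scale=1024, drop_overflow_update=False)

try:
    # optimizer is deliberately left as None while loss_scale_manager is set;
    # the check lives in _build_train_network(), which runs either here or
    # when training starts, depending on the version.
    model = Model(net, loss_fn=loss, optimizer=None, loss_scale_manager=scale_manager)
except ValueError as err:
    print(err)  # "Optimizer can not be None when set loss_scale_manager."

Previously this configuration was silently accepted and the loss scale was never applied; the explicit check turns the misconfiguration into an immediate, self-explanatory error.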

