mindspore.nn.Optimizer
======================

.. py:class:: mindspore.nn.Optimizer(learning_rate, parameters, weight_decay=0.0, loss_scale=1.0)

    Base class for the optimizers that update parameters. Never use this class directly; instantiate one of its subclasses instead.

    Parameter grouping is supported. When parameters are grouped, each group of parameters can be configured with its own learning rate (`lr`), weight decay (`weight_decay`) and gradient centralization (`grad_centralization`) strategy.

    .. note::
        When parameters are not grouped, the `weight_decay` configured in the optimizer is applied to network parameters whose names do not contain "beta" or "gamma". When parameters are grouped, the weight decay strategy can be adjusted per group: each group of network parameters may configure its own `weight_decay`, and a group that does not configure it uses the `weight_decay` configured in the optimizer.

    **Parameters**

    - **learning_rate** (Union[float, int, Tensor, Iterable, LearningRateSchedule]) -

      - **float** - A fixed learning rate. Must be equal to or greater than zero.
      - **int** - A fixed learning rate. Must be equal to or greater than zero. The integer is converted to float.
      - **Tensor** - A scalar or a one-dimensional vector. A scalar is a fixed learning rate. A one-dimensional vector is a dynamic learning rate: at step i, the i-th value in the vector is taken as the learning rate.
      - **Iterable** - A dynamic learning rate. At step i, the i-th value of the iterable is taken as the learning rate.
      - **LearningRateSchedule** - A dynamic learning rate. During training, the optimizer calls the instance of `LearningRateSchedule` with the current step as input to compute the current learning rate.

    - **parameters** (Union[list[Parameter], list[dict]]) - A list composed of `Parameter`, or a list composed of dict. When a list element is a dict, the keys of the dict may be "params", "lr", "weight_decay", "grad_centralization" and "order_params" (see the grouped-parameter sketch under **Examples** below):

      - **params** - Required. The weights of the current group. The value must be a list of `Parameter`.
      - **lr** - Optional. If "lr" is in the keys, its value is used as the learning rate for the group. If not, the `learning_rate` configured in the optimizer is used.
      - **weight_decay** - Optional. If "weight_decay" is in the keys, its value is used as the weight decay for the group. If not, the `weight_decay` configured in the optimizer is used.
      - **grad_centralization** - Optional. If "grad_centralization" is in the keys, its value is used; the value must be of type bool. If not, `grad_centralization` is treated as False. This configuration only takes effect on convolution layers.
      - **order_params** - Optional. The order of its value is the expected update order of the parameters. When the parameter grouping feature is used, this entry is usually configured with the full `parameters` to keep their order and improve performance. If "order_params" is in the keys, the other keys configured in the same group are ignored. The parameters appearing in "order_params" must also be contained in the `params` of one of the groups.

    - **weight_decay** (Union[float, int]) - The weight decay value. Must be equal to or greater than 0. An integer `weight_decay` is converted to float. Default: 0.0.
    - **loss_scale** (float) - The gradient scaling coefficient. Must be greater than 0. An integer `loss_scale` is converted to float. The default value is normally used; this value only needs to be the same as the `loss_scale` of `FixedLossScaleManager` when `FixedLossScaleManager` is used for training and its `drop_overflow_update` attribute is set to False. For more details, refer to :class:`mindspore.FixedLossScaleManager`. Default: 1.0.

    **Raises**

    - **TypeError** - `learning_rate` is not an int, a float, a Tensor, an Iterable or a LearningRateSchedule.
    - **TypeError** - An element of `parameters` is neither a Parameter nor a dict.
    - **TypeError** - `loss_scale` is not a float.
    - **TypeError** - `weight_decay` is neither a float nor an int.
    - **ValueError** - `loss_scale` is less than or equal to 0.
    - **ValueError** - `weight_decay` is less than 0.
    - **ValueError** - `learning_rate` is a Tensor whose dimension is greater than 1.

    **Supported Platforms**

    ``Ascend`` ``GPU`` ``CPU``

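
    **Examples**

    A sketch of the grouped-parameter interface, shown through the concrete subclass :class:`mindspore.nn.Momentum`. The two-layer network `Net` is defined here only so that the grouping has parameters to act on; it is not part of this API.

    >>> import mindspore.nn as nn
    >>>
    >>> class Net(nn.Cell):
    ...     # Minimal illustrative network: one convolution layer, one dense layer.
    ...     def __init__(self):
    ...         super(Net, self).__init__()
    ...         self.conv = nn.Conv2d(3, 8, 3)
    ...         self.dense = nn.Dense(8, 2)
    ...     def construct(self, x):
    ...         return self.dense(self.conv(x).mean((2, 3)))
    ...
    >>> net = Net()
    >>> # Convolution weights keep the optimizer-level learning rate (0.1) and apply
    >>> # gradient centralization; the remaining weights use a smaller learning rate.
    >>> conv_params = list(filter(lambda x: 'conv' in x.name, net.trainable_params()))
    >>> no_conv_params = list(filter(lambda x: 'conv' not in x.name, net.trainable_params()))
    >>> group_params = [{'params': conv_params, 'grad_centralization': True},
    ...                 {'params': no_conv_params, 'lr': 0.01},
    ...                 {'order_params': net.trainable_params()}]
    >>> optim = nn.Momentum(group_params, learning_rate=0.1, momentum=0.9, weight_decay=0.0)
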
    .. py:method:: broadcast_params(optim_result)

        Broadcast parameters in the order of the parameter groups.

        **Parameters**

        - **optim_result** (bool) - The result of the parameter update. This input is used to guarantee that the broadcast is executed only after the parameter update has completed.

        **Returns**

        bool, the status flag.

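
        For orientation, a sketch of where this call sits in a custom optimizer's `construct` under data-parallel training; `_update` is a hypothetical stand-in for the subclass's own update step:

        >>> # Inside a subclass of nn.Optimizer (sketch, not a complete optimizer):
        >>> # def construct(self, gradients):
        >>> #     success = self._update(gradients)          # hypothetical update step
        >>> #     success = self.broadcast_params(success)   # broadcast only after updating
        >>> #     return success
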
    .. py:method:: decay_weight(gradients)

        Apply weight decay.

        A way to reduce overfitting of deep-learning neural network models. When inheriting from :class:`mindspore.nn.Optimizer` to customize an optimizer, this interface can be called to apply weight decay.

        **Parameters**

        - **gradients** (tuple[Tensor]) - The gradients of the network parameters, with the same shape as the parameters.

        **Returns**

        tuple[Tensor], the gradients after weight decay.

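
        A minimal sketch of a custom optimizer using this hook together with the sibling hooks documented below. The bare-bones SGD class, the use of the functional `ops.assign` and the `self.parameters` attribute are assumptions of this sketch, not prescriptions of the API:

        >>> import mindspore.ops as ops
        >>> from mindspore import nn
        >>>
        >>> class MySGD(nn.Optimizer):
        ...     # Illustrative SGD: decay, centralize and unscale the gradients through
        ...     # the base-class hooks, then take a plain gradient step per parameter.
        ...     def construct(self, gradients):
        ...         gradients = self.decay_weight(gradients)
        ...         gradients = self.gradients_centralization(gradients)
        ...         gradients = self.scale_grad(gradients)
        ...         lr = self.get_lr()
        ...         for i in range(len(self.parameters)):
        ...             ops.assign(self.parameters[i], self.parameters[i] - lr * gradients[i])
        ...         return True
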
    .. py:method:: get_lr()

        The optimizer calls this interface to get the learning rate of the current step. When inheriting from :class:`mindspore.nn.Optimizer` to customize an optimizer, this interface can be called before the parameter update to obtain the current learning rate.

        **Returns**

        float, the learning rate of the current step.

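
        In the `MySGD` sketch under `decay_weight`, `get_lr` supplies this per-step value. With a dynamic learning rate the returned scalar changes every step; a minimal configuration sketch, where the schedule values are arbitrary and `net` is the illustrative network from the class-level example:

        >>> # Step i of training uses lr_schedule[i]; get_lr() returns that entry.
        >>> lr_schedule = [0.1, 0.1, 0.05, 0.05, 0.01]
        >>> optim = nn.Momentum(net.trainable_params(), learning_rate=lr_schedule, momentum=0.9)
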
    .. py:method:: get_lr_parameter(param)

        Used to get the learning rate of the specified parameter when the parameter grouping feature is in use and different groups are configured with different learning rates.

        **Parameters**

        - **param** (Union[Parameter, list[Parameter]]) - A `Parameter` or a list of `Parameter`.

        **Returns**

        Parameter, a single `Parameter` or a list of `Parameter`. If a dynamic learning rate is used, the `LearningRateSchedule` or list of `LearningRateSchedule` used to compute the learning rate is returned.

        **Examples**

        >>> from mindspore import nn
        >>> net = Net()  # the illustrative network defined in the class-level example
        >>> conv_params = list(filter(lambda x: 'conv' in x.name, net.trainable_params()))
        >>> no_conv_params = list(filter(lambda x: 'conv' not in x.name, net.trainable_params()))
        >>> group_params = [{'params': conv_params, 'lr': 0.05},
        ...                 {'params': no_conv_params, 'lr': 0.01}]
        >>> optim = nn.Momentum(group_params, learning_rate=0.1, momentum=0.9, weight_decay=0.0)
        >>> conv_lr = optim.get_lr_parameter(conv_params)
        >>> print(conv_lr[0].asnumpy())
        0.05

    .. py:method:: gradients_centralization(gradients)

        Apply gradient centralization.

        A way to optimize convolution-layer weights and speed up the training of deep-learning neural network models. When inheriting from :class:`mindspore.nn.Optimizer` to customize an optimizer, this interface can be called to centralize the gradients.

        **Parameters**

        - **gradients** (tuple[Tensor]) - The gradients of the network parameters, with the same shape as the parameters.

        **Returns**

        tuple[Tensor], the gradients after gradient centralization.

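
        For reference, gradient centralization subtracts from each weight gradient slice its own mean. For the gradient slice :math:`\nabla w_i` of output channel :math:`i` with :math:`M` elements, the standard formulation (a sketch, not copied from the MindSpore implementation) is:

        .. math::

            \nabla w_i \leftarrow \nabla w_i - \frac{1}{M} \sum_{j=1}^{M} \nabla w_{i,j}
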
    .. py:method:: scale_grad(gradients)

        Restore the original gradients in mixed-precision scenarios.

        When inheriting from :class:`mindspore.nn.Optimizer` to customize an optimizer, this interface can be called to restore the gradients.

        **Parameters**

        - **gradients** (tuple[Tensor]) - The gradients of the network parameters, with the same shape as the parameters.

        **Returns**

        tuple[Tensor], the restored gradients.

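
        Conceptually, the restoration divides out the `loss_scale` factor that loss scaling applied to the gradients during the backward pass. A plain-Python numeric sketch, with 1024.0 as an arbitrary example factor:

        >>> loss_scale = 1024.0
        >>> scaled_grad = 0.5 * loss_scale        # gradient as produced under loss scaling
        >>> restored = scaled_grad / loss_scale   # what scale_grad computes elementwise
        >>> print(restored)
        0.5
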
    .. py:method:: target
        :property:

        This property is used to specify whether the parameters are updated on the host (CPU) or on the device. The input type is str and can only be 'CPU', 'Ascend' or 'GPU'.

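
        A sketch of moving the update to the host for a subclass that supports it (here :class:`mindspore.nn.Adam` is assumed as an example, with `net` being the illustrative network from the class-level example):

        >>> optim = nn.Adam(net.trainable_params(), learning_rate=0.1)
        >>> optim.target = 'CPU'  # run the parameter update on the host
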
    .. py:method:: unique
        :property:

        This property indicates whether gradient deduplication is performed in the optimizer; it is usually used for sparse networks. Set it to True if the gradients are sparse. If the forward sparse network has already deduplicated the weights, i.e. the gradients are dense, set it to False. When it is not set, the default value is True.
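
        A sketch of disabling deduplication when the forward network already deduplicates the weights (here :class:`mindspore.nn.LazyAdam` is assumed only as an example of a sparse-capable subclass, with `net` from the class-level example):

        >>> optim = nn.LazyAdam(net.trainable_params(), learning_rate=0.1)
        >>> optim.unique = False  # gradients are already dense; skip deduplication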