
!5689 fix bugs of op DepthToSpace, Div, CumSum, LazyAdam and Adam in API

Merge pull request !5689 from lihongkang/lhk_master
tags/v1.0.0
mindspore-ci-bot committed 5 years ago
commit 7db9d74557
4 changed files with 7 additions and 5 deletions
1. +1 -1 mindspore/nn/optim/adam.py
2. +1 -1 mindspore/nn/optim/lazyadam.py
3. +3 -3 mindspore/ops/operations/array_ops.py
4. +2 -0 mindspore/ops/operations/math_ops.py

+1 -1 mindspore/nn/optim/adam.py

@@ -220,7 +220,7 @@ class Adam(Optimizer):
  >>> group_params = [{'params': conv_params, 'weight_decay': 0.01},
  >>>                 {'params': no_conv_params, 'lr': 0.01},
  >>>                 {'order_params': net.trainable_params()}]
- >>> optm = nn.Adam(group_params, learning_rate=0.1, weight_decay=0.0)
+ >>> optim = nn.Adam(group_params, learning_rate=0.1, weight_decay=0.0)
  >>> # The conv_params's parameters will use default learning rate of 0.1 and weight decay of 0.01.
  >>> # The no_conv_params's parameters will use learning rate of 0.01 and default weight decay of 0.0.
  >>> # The final parameters order in which the optimizer will be followed is the value of 'order_params'.
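Note: the fixed example assumes `conv_params` and `no_conv_params` were already split out of the network's parameters. A minimal, self-contained sketch of that grouping pattern (the `Net` class and the filter predicates are placeholders, not part of this change):

import mindspore.nn as nn

net = Net()  # placeholder: any nn.Cell with trainable parameters
conv_params = list(filter(lambda p: 'conv' in p.name, net.trainable_params()))
no_conv_params = list(filter(lambda p: 'conv' not in p.name, net.trainable_params()))

# conv parameters keep the default lr (0.1) but use weight_decay=0.01;
# the remaining parameters use lr=0.01 and the default weight_decay (0.0);
# 'order_params' fixes the order in which parameters are updated.
group_params = [{'params': conv_params, 'weight_decay': 0.01},
                {'params': no_conv_params, 'lr': 0.01},
                {'order_params': net.trainable_params()}]
optim = nn.Adam(group_params, learning_rate=0.1, weight_decay=0.0)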


+1 -1 mindspore/nn/optim/lazyadam.py

@@ -168,7 +168,7 @@ class LazyAdam(Optimizer):
  >>> group_params = [{'params': conv_params, 'weight_decay': 0.01},
  >>>                 {'params': no_conv_params, 'lr': 0.01},
  >>>                 {'order_params': net.trainable_params()}]
- >>> opt = nn.LazyAdam(group_params, learning_rate=0.1, weight_decay=0.0)
+ >>> optim = nn.LazyAdam(group_params, learning_rate=0.1, weight_decay=0.0)
  >>> # The conv_params's parameters will use default learning rate of 0.1 and weight decay of 0.01.
  >>> # The no_conv_params's parameters will use learning rate of 0.01 and default weight decay of 0.0.
  >>> # The final parameters order in which the optimizer will be followed is the value of 'order_params'.
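The same grouping pattern applies verbatim to `nn.LazyAdam`, which behaves like `Adam` but can apply lazy updates when gradients are sparse; a one-line sketch reusing the hypothetical `group_params` from the Adam sketch above:

optim = nn.LazyAdam(group_params, learning_rate=0.1, weight_decay=0.0)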


+3 -3 mindspore/ops/operations/array_ops.py

@@ -3013,12 +3013,12 @@ class DepthToSpace(PrimitiveWithInfer):

  This is the reverse operation of SpaceToDepth.

- The depth of output tensor is :math:`input\_depth / (block\_size * block\_size)`.

  The output tensor's `height` dimension is :math:`height * block\_size`.

  The output tensor's `width` dimension is :math:`width * block\_size`.

+ The depth of output tensor is :math:`input\_depth / (block\_size * block\_size)`.

  The input tensor's depth must be divisible by `block_size * block_size`.
  The data format is "NCHW".

@@ -3029,7 +3029,7 @@ class DepthToSpace(PrimitiveWithInfer):
  - **x** (Tensor) - The target tensor. It must be a 4-D tensor.

  Outputs:
-     Tensor, the same type as `x`.
+     Tensor, has the same shape and dtype as the 'x'.

Examples:
>>> x = Tensor(np.random.rand(1,12,1,1), mindspore.float32)
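To make the shape formulas above concrete, here is a small sketch (shapes chosen to match the docstring example; assumes the MindSpore 1.x operator API):

import numpy as np
import mindspore
from mindspore import Tensor
from mindspore.ops import operations as P

x = Tensor(np.random.rand(1, 12, 1, 1), mindspore.float32)  # NCHW layout
depth_to_space = P.DepthToSpace(2)  # block_size = 2
output = depth_to_space(x)
# depth: 12 / (2 * 2) = 3, height: 1 * 2 = 2, width: 1 * 2 = 2
print(output.shape)  # (1, 3, 2, 2)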


+2 -0 mindspore/ops/operations/math_ops.py

@@ -741,6 +741,7 @@ class CumSum(PrimitiveWithInfer):
  Inputs:
      - **input** (Tensor) - The input tensor to accumulate.
      - **axis** (int) - The axis to accumulate the tensor's value. Only constant value is allowed.
+         Must be in the range [-rank(input), rank(input)).

  Outputs:
      Tensor, the shape of the output tensor is consistent with the input tensor's.
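A short sketch of the axis constraint added above (input values are just illustrative; for a rank-2 input, valid axes are -2, -1, 0, and 1):

import numpy as np
import mindspore
from mindspore import Tensor
from mindspore.ops import operations as P

x = Tensor(np.array([[1, 2, 3], [4, 5, 6]]), mindspore.float32)
cumsum = P.CumSum()
# rank(x) == 2, so axis must lie in [-2, 2).
print(cumsum(x, 0))  # [[1. 2. 3.] [5. 7. 9.]]
print(cumsum(x, 1))  # [[1. 3. 6.] [4. 9. 15.]]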
@@ -1764,6 +1765,7 @@ class Div(_MathBinaryOp):
  >>> input_y = Tensor(np.array([3.0, 2.0, 3.0]), mindspore.float32)
  >>> div = P.Div()
  >>> div(input_x, input_y)
+ [-1.3, 2.5, 2.0]
"""

def infer_value(self, x, y):
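The hunk omits the `input_x` line, so the following sketch picks a hypothetical `input_x` that reproduces the documented result of the element-wise division x / y:

import numpy as np
import mindspore
from mindspore import Tensor
from mindspore.ops import operations as P

# Hypothetical input_x, chosen so that x / y matches the output shown above.
input_x = Tensor(np.array([-4.0, 5.0, 6.0]), mindspore.float32)
input_y = Tensor(np.array([3.0, 2.0, 3.0]), mindspore.float32)
div = P.Div()
print(div(input_x, input_y))  # approximately [-1.33, 2.5, 2.0]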

