Merge pull request !6169 from Simson/push-to-opensource (tags/v1.0.0)
@@ -310,7 +310,7 @@ class ReduceMean(GraphKernel):
 Args:
 keep_dims (bool): If True, keep these reduced dimensions and the length is 1.
-If False, don't keep these dimensions. Default : False.
+If False, don't keep these dimensions. Default: False.
 Inputs:
 - **input_x** (Tensor[Number]) - The input tensor.
@@ -318,13 +318,13 @@ class ReduceMean(GraphKernel):
 Only constant value is allowed.
 Outputs:
-Tensor, has the same dtype as the 'input_x'.
+Tensor, has the same dtype as the `input_x`.
-- If axis is (), and keep_dims is false,
+- If axis is (), and keep_dims is False,
 the output is a 0-D tensor representing the sum of all elements in the input tensor.
-- If axis is int, set as 2, and keep_dims is false,
+- If axis is int, set as 2, and keep_dims is False,
 the shape of output is :math:`(x_1, x_3, ..., x_R)`.
-- If axis is tuple(int), set as (2, 3), and keep_dims is false,
+- If axis is tuple(int), set as (2, 3), and keep_dims is False,
 the shape of output is :math:`(x_1, x_4, ..., x_R)`.
 Examples:
@@ -394,7 +394,7 @@ class SoftmaxCrossEntropyWithLogits(GraphKernel):
 - **labels** (Tensor) - Ground truth labels, with shape :math:`(N, C)`.
 Outputs:
-Tuple of 2 Tensor, the loss shape is `(N,)`, and the dlogits with the same shape as `logits`.
+Tuple of 2 Tensors, the loss shape is `(N,)`, and the dlogits with the same shape as `logits`.
 Examples:
 >>> logits = Tensor([[2, 4, 1, 4, 5], [2, 1, 2, 4, 3]], mindspore.float32)
@@ -610,7 +610,7 @@ class LayerNormBetaGammaBackprop(GraphKernel):
 - **input_gamma**(Tensor) - The fourth input of the forward function of LayerNorm.
 Outputs:
-Tuple of 2 Tensor, the backprop outputs.
+Tuple of 2 Tensors, the backprop outputs.
 - **pd_beta**(Tensor) - The first item of return value of this operator, will be used as
 the second item of the LayerNorm's backprop function.
@@ -1137,12 +1137,12 @@ class LambNextMV(GraphKernel):
 - **inputsx3** (Tensor) - The thirteenth input tensor to be computed.
 Outputs:
-Tuple of 2 Tensor.
+Tuple of 2 Tensors.
 - **add3** (Tensor) - the shape is the same as the one after broadcasting, and the data type is
-the one with high precision or high digits among the inputs.
+the one with higher precision or higher digits among the inputs.
 - **realdiv4** (Tensor) - the shape is the same as the one after broadcasting, and the data type is
-the one with high precision or high digits among the inputs.
+the one with higher precision or higher digits among the inputs.
 Examples:
 >>> lamb_next_mv = LambNextMV()
@@ -46,13 +46,13 @@ class ReduceLogSumExp(Cell):
 Only constant value is allowed.
 Outputs:
-Tensor, has the same dtype as the 'input_x'.
+Tensor, has the same dtype as the `input_x`.
-- If axis is (), and keep_dims is false,
+- If axis is (), and keep_dims is False,
 the output is a 0-D tensor representing the sum of all elements in the input tensor.
-- If axis is int, set as 2, and keep_dims is false,
+- If axis is int, set as 2, and keep_dims is False,
 the shape of output is :math:`(x_1, x_3, ..., x_R)`.
-- If axis is tuple(int), set as (2, 3), and keep_dims is false,
+- If axis is tuple(int), set as (2, 3), and keep_dims is False,
 the shape of output is :math:`(x_1, x_4, ..., x_R)`.
 Examples:
@@ -214,7 +214,7 @@ class LGamma(Cell):
 - **input_x** (Tensor[Number]) - The input tensor. Only float16, float32 are supported.
 Outputs:
-Tensor, has the same shape and dtype as the 'input_x'.
+Tensor, has the same shape and dtype as the `input_x`.
 Examples:
 >>> input_x = Tensor(np.array(2, 3, 4).astype(np.float32))
@@ -181,8 +181,8 @@ class SameTypeShape(PrimitiveWithInfer):
 Checks whether data type and shape of two tensors are the same.
 Raises:
-TypeError: If data type not the same.
-ValueError: If shape of two tensors not the same.
+TypeError: If the data types of two tensors are not the same.
+ValueError: If the shapes of two tensors are not the same.
 Inputs:
 - **input_x** (Tensor) - The shape of tensor is :math:`(x_1, x_2, ..., x_R)`.
@@ -362,7 +362,7 @@ class Reshape(PrimitiveWithInfer):
 Reshapes input tensor with the same values based on a given shape tuple.
 Raises:
-ValueError: Given a shape tuple, if it has more than one -1; or if the product
+ValueError: Given a shape tuple, if it contains more than one -1; or if the product
 of its elements is less than or equal to 0 or cannot be divided by the product
 of the input tensor shape; or if it does not match the input's array size.
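For illustration, a minimal sketch of the -1 rule described in this error (assuming the `P` alias for `mindspore.ops.operations` used by the examples elsewhere in this diff; the tensor values are arbitrary):

    import numpy as np
    import mindspore
    from mindspore import Tensor
    from mindspore.ops import operations as P

    reshape = P.Reshape()
    input_x = Tensor(np.arange(6).reshape(2, 3), mindspore.float32)
    # A single -1 is inferred from the remaining dimensions: (2, 3) -> (3, 2).
    output = reshape(input_x, (3, -1))
    # A shape tuple with more than one -1, e.g. (-1, -1), would raise the ValueError above.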
@@ -671,10 +671,10 @@ class SparseGatherV2(GatherV2):
 class Padding(PrimitiveWithInfer):
 """
-Extend the last dimension of input tensor from 1 to pad_dim_size, fill with 0.
+Extend the last dimension of input tensor from 1 to pad_dim_size, by filling with 0.
 Args:
-pad_dim_size (int): The extend value of last dimension of x, must be positive.
+pad_dim_size (int): The size to which the last dimension of x is extended, which must be positive.
 Inputs:
 - **x** (Tensor) - The shape of tensor is :math:`(x_1, x_2, ..., x_R)`. The rank of x should be at least 2.
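A small usage sketch of the documented behavior (illustrative values; `P` is assumed to be `mindspore.ops.operations`):

    import numpy as np
    import mindspore
    from mindspore import Tensor
    from mindspore.ops import operations as P

    x = Tensor(np.array([[8], [10]]), mindspore.float32)  # last dimension has size 1
    padding = P.Padding(pad_dim_size=4)
    # Expected result: [[8, 0, 0, 0], [10, 0, 0, 0]] -- the last dimension grows from 1 to 4.
    output = padding(x)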
@@ -921,7 +921,7 @@ class Fill(PrimitiveWithInfer):
 class OnesLike(PrimitiveWithInfer):
 """
-Creates a new tensor. All elements' value are 1.
+Creates a new tensor. The values of all elements are 1.
 Returns a tensor of ones with the same shape and type as the input.
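A minimal sketch of this behavior (same `P` alias assumption; the input values are arbitrary):

    import numpy as np
    import mindspore
    from mindspore import Tensor
    from mindspore.ops import operations as P

    x = Tensor(np.array([[0, 1], [2, 1]]), mindspore.int32)
    oneslike = P.OnesLike()
    # Expected: [[1, 1], [1, 1]], with the same shape and dtype as x.
    output = oneslike(x)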
@@ -1025,7 +1025,7 @@ class TupleToArray(PrimitiveWithInfer):
 class ScalarToArray(PrimitiveWithInfer):
 """
-Converts scalar to `Tensor`.
+Converts a scalar to a `Tensor`.
 Inputs:
 - **input_x** (Union[int, float]) - The input is a scalar. Only constant value is allowed.
@@ -1054,7 +1054,7 @@ class ScalarToArray(PrimitiveWithInfer):
 class ScalarToTensor(PrimitiveWithInfer):
 """
-Converts scalar to `Tensor`, and convert data type to specified type.
+Converts a scalar to a `Tensor`, and converts the data type to the specified type.
 Inputs:
 - **input_x** (Union[int, float]) - The input is a scalar. Only constant value is allowed.
@@ -1653,11 +1653,11 @@ class ParallelConcat(PrimitiveWithInfer):
 The input tensors are all required to have size 1 in the first dimension.
 Inputs:
-- **values** (tuple, list) - Tuple or list of input tensors. The data type and shape of these
-tensors must be same.
+- **values** (tuple, list) - A tuple or a list of input tensors. The data type and shape of these
+tensors must be the same.
 Outputs:
-Tensor, data type same as `values`.
+Tensor, data type is the same as `values`.
 Examples:
 >>> data1 = Tensor(np.array([[0, 1]]).astype(np.int32))
@@ -1726,7 +1726,7 @@ class Pack(PrimitiveWithInfer):
 If :math:`0 \le axis`, the shape of the output tensor is :math:`(x_1, x_2, ..., x_{axis}, N, x_{axis+1}, ..., x_R)`.
 Args:
-axis (int): Dimension along which to pack. Default: 0.
+axis (int): Dimension to pack. Default: 0.
 Negative values wrap around. The range is [-(R+1), R+1).
 Inputs:
@@ -1736,8 +1736,8 @@ class Pack(PrimitiveWithInfer):
 Tensor. A packed Tensor with the same type as `input_x`.
 Raises:
-TypeError: If the data types of elements in input_x are not the same.
-ValueError: If length of input_x is not greater than 1;
+TypeError: If the data types of elements in `input_x` are not the same.
+ValueError: If the length of `input_x` is not greater than 1;
 or if axis is out of the range [-(R+1), R+1);
 or if the shapes of elements in input_x are not the same.
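The shape rule above can be sketched as follows (illustrative tensors; `P` assumed to be `mindspore.ops.operations`):

    import numpy as np
    import mindspore
    from mindspore import Tensor
    from mindspore.ops import operations as P

    data1 = Tensor(np.array([0, 1]), mindspore.float32)
    data2 = Tensor(np.array([2, 3]), mindspore.float32)
    pack = P.Pack(axis=0)
    # Two tensors of shape (2,) packed along a new axis 0 give shape (2, 2): [[0, 1], [2, 3]].
    output = pack((data1, data2))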
@@ -2267,7 +2267,7 @@ class Diag(PrimitiveWithInfer):
 - **input_x** (Tensor) - The input tensor. The input shape should be less than 5d.
 Outputs:
-Tensor, has the same dtype as the 'input_x'.
+Tensor, has the same dtype as the `input_x`.
 Examples:
 >>> input_x = Tensor([1, 2, 3, 4])
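A short sketch of the expected result for this example input (under the same `P` alias assumption as the other examples):

    import mindspore
    from mindspore import Tensor
    from mindspore.ops import operations as P

    input_x = Tensor([1, 2, 3, 4], mindspore.float32)
    diag = P.Diag()
    # Expected: a 4x4 tensor with [1, 2, 3, 4] on the diagonal and zeros elsewhere.
    output = diag(input_x)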
@@ -2437,7 +2437,7 @@ class ResizeNearestNeighbor(PrimitiveWithInfer):
 r"""
 Resize the input tensor by using nearest neighbor algorithm.
-Resize input tensor to given size by using nearest neighbor algorithm. The nearest
+Resize the input tensor to a given size by using the nearest neighbor algorithm. The nearest
 neighbor algorithm selects the value of the nearest point and does not consider the
 values of neighboring points at all, yielding a piecewise-constant interpolant.
@@ -2665,8 +2665,8 @@ class ScatterMax(_ScatterOp):
 Inputs:
 - **input_x** (Parameter) - The target parameter.
 - **indices** (Tensor) - The index to do max operation whose data type should be mindspore.int32.
-- **updates** (Tensor) - The tensor doing the maximum operation with `input_x`,
-the data type is same as `input_x`, the shape is `indices_shape + x_shape[1:]`.
+- **updates** (Tensor) - The tensor that performs the maximum operation with `input_x`,
+the data type is the same as `input_x`, the shape is `indices_shape + x_shape[1:]`.
 Outputs:
 Parameter, the updated `input_x`.
@@ -2739,8 +2739,8 @@ class ScatterAdd(_ScatterOp):
 Inputs:
 - **input_x** (Parameter) - The target parameter.
 - **indices** (Tensor) - The index to do add operation whose data type should be mindspore.int32.
-- **updates** (Tensor) - The tensor doing the add operation with `input_x`,
-the data type is same as `input_x`, the shape is `indices_shape + x_shape[1:]`.
+- **updates** (Tensor) - The tensor that performs the add operation with `input_x`,
+the data type is the same as `input_x`, the shape is `indices_shape + x_shape[1:]`.
 Outputs:
 Parameter, the updated `input_x`.
@@ -2841,8 +2841,8 @@ class ScatterDiv(_ScatterOp):
 Inputs:
 - **input_x** (Parameter) - The target parameter.
 - **indices** (Tensor) - The index to do div operation whose data type should be mindspore.int32.
-- **updates** (Tensor) - The tensor doing the div operation with `input_x`,
-the data type is same as `input_x`, the shape is `indices_shape + x_shape[1:]`.
+- **updates** (Tensor) - The tensor that performs the div operation with `input_x`,
+the data type is the same as `input_x`, the shape is `indices_shape + x_shape[1:]`.
 Outputs:
 Parameter, the updated `input_x`.
@@ -3510,12 +3510,12 @@ class ReverseSequence(PrimitiveWithInfer):
 Reverses variable length slices.
 Args:
-seq_dim (int): The dimension along which reversal is performed. Required.
-batch_dim (int): The input is sliced along this dimmension. Default: 0.
+seq_dim (int): The dimension where reversal is performed. Required.
+batch_dim (int): The input is sliced in this dimension. Default: 0.
 Inputs:
-- **x** (Tensor) - The input to reverse, support all number types including bool.
-- **seq_lengths** (Tensor) - Must be 1-D vector with types: int32, int64.
+- **x** (Tensor) - The input to reverse, supporting all number types including bool.
+- **seq_lengths** (Tensor) - Must be a 1-D vector with int32 or int64 types.
 Outputs:
 Reversed tensor with the same shape and data type as input.
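A sketch of how seq_dim and batch_dim interact, under the same assumptions as the other examples here (values are illustrative):

    import numpy as np
    import mindspore
    from mindspore import Tensor
    from mindspore.ops import operations as P

    x = Tensor(np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]), mindspore.float32)
    seq_lengths = Tensor(np.array([1, 2, 3]), mindspore.int32)
    reverse_sequence = P.ReverseSequence(seq_dim=1, batch_dim=0)
    # Row i has its first seq_lengths[i] elements reversed along seq_dim:
    # expected [[1, 2, 3], [5, 4, 6], [9, 8, 7]].
    output = reverse_sequence(x, seq_lengths)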
@@ -26,7 +26,7 @@ class ReduceOp:
 """
 Operation options for reduce tensors.
-There are four kinds of operation options, "SUM","MAX","MIN","PROD".
+There are four kinds of operation options, "SUM", "MAX", "MIN", and "PROD".
 - SUM: Take the sum.
 - MAX: Take the maximum.
@@ -232,12 +232,12 @@ class ReduceScatter(PrimitiveWithInfer):
 Args:
 op (str): Specifies an operation used for element-wise reductions,
-like sum, max, avg. Default: ReduceOp.SUM.
+like SUM, MAX, AVG. Default: ReduceOp.SUM.
 group (str): The communication group to work on. Default: "hccl_world_group".
 Raises:
 TypeError: If any of operation and group is not a string.
-ValueError: If the first dimension of input can not be divided by rank size.
+ValueError: If the first dimension of the input cannot be divided by the rank size.
 Examples:
 >>> from mindspore import Tensor
@@ -145,7 +145,7 @@ class Merge(PrimitiveWithInfer):
 One and only one of the inputs should be selected as the output
 Inputs:
-- **inputs** (Union(Tuple, List)) - The data to be merged. All tuple elements should have same data type.
+- **inputs** (Union(Tuple, List)) - The data to be merged. All tuple elements should have the same data type.
 Outputs:
 tuple. Output is tuple(`data`, `output_index`). The `data` has the same shape of `inputs` element.
@@ -42,7 +42,7 @@ SUMMARY_RETURN_VALUE = {'dtype': mstype.int32, 'shape': [1], 'value': None}
 class ScalarSummary(PrimitiveWithInfer):
 """
-Output scalar to protocol buffer through scalar summary operator.
+Output a scalar to a protocol buffer through a scalar summary operator.
 Inputs:
 - **name** (str) - The name of the input variable, it should not be an empty string.
@@ -28,7 +28,7 @@ class ScalarCast(PrimitiveWithInfer):
 Inputs:
 - **input_x** (scalar) - The input scalar. Only constant value is allowed.
-- **input_y** (mindspore.dtype) - The type should cast to be. Only constant value is allowed.
+- **input_y** (mindspore.dtype) - The type to cast to. Only constant value is allowed.
 Outputs:
 Scalar. The type is the same as the python type corresponding to `input_y`.
@@ -132,7 +132,7 @@ class TensorAdd(_MathBinaryOp):
 Outputs:
 Tensor, the shape is the same as the one after broadcasting,
-and the data type is the one with high precision or high digits among the two inputs.
+and the data type is the one with higher precision or higher digits among the two inputs.
 Examples:
 >>> add = P.TensorAdd()
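A minimal sketch of the precision rule stated above (assuming the `P` alias; the int32 operand is implicitly promoted to the higher-precision float32):

    import numpy as np
    import mindspore
    from mindspore import Tensor
    from mindspore.ops import operations as P

    add = P.TensorAdd()
    input_x = Tensor(np.array([1, 2, 3]), mindspore.int32)
    input_y = Tensor(np.array([4.0, 5.0, 6.0]), mindspore.float32)
    # Implicit conversion applies, so the result dtype is float32: [5.0, 7.0, 9.0].
    output = add(input_x, input_y)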
@@ -321,7 +321,7 @@ class ReduceMean(_Reduce):
 Args:
 keep_dims (bool): If True, keep these reduced dimensions and the length is 1.
-If False, don't keep these dimensions. Default : False.
+If False, don't keep these dimensions. Default: False.
 Inputs:
 - **input_x** (Tensor[Number]) - The input tensor.
@@ -329,13 +329,13 @@ class ReduceMean(_Reduce):
 Only constant value is allowed.
 Outputs:
-Tensor, has the same dtype as the 'input_x'.
+Tensor, has the same dtype as the `input_x`.
-- If axis is (), and keep_dims is false,
+- If axis is (), and keep_dims is False,
 the output is a 0-D tensor representing the mean of all elements in the input tensor.
-- If axis is int, set as 2, and keep_dims is false,
+- If axis is int, set as 2, and keep_dims is False,
 the shape of output is :math:`(x_1, x_3, ..., x_R)`.
-- If axis is tuple(int), set as (2, 3), and keep_dims is false,
+- If axis is tuple(int), set as (2, 3), and keep_dims is False,
 the shape of output is :math:`(x_1, x_4, ..., x_R)`.
 Examples:
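The axis/keep_dims shape rules listed above, sketched on a hypothetical (3, 4, 5) input (only the shapes matter here; `P` is the usual `mindspore.ops.operations` alias):

    import numpy as np
    import mindspore
    from mindspore import Tensor
    from mindspore.ops import operations as P

    input_x = Tensor(np.random.randn(3, 4, 5), mindspore.float32)
    reduce_mean = P.ReduceMean(keep_dims=False)
    out_all = reduce_mean(input_x, ())        # axis=(): 0-D tensor, mean of all elements
    out_int = reduce_mean(input_x, 1)         # axis=1: shape (3, 5)
    out_tuple = reduce_mean(input_x, (1, 2))  # axis=(1, 2): shape (3,)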
@@ -353,7 +353,7 @@ class ReduceSum(_Reduce):
 Args:
 keep_dims (bool): If True, keep these reduced dimensions and the length is 1.
-If False, don't keep these dimensions. Default : False.
+If False, don't keep these dimensions. Default: False.
 Inputs:
 - **input_x** (Tensor[Number]) - The input tensor.
@@ -361,13 +361,13 @@ class ReduceSum(_Reduce):
 Only constant value is allowed.
 Outputs:
-Tensor, has the same dtype as the 'input_x'.
+Tensor, has the same dtype as the `input_x`.
-- If axis is (), and keep_dims is false,
+- If axis is (), and keep_dims is False,
 the output is a 0-D tensor representing the sum of all elements in the input tensor.
-- If axis is int, set as 2, and keep_dims is false,
+- If axis is int, set as 2, and keep_dims is False,
 the shape of output is :math:`(x_1, x_3, ..., x_R)`.
-- If axis is tuple(int), set as (2, 3), and keep_dims is false,
+- If axis is tuple(int), set as (2, 3), and keep_dims is False,
 the shape of output is :math:`(x_1, x_4, ..., x_R)`.
 Examples:
@@ -402,11 +402,11 @@ class ReduceAll(_Reduce):
 Outputs:
 Tensor, the dtype is bool.
-- If axis is (), and keep_dims is false,
-the output is a 0-D tensor representing the "logical and" of of all elements in the input tensor.
-- If axis is int, set as 2, and keep_dims is false,
-and keep_dims is false, the shape of output is :math:`(x_1, x_3, ..., x_R)`.
-- If axis is tuple(int), set as (2, 3), and keep_dims is false,
+- If axis is (), and keep_dims is False,
+the output is a 0-D tensor representing the "logical and" of all elements in the input tensor.
+- If axis is int, set as 2, and keep_dims is False,
+the shape of output is :math:`(x_1, x_3, ..., x_R)`.
+- If axis is tuple(int), set as (2, 3), and keep_dims is False,
 the shape of output is :math:`(x_1, x_4, ..., x_R)`.
 Examples:
@@ -421,7 +421,7 @@ class ReduceAll(_Reduce):
 class ReduceAny(_Reduce):
 """
-Reduce a dimension of a tensor by the "logical or" of all elements in the dimension.
+Reduce a dimension of a tensor by the "logical OR" of all elements in the dimension.
 The dtype of the tensor to be reduced is bool.
@@ -438,11 +438,11 @@ class ReduceAny(_Reduce):
 Outputs:
 Tensor, the dtype is bool.
-- If axis is (), and keep_dims is false,
-the output is a 0-D tensor representing the "logical or" of of all elements in the input tensor.
-- If axis is int, set as 2, and keep_dims is false,
-and keep_dims is false, the shape of output is :math:`(x_1, x_3, ..., x_R)`.
-- If axis is tuple(int), set as (2, 3), and keep_dims is false,
+- If axis is (), and keep_dims is False,
+the output is a 0-D tensor representing the "logical or" of all elements in the input tensor.
+- If axis is int, set as 2, and keep_dims is False,
+the shape of output is :math:`(x_1, x_3, ..., x_R)`.
+- If axis is tuple(int), set as (2, 3), and keep_dims is False,
 the shape of output is :math:`(x_1, x_4, ..., x_R)`.
 Examples:
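A brief sketch of the boolean reduction described above (illustrative input; same `P` alias assumption as elsewhere):

    import numpy as np
    import mindspore
    from mindspore import Tensor
    from mindspore.ops import operations as P

    input_x = Tensor(np.array([[True, False], [True, True]]))
    reduce_any = P.ReduceAny(keep_dims=False)
    # "logical or" along axis 1: expected [True, True], with shape (2,).
    output = reduce_any(input_x, 1)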
@@ -472,13 +472,13 @@ class ReduceMax(_Reduce):
 Only constant value is allowed.
 Outputs:
-Tensor, has the same dtype as the 'input_x'.
+Tensor, has the same dtype as the `input_x`.
-- If axis is (), and keep_dims is false,
+- If axis is (), and keep_dims is False,
 the output is a 0-D tensor representing the maximum of all elements in the input tensor.
-- If axis is int, set as 2, and keep_dims is false,
+- If axis is int, set as 2, and keep_dims is False,
 the shape of output is :math:`(x_1, x_3, ..., x_R)`.
-- If axis is tuple(int), set as (2, 3), and keep_dims is false,
+- If axis is tuple(int), set as (2, 3), and keep_dims is False,
 the shape of output is :math:`(x_1, x_4, ..., x_R)`.
 Examples:
@@ -511,13 +511,13 @@ class ReduceMin(_Reduce):
 Only constant value is allowed.
 Outputs:
-Tensor, has the same dtype as the 'input_x'.
+Tensor, has the same dtype as the `input_x`.
-- If axis is (), and keep_dims is false,
+- If axis is (), and keep_dims is False,
 the output is a 0-D tensor representing the minimum of all elements in the input tensor.
-- If axis is int, set as 2, and keep_dims is false,
+- If axis is int, set as 2, and keep_dims is False,
 the shape of output is :math:`(x_1, x_3, ..., x_R)`.
-- If axis is tuple(int), set as (2, 3), and keep_dims is false,
+- If axis is tuple(int), set as (2, 3), and keep_dims is False,
 the shape of output is :math:`(x_1, x_4, ..., x_R)`.
 Examples:
@@ -544,13 +544,13 @@ class ReduceProd(_Reduce):
 Only constant value is allowed.
 Outputs:
-Tensor, has the same dtype as the 'input_x'.
+Tensor, has the same dtype as the `input_x`.
-- If axis is (), and keep_dims is false,
+- If axis is (), and keep_dims is False,
 the output is a 0-D tensor representing the product of all elements in the input tensor.
-- If axis is int, set as 2, and keep_dims is false,
+- If axis is int, set as 2, and keep_dims is False,
 the shape of output is :math:`(x_1, x_3, ..., x_R)`.
-- If axis is tuple(int), set as (2, 3), and keep_dims is false,
+- If axis is tuple(int), set as (2, 3), and keep_dims is False,
 the shape of output is :math:`(x_1, x_4, ..., x_R)`.
 Examples:
@@ -574,7 +574,7 @@ class CumProd(PrimitiveWithInfer):
 Only constant value is allowed.
 Outputs:
-Tensor, has the same shape and dtype as the 'input_x'.
+Tensor, has the same shape and dtype as the `input_x`.
 Examples:
 >>> input_x = Tensor(np.array([a, b, c]).astype(np.float32))
@@ -1086,7 +1086,7 @@ class Sub(_MathBinaryOp):
 Outputs:
 Tensor, the shape is the same as the one after broadcasting,
-and the data type is the one with high precision or high digits among the two inputs.
+and the data type is the one with higher precision or higher digits among the two inputs.
 Examples:
 >>> input_x = Tensor(np.array([1, 2, 3]), mindspore.int32)
@@ -1125,7 +1125,7 @@ class Mul(_MathBinaryOp):
 Outputs:
 Tensor, the shape is the same as the one after broadcasting,
-and the data type is the one with high precision or high digits among the two inputs.
+and the data type is the one with higher precision or higher digits among the two inputs.
 Examples:
 >>> input_x = Tensor(np.array([1.0, 2.0, 3.0]), mindspore.float32)
@@ -1165,7 +1165,7 @@ class SquaredDifference(_MathBinaryOp):
 Outputs:
 Tensor, the shape is the same as the one after broadcasting,
-and the data type is the one with high precision or high digits among the two inputs.
+and the data type is the one with higher precision or higher digits among the two inputs.
 Examples:
 >>> input_x = Tensor(np.array([1.0, 2.0, 3.0]), mindspore.float32)
@@ -1351,7 +1351,7 @@ class Pow(_MathBinaryOp):
 Outputs:
 Tensor, the shape is the same as the one after broadcasting,
-and the data type is the one with high precision or high digits among the two inputs.
+and the data type is the one with higher precision or higher digits among the two inputs.
 Examples:
 >>> input_x = Tensor(np.array([1.0, 2.0, 4.0]), mindspore.float32)
@@ -1620,7 +1620,7 @@ class Erfc(PrimitiveWithInfer):
 class Minimum(_MathBinaryOp):
 """
-Computes the element-wise minimum of input tensors.
+Computes the minimum of input tensors element-wise.
 Inputs of `input_x` and `input_y` comply with the implicit type conversion rules to make the data types consistent.
 The inputs must be two tensors or one tensor and one scalar.
@@ -1637,7 +1637,7 @@ class Minimum(_MathBinaryOp):
 Outputs:
 Tensor, the shape is the same as the one after broadcasting,
-and the data type is the one with high precision or high digits among the two inputs.
+and the data type is the one with higher precision or higher digits among the two inputs.
 Examples:
 >>> input_x = Tensor(np.array([1.0, 5.0, 3.0]), mindspore.float32)
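A small sketch of the element-wise behavior, continuing this example's first input (the second operand is made up):

    import numpy as np
    import mindspore
    from mindspore import Tensor
    from mindspore.ops import operations as P

    minimum = P.Minimum()
    input_x = Tensor(np.array([1.0, 5.0, 3.0]), mindspore.float32)
    input_y = Tensor(np.array([4.0, 2.0, 6.0]), mindspore.float32)
    # Element-wise minimum: expected [1.0, 2.0, 3.0].
    output = minimum(input_x, input_y)
    # One tensor and one scalar is also accepted, e.g. minimum(input_x, 2.0), with broadcasting.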
@@ -1659,7 +1659,7 @@ class Minimum(_MathBinaryOp):
 class Maximum(_MathBinaryOp):
 """
-Computes the element-wise maximum of input tensors.
+Computes the maximum of input tensors element-wise.
 Inputs of `input_x` and `input_y` comply with the implicit type conversion rules to make the data types consistent.
 The inputs must be two tensors or one tensor and one scalar.
@@ -1676,7 +1676,7 @@ class Maximum(_MathBinaryOp):
 Outputs:
 Tensor, the shape is the same as the one after broadcasting,
-and the data type is the one with high precision or high digits among the two inputs.
+and the data type is the one with higher precision or higher digits among the two inputs.
 Examples:
 >>> input_x = Tensor(np.array([1.0, 5.0, 3.0]), mindspore.float32)
@@ -1715,7 +1715,7 @@ class RealDiv(_MathBinaryOp):
 Outputs:
 Tensor, the shape is the same as the one after broadcasting,
-and the data type is the one with high precision or high digits among the two inputs.
+and the data type is the one with higher precision or higher digits among the two inputs.
 Examples:
 >>> input_x = Tensor(np.array([1.0, 2.0, 3.0]), mindspore.float32)
@@ -1755,7 +1755,7 @@ class Div(_MathBinaryOp):
 Outputs:
 Tensor, the shape is the same as the one after broadcasting,
-and the data type is the one with high precision or high digits among the two inputs.
+and the data type is the one with higher precision or higher digits among the two inputs.
 Raises:
 ValueError: When `input_x` and `input_y` do not have the same dtype.
@@ -1796,7 +1796,7 @@ class DivNoNan(_MathBinaryOp):
 Outputs:
 Tensor, the shape is the same as the one after broadcasting,
-and the data type is the one with high precision or high digits among the two inputs.
+and the data type is the one with higher precision or higher digits among the two inputs.
 Raises:
 ValueError: When `input_x` and `input_y` do not have the same dtype.
@@ -1839,7 +1839,7 @@ class FloorDiv(_MathBinaryOp):
 Outputs:
 Tensor, the shape is the same as the one after broadcasting,
-and the data type is the one with high precision or high digits among the two inputs.
+and the data type is the one with higher precision or higher digits among the two inputs.
 Examples:
 >>> input_x = Tensor(np.array([2, 4, -1]), mindspore.int32)
@@ -1870,7 +1870,7 @@ class TruncateDiv(_MathBinaryOp):
 Outputs:
 Tensor, the shape is the same as the one after broadcasting,
-and the data type is the one with high precision or high digits among the two inputs.
+and the data type is the one with higher precision or higher digits among the two inputs.
 Examples:
 >>> input_x = Tensor(np.array([2, 4, -1]), mindspore.int32)
@@ -1900,7 +1900,7 @@ class TruncateMod(_MathBinaryOp):
 Outputs:
 Tensor, the shape is the same as the one after broadcasting,
-and the data type is the one with high precision or high digits among the two inputs.
+and the data type is the one with higher precision or higher digits among the two inputs.
 Examples:
 >>> input_x = Tensor(np.array([2, 4, -1]), mindspore.int32)
@@ -1928,7 +1928,7 @@ class Mod(_MathBinaryOp):
 Outputs:
 Tensor, the shape is the same as the one after broadcasting,
-and the data type is the one with high precision or high digits among the two inputs.
+and the data type is the one with higher precision or higher digits among the two inputs.
 Raises:
 ValueError: When `input_x` and `input_y` are not the same dtype.
@@ -1996,7 +1996,7 @@ class FloorMod(_MathBinaryOp):
 Outputs:
 Tensor, the shape is the same as the one after broadcasting,
-and the data type is the one with high precision or high digits among the two inputs.
+and the data type is the one with higher precision or higher digits among the two inputs.
 Examples:
 >>> input_x = Tensor(np.array([2, 4, -1]), mindspore.int32)
@@ -2055,7 +2055,7 @@ class Xdivy(_MathBinaryOp):
 Outputs:
 Tensor, the shape is the same as the one after broadcasting,
-and the data type is the one with high precision or high digits among the two inputs.
+and the data type is the one with higher precision or higher digits among the two inputs.
 Examples:
 >>> input_x = Tensor(np.array([2, 4, -1]), mindspore.float32)
@@ -2090,7 +2090,7 @@ class Xlogy(_MathBinaryOp):
 Outputs:
 Tensor, the shape is the same as the one after broadcasting,
-and the data type is the one with high precision or high digits among the two inputs.
+and the data type is the one with higher precision or higher digits among the two inputs.
 Examples:
 >>> input_x = Tensor(np.array([-5, 0, 4]), mindspore.float32)
@@ -2776,7 +2776,7 @@ class NPUGetFloatStatus(PrimitiveWithInfer):
 Updates the flag which is the output tensor of `NPUAllocFloatStatus` with latest overflow status.
 The flag is a tensor whose shape is `(8,)` and data type is `mindspore.dtype.float32`.
-If the sum of the flag equals 0, there is no overflow happened. If the sum of the flag is bigger than 0, there
+If the sum of the flag equals to 0, there is no overflow happened. If the sum of the flag is bigger than 0, there
 is overflow happened.
 Inputs:
@@ -2989,8 +2989,8 @@ class NMSWithMask(PrimitiveWithInfer):
 Outputs:
 tuple[Tensor], tuple of three tensors, they are selected_boxes, selected_idx and selected_mask.
-- **selected_boxes** (Tensor) - The shape of tensor is :math:`(N, 5)`. Bounding boxes
-list after non-max suppression calculation.
+- **selected_boxes** (Tensor) - The shape of tensor is :math:`(N, 5)`. The list of bounding boxes
+after non-max suppression calculation.
 - **selected_idx** (Tensor) - The shape of tensor is :math:`(N,)`. The indexes list of
 valid input bounding boxes.
 - **selected_mask** (Tensor) - The shape of tensor is :math:`(N,)`. A mask list of
@@ -298,7 +298,7 @@ class ReLU6(PrimitiveWithInfer):
 It returns :math:`\min(\max(0,x), 6)` element-wise.
 Inputs:
-- **input_x** (Tensor) - The input tensor. With float16 or float32 data type.
+- **input_x** (Tensor) - The input tensor, with float16 or float32 data type.
 Outputs:
 Tensor, with the same type and shape as the `input_x`.
@@ -1238,7 +1238,7 @@ class MaxPool(_Pool):
 class MaxPoolWithArgmax(_Pool):
 r"""
-Performs max pooling on the input Tensor and return both max values and indices.
+Perform max pooling on the input Tensor and return both max values and indices.
 Typically the input is of shape :math:`(N_{in}, C_{in}, H_{in}, W_{in})`, MaxPool outputs
 regional maximum in the :math:`(H_{in}, W_{in})`-dimension. Given kernel size
@@ -1272,7 +1272,7 @@ class MaxPoolWithArgmax(_Pool):
 Data type should be float16 or float32.
 Outputs:
-Tuple of 2 Tensor, the maxpool result and where max values from.
+Tuple of 2 Tensors, representing the maxpool result and where the max values are generated.
 - **output** (Tensor) - Maxpooling result, with shape :math:`(N, C_{out}, H_{out}, W_{out})`.
 - **mask** (Tensor) - Max values' index represented by the mask.
@@ -1557,7 +1557,7 @@ class TopK(PrimitiveWithInfer):
 - **k** (int) - Number of top elements to be computed along the last dimension, constant input is needed.
 Outputs:
-Tuple of 2 Tensor, the values and the indices.
+Tuple of 2 Tensors, the values and the indices.
 - **values** (Tensor) - The `k` largest elements along each last dimensional slice.
 - **indices** (Tensor) - The indices of values within the last dimension of input.
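A compact sketch of unpacking the two outputs (values and indices), under the usual assumptions of these examples:

    import mindspore
    from mindspore import Tensor
    from mindspore.ops import operations as P

    input_x = Tensor([1.0, 2.0, 3.0, 4.0, 5.0], mindspore.float32)
    topk = P.TopK(sorted=True)
    # Expected values: [5.0, 4.0, 3.0]; expected indices: [4, 3, 2].
    values, indices = topk(input_x, 3)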
@@ -1609,7 +1609,7 @@ class SoftmaxCrossEntropyWithLogits(PrimitiveWithInfer):
 - **labels** (Tensor) - Ground truth labels, with shape :math:`(N, C)`, has the same data type with `logits`.
 Outputs:
-Tuple of 2 Tensor, the loss shape is `(N,)`, and the dlogits with the same shape as `logits`.
+Tuple of 2 Tensors, the loss shape is `(N,)`, and the dlogits with the same shape as `logits`.
 Examples:
 >>> logits = Tensor([[2, 4, 1, 4, 5], [2, 1, 2, 4, 3]], mindspore.float32)
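Continuing the example input shown above, a sketch of unpacking the 2-tuple (the labels here are hypothetical one-hot rows):

    import mindspore
    from mindspore import Tensor
    from mindspore.ops import operations as P

    logits = Tensor([[2, 4, 1, 4, 5], [2, 1, 2, 4, 3]], mindspore.float32)
    labels = Tensor([[0, 0, 0, 0, 1], [0, 0, 0, 1, 0]], mindspore.float32)
    softmax_cross = P.SoftmaxCrossEntropyWithLogits()
    # loss has shape (N,) = (2,); dlogits has the same shape as logits.
    loss, dlogits = softmax_cross(logits, labels)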
@@ -1961,7 +1961,7 @@ class SGD(PrimitiveWithInfer):
 - **accum** (Tensor) - Accum(velocity) to be updated. With float16 or float32 data type.
 - **momentum** (Tensor) - Momentum, a scalar tensor with float16 or float32 data type.
 e.g. Tensor(0.1, mindspore.float32).
-- **stat** (Tensor) - States to be updated with the same shape as gradient. With float16 or float32 data type.
+- **stat** (Tensor) - States to be updated with the same shape as gradient, with float16 or float32 data type.
 Outputs:
 Tensor, parameters to be updated.
@@ -2397,9 +2397,9 @@ class ResizeBilinear(PrimitiveWithInfer):
 can be represented by different data types, but the data types of output images are always float32.
 Args:
-size (tuple[int]): A tuple of 2 int elements `(new_height, new_width)`, the new size for the images.
-align_corners (bool): If it's true, rescale input by `(new_height - 1) / (height - 1)`,
-which exactly aligns the 4 corners of images and resized images. If it's false,
+size (tuple[int]): A tuple of 2 int elements `(new_height, new_width)`, the new size of the images.
+align_corners (bool): If True, rescale input by `(new_height - 1) / (height - 1)`,
+which exactly aligns the 4 corners of images and resized images. If False,
 rescale by `new_height / height`. Default: False.
 Inputs:
@@ -2456,7 +2456,7 @@ class OneHot(PrimitiveWithInfer):
 Has the same data type with as `on_value`.
 Outputs:
-Tensor, one_hot tensor. Tensor of shape :math:`(X_0, \ldots, X_{axis}, \text{depth} ,X_{axis+1}, \ldots, X_n)`.
+Tensor, one-hot tensor. Tensor of shape :math:`(X_0, \ldots, X_{axis}, \text{depth} ,X_{axis+1}, \ldots, X_n)`.
 Examples:
 >>> indices = Tensor(np.array([0, 1, 2]), mindspore.int32)
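A sketch of the full call for the example indices above (the on_value/off_value tensors are chosen for illustration):

    import numpy as np
    import mindspore
    from mindspore import Tensor
    from mindspore.ops import operations as P

    indices = Tensor(np.array([0, 1, 2]), mindspore.int32)
    depth = 3
    on_value = Tensor(1.0, mindspore.float32)
    off_value = Tensor(0.0, mindspore.float32)
    onehot = P.OneHot()
    # Expected: the 3x3 identity matrix, since each index marks one "hot" position.
    output = onehot(indices, depth, on_value, off_value)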
@@ -2590,13 +2590,13 @@ class PReLU(PrimitiveWithInfer):
 Inputs:
 - **input_x** (Tensor) - Float tensor, representing the output of the preview layer.
 With data type of float16 or float32.
-- **weight** (Tensor) - Float Tensor, w > 0, there is only two shapes are legitimate,
-1 or the number of channels at input. With data type of float16 or float32.
+- **weight** (Tensor) - Float Tensor, w > 0, only two shapes are legitimate:
+1 or the number of channels of the input. With data type of float16 or float32.
 Outputs:
 Tensor, with the same type as `input_x`.
-Detailed information, please refer to `nn.PReLU`.
+For detailed information, please refer to `nn.PReLU`.
 Examples:
 >>> import mindspore
@@ -2783,7 +2783,7 @@ class Pad(PrimitiveWithInfer):
 paddings (tuple): The shape of parameter `paddings` is (N, 2). N is the rank of input data. All elements of
 paddings are int type. For the input in `D` th dimension, paddings[D, 0] indicates how many sizes to be
 extended ahead of the input tensor in the `D` th dimension, and paddings[D, 1] indicates how many sizes to
-be extended behind of the input tensor in the `D` th dimension.
+be extended behind the input tensor in the `D` th dimension.
 Inputs:
 - **input_x** (Tensor) - The input tensor.
@@ -2833,21 +2833,21 @@ class MirrorPad(PrimitiveWithInfer):
 Pads the input tensor according to the paddings and mode.
 Args:
-mode (str): Specifies padding mode. The optional values are "REFLECT", "SYMMETRIC".
+mode (str): Specifies the padding mode. The optional values are "REFLECT" and "SYMMETRIC".
 Default: "REFLECT".
 Inputs:
 - **input_x** (Tensor) - The input tensor.
 - **paddings** (Tensor) - The paddings tensor. The value of `paddings` is a matrix(list),
 and its shape is (N, 2). N is the rank of input data. All elements of paddings
-are int type. For the input in `D` th dimension, paddings[D, 0] indicates how many sizes to be
+are int type. For the input in the `D` th dimension, paddings[D, 0] indicates how many sizes to be
 extended ahead of the input tensor in the `D` th dimension, and paddings[D, 1] indicates how many sizes to
-be extended behind of the input tensor in the `D` th dimension.
+be extended behind the input tensor in the `D` th dimension.
 Outputs:
 Tensor, the tensor after padding.
-- If `mode` is "REFLECT", it uses a way of symmetrical copying throught the axis of symmetry to fill in.
+- If `mode` is "REFLECT", it uses a way of symmetrical copying through the axis of symmetry to fill in.
 If the `input_x` is [[1,2,3],[4,5,6],[7,8,9]] and `paddings` is [[1,1],[2,2]], then the
 Outputs is [[6,5,4,5,6,5,4],[3,2,1,2,3,2,1],[6,5,4,5,6,5,4],[9,8,7,8,9,8,7],[6,5,4,5,6,5,4]].
 - If `mode` is "SYMMETRIC", the filling method is similar to the "REFLECT". It is also copied
| @@ -3929,7 +3929,7 @@ class ApplyAdagrad(PrimitiveWithInfer): | |||||
| With float32 or float16 data type. | With float32 or float16 data type. | ||||
| Outputs: | Outputs: | ||||
| Tuple of 2 Tensor, the updated parameters. | |||||
| Tuple of 2 Tensors, the updated parameters. | |||||
| - **var** (Tensor) - The same shape and data type as `var`. | - **var** (Tensor) - The same shape and data type as `var`. | ||||
| - **accum** (Tensor) - The same shape and data type as `accum`. | - **accum** (Tensor) - The same shape and data type as `accum`. | ||||
| @@ -4012,7 +4012,7 @@ class ApplyAdagradV2(PrimitiveWithInfer): | |||||
| With float16 or float32 data type. | With float16 or float32 data type. | ||||
| Outputs: | Outputs: | ||||
| Tuple of 2 Tensor, the updated parameters. | |||||
| Tuple of 2 Tensors, the updated parameters. | |||||
| - **var** (Tensor) - The same shape and data type as `var`. | - **var** (Tensor) - The same shape and data type as `var`. | ||||
| - **accum** (Tensor) - The same shape and data type as `m`. | - **accum** (Tensor) - The same shape and data type as `m`. | ||||
| @@ -4096,7 +4096,7 @@ class SparseApplyAdagrad(PrimitiveWithInfer): | |||||
| The shape of `indices` must be the same as `grad` in first dimension, the type must be int32. | The shape of `indices` must be the same as `grad` in first dimension, the type must be int32. | ||||
| Outputs: | Outputs: | ||||
| Tuple of 2 Tensor, the updated parameters. | |||||
| Tuple of 2 Tensors, the updated parameters. | |||||
| - **var** (Tensor) - The same shape and data type as `var`. | - **var** (Tensor) - The same shape and data type as `var`. | ||||
| - **accum** (Tensor) - The same shape and data type as `accum`. | - **accum** (Tensor) - The same shape and data type as `accum`. | ||||
| @@ -4183,7 +4183,7 @@ class SparseApplyAdagradV2(PrimitiveWithInfer): | |||||
| The shape of `indices` must be the same as `grad` in first dimension, the type must be int32. | The shape of `indices` must be the same as `grad` in first dimension, the type must be int32. | ||||
| Outputs: | Outputs: | ||||
| Tuple of 2 Tensor, the updated parameters. | |||||
| Tuple of 2 Tensors, the updated parameters. | |||||
| - **var** (Tensor) - The same shape and data type as `var`. | - **var** (Tensor) - The same shape and data type as `var`. | ||||
| - **accum** (Tensor) - The same shape and data type as `accum`. | - **accum** (Tensor) - The same shape and data type as `accum`. | ||||
| @@ -4273,7 +4273,7 @@ class ApplyProximalAdagrad(PrimitiveWithInfer): | |||||
| - **grad** (Tensor) - Gradient with the same shape and dtype as `var`. | - **grad** (Tensor) - Gradient with the same shape and dtype as `var`. | ||||
| Outputs: | Outputs: | ||||
| Tuple of 2 Tensor, the updated parameters. | |||||
| Tuple of 2 Tensors, the updated parameters. | |||||
| - **var** (Tensor) - The same shape and data type as `var`. | - **var** (Tensor) - The same shape and data type as `var`. | ||||
| - **accum** (Tensor) - The same shape and data type as `accum`. | - **accum** (Tensor) - The same shape and data type as `accum`. | ||||
| @@ -4377,7 +4377,7 @@ class SparseApplyProximalAdagrad(PrimitiveWithCheck): | |||||
| - **indices** (Tensor) - A vector of indices into the first dimension of `var` and `accum`. | - **indices** (Tensor) - A vector of indices into the first dimension of `var` and `accum`. | ||||
| Outputs: | Outputs: | ||||
| Tuple of 2 Tensor, the updated parameters. | |||||
| Tuple of 2 Tensors, the updated parameters. | |||||
| - **var** (Tensor) - The same shape and data type as `var`. | - **var** (Tensor) - The same shape and data type as `var`. | ||||
| - **accum** (Tensor) - The same shape and data type as `accum`. | - **accum** (Tensor) - The same shape and data type as `accum`. | ||||
| @@ -4468,7 +4468,7 @@ class ApplyAddSign(PrimitiveWithInfer): | |||||
| - **grad** (Tensor) - A tensor of the same type as `var`, for the gradient. | - **grad** (Tensor) - A tensor of the same type as `var`, for the gradient. | ||||
| Outputs: | Outputs: | ||||
| Tuple of 2 Tensor, the updated parameters. | |||||
| Tuple of 2 Tensors, the updated parameters. | |||||
| - **var** (Tensor) - The same shape and data type as `var`. | - **var** (Tensor) - The same shape and data type as `var`. | ||||
| - **m** (Tensor) - The same shape and data type as `m`. | - **m** (Tensor) - The same shape and data type as `m`. | ||||
| @@ -4576,7 +4576,7 @@ class ApplyPowerSign(PrimitiveWithInfer): | |||||
| - **grad** (Tensor) - A tensor of the same type as `var`, for the gradient. | - **grad** (Tensor) - A tensor of the same type as `var`, for the gradient. | ||||
| Outputs: | Outputs: | ||||
| Tuple of 2 Tensor, the updated parameters. | |||||
| Tuple of 2 Tensors, the updated parameters. | |||||
| - **var** (Tensor) - The same shape and data type as `var`. | - **var** (Tensor) - The same shape and data type as `var`. | ||||
| - **m** (Tensor) - The same shape and data type as `m`. | - **m** (Tensor) - The same shape and data type as `m`. | ||||
| @@ -5162,7 +5162,7 @@ class ConfusionMulGrad(PrimitiveWithInfer): | |||||
| Default:(), reduce all dimensions. Only constant value is allowed. | Default:(), reduce all dimensions. Only constant value is allowed. | ||||
| keep_dims (bool): | keep_dims (bool): | ||||
| - If true, keep these reduced dimensions and the length as 1. | - If true, keep these reduced dimensions and the length as 1. | ||||
| - If false, don't keep these dimensions. Default:False. | |||||
| - If false, don't keep these dimensions. Default: False. | |||||
| Inputs: | Inputs: | ||||
| - **input_0** (Tensor) - The input Tensor. | - **input_0** (Tensor) - The input Tensor. | ||||
| @@ -5173,11 +5173,11 @@ class ConfusionMulGrad(PrimitiveWithInfer): | |||||
| - **output_0** (Tensor) - The same shape as `input0`. | - **output_0** (Tensor) - The same shape as `input0`. | ||||
| - **output_1** (Tensor) | - **output_1** (Tensor) | ||||
| - If axis is (), and keep_dims is false, the output is a 0-D array representing | |||||
| - If axis is (), and keep_dims is False, the output is a 0-D array representing | |||||
| the sum of all elements in the input array. | the sum of all elements in the input array. | ||||
| - If axis is int, set as 2, and keep_dims is false, | |||||
| - If axis is int, set as 2, and keep_dims is False, | |||||
| the shape of output is :math:`(x_1,x_3,...,x_R)`. | the shape of output is :math:`(x_1,x_3,...,x_R)`. | ||||
| - If axis is tuple(int), set as (2,3), and keep_dims is false, | |||||
| - If axis is tuple(int), set as (2,3), and keep_dims is False, | |||||
| the shape of output is :math:`(x_1,x_4,...x_R)`. | the shape of output is :math:`(x_1,x_4,...x_R)`. | ||||
| Examples: | Examples: | ||||
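The axis/keep_dims shape rules listed above follow the usual reduction convention. A small NumPy sketch (not the ConfusionMulGrad operator itself, and with dimension-indexing details aside) shows the same behaviour:

```python
import numpy as np

x = np.zeros((2, 3, 4, 5, 6), np.float32)            # rank-5 input

# axis=() in the docstring means "reduce all dimensions"; NumPy's equivalent is axis=None
print(np.sum(x, axis=None, keepdims=False).shape)     # () -> 0-D result
# a single int axis: that dimension is dropped when keepdims is False
print(np.sum(x, axis=2, keepdims=False).shape)        # (2, 3, 5, 6)
# a tuple of ints: all listed dimensions are dropped
print(np.sum(x, axis=(2, 3), keepdims=False).shape)   # (2, 3, 6)
# keepdims=True keeps the reduced dimensions with length 1
print(np.sum(x, axis=(2, 3), keepdims=True).shape)    # (2, 3, 1, 1, 6)
```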
| @@ -490,7 +490,7 @@ class PopulationCount(PrimitiveWithInfer): | |||||
| - **input** (Tensor) - The data type should be int16 or uint16. | - **input** (Tensor) - The data type should be int16 or uint16. | ||||
| Outputs: | Outputs: | ||||
| Tensor, with shape same as the input. | |||||
| Tensor, with the same shape as the input. | |||||
| Examples: | Examples: | ||||
| >>> population_count = P.PopulationCount() | >>> population_count = P.PopulationCount() | ||||
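The docstring example above begins with the same constructor call; a hedged sketch of a complete invocation, with illustrative int16 values (the output dtype is not asserted here, only the per-element bit counts and the input-matching shape stated in the docstring):

```python
import mindspore
from mindspore import Tensor
from mindspore.ops import operations as P

population_count = P.PopulationCount()
# 0 -> 0b0000, 1 -> 0b0001, 3 -> 0b0011, 10 -> 0b1010
x = Tensor([0, 1, 3, 10], mindspore.int16)
out = population_count(x)   # same shape as x; per-element set-bit counts are 0, 1, 2, 2
```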
| @@ -185,11 +185,11 @@ class Poisson(PrimitiveWithInfer): | |||||
| Inputs: | Inputs: | ||||
| - **shape** (tuple) - The shape of random tensor to be generated. Only constant value is allowed. | - **shape** (tuple) - The shape of random tensor to be generated. Only constant value is allowed. | ||||
| - **mean** (Tensor) - μ parameter the distribution was constructed with. | |||||
| The parameter defines mean number of occurrences of the event. With float32 data type. | |||||
| - **mean** (Tensor) - μ, the parameter with which the distribution was constructed. | |||||
| The parameter defines the mean number of occurrences of the event, with float32 data type. | |||||
| Outputs: | Outputs: | ||||
| Tensor. The shape should be the broadcasted shape of Input "shape" and shape of mean. | |||||
| Tensor. Its shape should be the broadcasted shape of `shape` and the shape of `mean`. | |||||
| The dtype is int32. | The dtype is int32. | ||||
| Examples: | Examples: | ||||
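A hedged illustration of the `shape`/`mean` inputs described above; the seed constructor argument and the exact call form are assumptions for illustration.

```python
import mindspore
from mindspore import Tensor
from mindspore.ops import operations as P

poisson = P.Poisson(seed=5)                # assumed constructor: seed argument only
shape = (2, 3)                             # constant shape of the random tensor to generate
mean = Tensor(5.0, mindspore.float32)      # μ: mean number of occurrences, float32
out = poisson(shape, mean)                 # int32 tensor; shape broadcast from `shape` and mean's shape
```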
| @@ -325,9 +325,10 @@ class UniformReal(PrimitiveWithInfer): | |||||
| class RandomChoiceWithMask(PrimitiveWithInfer): | class RandomChoiceWithMask(PrimitiveWithInfer): | ||||
| """ | """ | ||||
| Generates a random samply as index tensor with a mask tensor from a given tensor. | |||||
| Generates a random sample as index tensor with a mask tensor from a given tensor. | |||||
| The input must be a tensor of rank >= 1. If its rank >= 2, the first dimension specify the number of sample. | |||||
| The input must be a tensor of rank not less than 1. If its rank is greater than or equal to 2, | |||||
| the first dimension specifies the number of samples. | |||||
| The index tensor and the mask tensor have the fixed shapes. The index tensor denotes the index of the nonzero | The index tensor and the mask tensor have the fixed shapes. The index tensor denotes the index of the nonzero | ||||
| sample, while the mask tensor denotes which elements in the index tensor are valid. | sample, while the mask tensor denotes which elements in the index tensor are valid. | ||||
| @@ -337,7 +338,8 @@ class RandomChoiceWithMask(PrimitiveWithInfer): | |||||
| seed2 (int): Random seed2. Default: 0. | seed2 (int): Random seed2. Default: 0. | ||||
| Inputs: | Inputs: | ||||
| - **input_x** (Tensor[bool]) - The input tensor. The input tensor rank should be >= 1 and <= 5. | |||||
| - **input_x** (Tensor[bool]) - The input tensor. | |||||
| The input tensor rank should be greater than or equal to 1 and less than or equal to 5. | |||||
| Outputs: | Outputs: | ||||
| Two tensors, the first one is the index tensor and the other one is the mask tensor. | Two tensors, the first one is the index tensor and the other one is the mask tensor. | ||||
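A minimal sketch of the two-tensor (index, mask) output described above; the `count` argument and the boolean input layout are assumptions for illustration.

```python
import numpy as np
import mindspore
from mindspore import Tensor
from mindspore.ops import operations as P

# Assumed constructor argument: `count` fixes the number of samples in the output.
rnd_choice_mask = P.RandomChoiceWithMask(count=4, seed=1)
input_x = Tensor(np.ones((240000, 4)).astype(np.bool_))   # bool tensor, rank 2
index, mask = rnd_choice_mask(input_x)
# `index` holds indices of sampled nonzero positions;
# `mask` marks which rows of `index` are valid.
```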
| @@ -374,8 +376,8 @@ class RandomCategorical(PrimitiveWithInfer): | |||||
| Generates random samples from a given categorical distribution tensor. | Generates random samples from a given categorical distribution tensor. | ||||
| Args: | Args: | ||||
| dtype (mindspore.dtype): The type of output. Its value should be one of [mindspore.int16, | |||||
| mindspore.int32, mindspore.int64]. Default: mindspore.int64. | |||||
| dtype (mindspore.dtype): The type of output. Its value should be one of mindspore.int16, | |||||
| mindspore.int32 and mindspore.int64. Default: mindspore.int64. | |||||
| Inputs: | Inputs: | ||||
| - **logits** (Tensor) - The input tensor. 2-D Tensor with shape [batch_size, num_classes]. | - **logits** (Tensor) - The input tensor. 2-D Tensor with shape [batch_size, num_classes]. | ||||
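A hedged sketch of drawing samples with the dtype argument discussed above; the `(logits, num_sample, seed)` call order is an assumption.

```python
import numpy as np
import mindspore
from mindspore import Tensor
from mindspore.ops import operations as P

random_categorical = P.RandomCategorical(mindspore.int64)       # output dtype: int16/int32/int64
logits = Tensor(np.random.random((10, 5)).astype(np.float32))   # [batch_size, num_classes]
samples = random_categorical(logits, 8, 0)                      # assumed inputs: logits, num_sample, seed
```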
| @@ -437,16 +439,17 @@ class Multinomial(PrimitiveWithInfer): | |||||
| The rows of input do not need to sum to one (in which case we use the values as weights), | The rows of input do not need to sum to one (in which case we use the values as weights), | ||||
| but must be non-negative, finite and have a non-zero sum. | but must be non-negative, finite and have a non-zero sum. | ||||
| Args: | Args: | ||||
| seed (int): Seed data is used as entropy source for Random number engines generating pseudo-random numbers. | |||||
| seed (int): Seed data is used as an entropy source for random number engines to generate pseudo-random numbers. | |||||
| Must be non-negative. Default: 0. | Must be non-negative. Default: 0. | ||||
| replacement(bool): Whether to draw with replacement or not. | replacement(bool): Whether to draw with replacement or not. | ||||
| Inputs: | Inputs: | ||||
| - **input** (Tensor[float32]) - the input tensor containing the cumsum of probabilities, must be 1 or 2 dims. | |||||
| - **input** (Tensor[float32]) - the input tensor containing the cumsum of probabilities, must be 1 or 2 | |||||
| dimensions. | |||||
| - **num_samples** (int32) - number of samples to draw. | - **num_samples** (int32) - number of samples to draw. | ||||
| Outputs: | Outputs: | ||||
| Tensor. have the same rows with input, each row has num_samples sampled indices. | |||||
| Tensor with the same number of rows as the input; each row has num_samples sampled indices. | |||||
| Examples: | Examples: | ||||
| >>> input = Tensor([0., 9., 4., 0.], mstype.float32) | >>> input = Tensor([0., 9., 4., 0.], mstype.float32) | ||||
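The Args and Inputs above are enough for a hedged usage sketch; the keyword names come from the Args list, but the `(input, num_samples)` call order is an assumption.

```python
import mindspore
from mindspore import Tensor
from mindspore.ops import operations as P

x = Tensor([0., 9., 4., 0.], mindspore.float32)          # non-negative weights, 1-D
multinomial = P.Multinomial(replacement=True, seed=10)   # assumed keyword names from the Args above
samples = multinomial(x, 2)                               # assumed call order: (input, num_samples)
```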
| @@ -149,7 +149,7 @@ def save_checkpoint(save_obj, ckpt_file_name, integrated_save=True, async_save=F | |||||
| save_obj (nn.Cell or list): The cell object or parameters list(each element is a dictionary, | save_obj (nn.Cell or list): The cell object or parameters list(each element is a dictionary, | ||||
| like {"name": param_name, "data": param_data}.) | like {"name": param_name, "data": param_data}.) | ||||
| ckpt_file_name (str): Checkpoint file name. If the file name already exists, it will be overwritten. | ckpt_file_name (str): Checkpoint file name. If the file name already exists, it will be overwritten. | ||||
| integrated_save (bool): Whether to integrated save in automatic model parallel scene. | |||||
| integrated_save (bool): Whether to apply integrated save in automatic model parallel scene. Default: True. | |||||
| async_save (bool): Whether asynchronous execution saves the checkpoint to a file. Default: False | async_save (bool): Whether asynchronous execution saves the checkpoint to a file. Default: False | ||||
| Raises: | Raises: | ||||
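The signature shown in this hunk's header (`save_obj, ckpt_file_name, integrated_save=True, async_save=False`) is enough for a minimal usage sketch; the network and file name below are illustrative stand-ins.

```python
import mindspore.nn as nn
from mindspore.train.serialization import save_checkpoint

net = nn.Dense(3, 4)   # any Cell; a stand-in for a trained network
# Overwrites "dense.ckpt" if it already exists; integrated_save / async_save keep their defaults.
save_checkpoint(net, "dense.ckpt")
```

Per the Args above, `save_obj` can also be a parameters list of `{"name": ..., "data": ...}` dictionaries instead of a Cell.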