
!13014 add raises of MatMul, Maximum, Minimum, etc.

From: @mind-lh
Reviewed-by: @liangchenghui,@wuxuejian
Signed-off-by: @liangchenghui
tags/v1.2.0-rc1
mindspore-ci-bot Gitee 4 years ago
parent
commit
b73974c02b
12 changed files with 620 additions and 41 deletions
  1. mindspore/nn/layer/activation.py (+1 -1)
  2. mindspore/ops/composite/array_ops.py (+5 -0)
  3. mindspore/ops/composite/base.py (+4 -0)
  4. mindspore/ops/composite/math_ops.py (+4 -0)
  5. mindspore/ops/operations/comm_ops.py (+10 -10)
  6. mindspore/ops/operations/control_ops.py (+7 -0)
  7. mindspore/ops/operations/debug_ops.py (+31 -0)
  8. mindspore/ops/operations/inner_ops.py (+15 -0)
  9. mindspore/ops/operations/math_ops.py (+123 -7)
  10. mindspore/ops/operations/nn_ops.py (+335 -20)
  11. mindspore/ops/operations/other_ops.py (+27 -3)
  12. mindspore/ops/operations/random_ops.py (+58 -0)

mindspore/nn/layer/activation.py (+1 -1)

@@ -69,7 +69,7 @@ class Softmax(Cell):
Tensor, which has the same type and shape as `x` with values in the range [0, 1].

Raises:
TypeError: If `axis` is neither an int not a tuple.
TypeError: If `axis` is neither an int nor a tuple.
TypeError: If dtype of `x` is neither float16 nor float32.
ValueError: If `axis` is a tuple whose length is less than 1.
ValueError: If `axis` is a tuple whose elements are not all in range [-len(x), len(x)).
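A minimal sketch of the contract documented above (not part of the diff; assumes the standard `mindspore.nn.Softmax` API of this release). The invalid-axis call is expected to raise the TypeError described in the corrected line:

    import numpy as np
    import mindspore
    from mindspore import Tensor, nn

    softmax = nn.Softmax(axis=-1)
    x = Tensor(np.array([1.0, 2.0, 3.0]), mindspore.float32)
    print(softmax(x))          # probabilities along the last axis, summing to 1

    try:
        nn.Softmax(axis=1.5)   # neither an int nor a tuple
    except TypeError as e:
        print(e)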


mindspore/ops/composite/array_ops.py (+5 -0)

@@ -135,6 +135,11 @@ def sequence_mask(lengths, maxlen=None):
Outputs:
One mask tensor of shape lengths.shape + (maxlen,).

Raises:
TypeError: If `lengths` is not a Tensor.
TypeError: If `maxlen` is not an int.
TypeError: If dtype of `lengths` is neither int32 nor int64.

Supported Platforms:
``GPU``



mindspore/ops/composite/base.py (+4 -0)

@@ -488,6 +488,10 @@ class HyperMap(HyperMap_):
Sequence or nested sequence, the sequence of output after applying the function.
e.g. `operation(args[0][i], args[1][i])`.

Raises:
TypeError: If `ops` is neither MultitypeFuncGraph nor None.
TypeError: If `args` is not a Tuple.

Supported Platforms:
``Ascend`` ``GPU`` ``CPU``



mindspore/ops/composite/math_ops.py (+4 -0)

@@ -229,6 +229,10 @@ def tensor_dot(x1, x2, axes):
Tensor, the shape of the output tensor is :math:`(N + M)`. Where :math:`N` and :math:`M` are the free axes not
contracted in both inputs

Raises:
TypeError: If `x1` or `x2` is not a Tensor.
TypeError: If `axes` is not one of the following: int, tuple, list.

Supported Platforms:
``Ascend`` ``GPU`` ``CPU``
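A hedged sketch of `tensor_dot` under the contract above (not part of the diff; it assumes the composite is re-exported through the public `mindspore.ops` namespace):

    import numpy as np
    import mindspore
    from mindspore import Tensor, ops

    x1 = Tensor(np.ones((2, 3)), mindspore.float32)
    x2 = Tensor(np.ones((3, 4)), mindspore.float32)
    out = ops.tensor_dot(x1, x2, 1)   # contracts the last axis of x1 with the first axis of x2
    print(out.shape)                  # (2, 4); `axes` may also be a tuple or list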



mindspore/ops/operations/comm_ops.py (+10 -10)

@@ -64,11 +64,6 @@ class AllReduce(PrimitiveWithInfer):
like sum, max, and min. Default: ReduceOp.SUM.
group (str): The communication group to work on. Default: "hccl_world_group".

Raises:
TypeError: If any of operation and group is not a string,
or fusion is not an integer, or the input's dtype is bool.
ValueError: If the operation is "prod".

Inputs:
- **input_x** (Tensor) - The shape of tensor is :math:`(x_1, x_2, ..., x_R)`.

@@ -76,6 +71,11 @@ class AllReduce(PrimitiveWithInfer):
Tensor, has the same shape of the input, i.e., :math:`(x_1, x_2, ..., x_R)`.
The contents depend on the specified operation.

Raises:
TypeError: If any of `op` and `group` is not a str,
or fusion is not an integer, or the input's dtype is bool.
ValueError: If the `op` is "prod".

Supported Platforms:
``Ascend`` ``GPU``

@@ -133,11 +133,6 @@ class AllGather(PrimitiveWithInfer):
Args:
group (str): The communication group to work on. Default: "hccl_world_group".

Raises:
TypeError: If group is not a string.
ValueError: If the local rank id of the calling process in the group
is larger than the group's rank size.

Inputs:
- **input_x** (Tensor) - The shape of tensor is :math:`(x_1, x_2, ..., x_R)`.

@@ -145,6 +140,11 @@ class AllGather(PrimitiveWithInfer):
Tensor. If the number of devices in the group is N,
then the shape of output is :math:`(N, x_1, x_2, ..., x_R)`.

Raises:
TypeError: If `group` is not a str.
ValueError: If the local rank id of the calling process in the group
is larger than the group's rank size.

Supported Platforms:
``Ascend`` ``GPU``



mindspore/ops/operations/control_ops.py (+7 -0)

@@ -101,6 +101,10 @@ class GeSwitch(PrimitiveWithInfer):
tuple. Output is tuple(false_output, true_output). The elements in the tuple have the same shape as the input data.
The false_output connects with the false_branch and the true_output connects with the true_branch.

Raises:
TypeError: If `data` is neither a Tensor nor a Number.
TypeError: If `pred` is not a Tensor.

Examples:
>>> class Net(nn.Cell):
... def __init__(self):
@@ -159,6 +163,9 @@ class Merge(PrimitiveWithInfer):
Outputs:
tuple. Output is tuple(`data`, `output_index`). The `data` has the same shape as the elements of `inputs`.

Raises:
TypeError: If `inputs` is neither Tuple nor list.

Examples:
>>> merge = ops.Merge()
>>> input_x = Tensor(np.linspace(0, 8, 8).reshape(2, 4), mindspore.float32)


mindspore/ops/operations/debug_ops.py (+31 -0)

@@ -58,6 +58,10 @@ class ScalarSummary(PrimitiveWithInfer):
- **name** (str) - The name of the input variable, it must not be an empty string.
- **value** (Tensor) - The value of scalar, and the shape of value must be [] or [1].

Raises:
TypeError: If `name` is not a str.
TypeError: If `value` is not a Tensor.

Supported Platforms:
``Ascend`` ``GPU`` ``CPU``

@@ -101,6 +105,10 @@ class ImageSummary(PrimitiveWithInfer):
- **name** (str) - The name of the input variable, it must not be an empty string.
- **value** (Tensor) - The value of image, the rank of tensor must be 4.

Raises:
TypeError: If `name` is not a str.
TypeError: If `value` is not a Tensor.

Supported Platforms:
``Ascend`` ``GPU`` ``CPU``

@@ -143,6 +151,10 @@ class TensorSummary(PrimitiveWithInfer):
- **name** (str) - The name of the input variable.
- **value** (Tensor) - The value of tensor, and the rank of tensor must be greater than 0.

Raises:
TypeError: If `name` is not a str.
TypeError: If `value` is not a Tensor.

Supported Platforms:
``Ascend`` ``GPU`` ``CPU``

@@ -186,6 +198,10 @@ class HistogramSummary(PrimitiveWithInfer):
- **name** (str) - The name of the input variable.
- **value** (Tensor) - The value of tensor, and the rank of tensor must be greater than 0.

Raises:
TypeError: If `name` is not a str.
TypeError: If `value` is not a Tensor.

Supported Platforms:
``Ascend`` ``GPU`` ``CPU``

@@ -234,6 +250,9 @@ class InsertGradientOf(PrimitiveWithInfer):
Outputs:
Tensor, returns `input_x` directly. `InsertGradientOf` does not affect the forward result.

Raises:
TypeError: If `f` is not a function of mindspore.

Supported Platforms:
``Ascend`` ``GPU`` ``CPU``

@@ -299,6 +318,10 @@ class HookBackward(PrimitiveWithInfer):
Inputs:
- **inputs** (Tensor) - The variable to hook.

Raises:
TypeError: If `inputs` are not a Tensor.
TypeError: If `hook_fn` is not a function of python.

Examples:
>>> def hook_fn(grad_out):
... print(grad_out)
@@ -351,6 +374,9 @@ class Print(PrimitiveWithInfer):
- **input_x** (Union[Tensor, bool, int, float, str, tuple, list]) - The graph node to attach to.
Supports multiple inputs which are separated by ','.

Raises:
TypeError: If `input_x` is not one of the following: Tensor, bool, int, float, str, tuple, list.

Supported Platforms:
``Ascend`` ``GPU``

@@ -410,6 +436,11 @@ class Assert(PrimitiveWithInfer):
- **condition** [Union[Tensor[bool], bool]] - The condition to evaluate.
- **input_data** (Union(tuple[Tensor], list[Tensor])) - The tensors to print out when condition is false.

Raises:
TypeError: If `summarize` is not an int.
TypeError: If `condition` is neither a Tensor nor a bool.
TypeError: If `input_data` is neither a tuple nor a list.

Examples:
>>> class AssertDemo(nn.Cell):
... def __init__(self):


mindspore/ops/operations/inner_ops.py (+15 -0)

@@ -34,6 +34,9 @@ class ScalarCast(PrimitiveWithInfer):
Outputs:
Scalar. The type is the same as the python type corresponding to `input_y`.

Raises:
TypeError: If neither `input_x` nor `input_y` is a constant value.

Supported Platforms:
``Ascend`` ``GPU`` ``CPU``

@@ -80,6 +83,11 @@ class Randperm(PrimitiveWithInfer):
Outputs:
- **output** (Tensor) - The output Tensor with shape: (`max_length`,) and type: `dtype`.

Raises:
TypeError: If neither `max_length` nor `pad` is an int.
TypeError: If `n` is not a Tensor.
TypeError: If `n` has non-Int elements.

Supported Platforms:
``Ascend``

@@ -137,6 +145,10 @@ class NoRepeatNGram(PrimitiveWithInfer):
Outputs:
- **log_probs** (Tensor) - The output Tensor with same shape and type as original `log_probs`.

Raises:
TypeError: If `ngram_size` is not an int.
TypeError: If neither `state_seq` nor `log_probs` is a Tensor.

Supported Platforms:
``Ascend``

@@ -324,6 +336,9 @@ class MakeRefKey(Primitive):
Outputs:
RefKeyType, made from the Parameter name.

Raises:
TypeError: If `tag` is not a str.

Supported Platforms:
``Ascend`` ``GPU`` ``CPU``



mindspore/ops/operations/math_ops.py (+123 -7)

@@ -140,6 +140,9 @@ class Add(_MathBinaryOp):
Tensor, the shape is the same as the one after broadcasting,
and the data type is the one with higher precision or higher digits among the two inputs.

Raises:
TypeError: If neither `input_x` nor `input_y` is one of the following: Tensor, Number, bool.

Supported Platforms:
``Ascend`` ``GPU`` ``CPU``

@@ -397,6 +400,11 @@ class ReduceMean(_Reduce):
- If axis is tuple(int), set as (2, 3), and keep_dims is False,
the shape of output is :math:`(x_1, x_4, ..., x_R)`.

Raises:
TypeError: If `keep_dims` is not a bool.
TypeError: If `input_x` is not a Tensor.
ValueError: If `axis` is not one of the following: int, tuple or list.

Supported Platforms:
``Ascend`` ``GPU`` ``CPU``

@@ -435,6 +443,11 @@ class ReduceSum(_Reduce):
- If axis is tuple(int), set as (2, 3), and keep_dims is False,
the shape of output is :math:`(x_1, x_4, ..., x_R)`.

Raises:
TypeError: If `keep_dims` is not a bool.
TypeError: If `input_x` is not a Tensor.
ValueError: If `axis` is None.

Supported Platforms:
``Ascend`` ``GPU`` ``CPU``
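An illustrative sketch of the ReduceSum contract documented above (not part of the diff; a minimal example using the public `ops.ReduceSum` primitive):

    import numpy as np
    import mindspore
    from mindspore import Tensor, ops

    reduce_sum = ops.ReduceSum(keep_dims=True)        # keep_dims must be a bool
    x = Tensor(np.ones((3, 4, 5)), mindspore.float32)
    out = reduce_sum(x, (1, 2))                       # axis may be an int, tuple or list
    print(out.shape)                                  # (3, 1, 1)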

@@ -479,6 +492,11 @@ class ReduceAll(_Reduce):
- If axis is tuple(int), set as (2, 3), and keep_dims is False,
the shape of output is :math:`(x_1, x_4, ..., x_R)`.

Raises:
TypeError: If `keep_dims` is not a bool.
TypeError: If `input_x` is not a Tensor.
ValueError: If `axis` is not one of the following: int, tuple or list.

Supported Platforms:
``Ascend`` ``GPU`` ``CPU``

@@ -521,6 +539,11 @@ class ReduceAny(_Reduce):
- If axis is tuple(int), set as (2, 3), and keep_dims is False,
the shape of output is :math:`(x_1, x_4, ..., x_R)`.

Raises:
TypeError: If `keep_dims` is not a bool.
TypeError: If `input_x` is not a Tensor.
ValueError: If `axis` is not one of the following: int, tuple or list.

Supported Platforms:
``Ascend`` ``GPU`` ``CPU``

@@ -563,6 +586,11 @@ class ReduceMax(_Reduce):
- If axis is tuple(int), set as (2, 3), and keep_dims is False,
the shape of output is :math:`(x_1, x_4, ..., x_R)`.

Raises:
TypeError: If `keep_dims` is not a bool.
TypeError: If `input_x` is not a Tensor.
ValueError: If `axis` is not one of the following: int, tuple or list.

Supported Platforms:
``Ascend`` ``GPU`` ``CPU``

@@ -611,6 +639,11 @@ class ReduceMin(_Reduce):
- If axis is tuple(int), set as (2, 3), and keep_dims is False,
the shape of output is :math:`(x_1, x_4, ..., x_R)`.

Raises:
TypeError: If `keep_dims` is not a bool.
TypeError: If `input_x` is not a Tensor.
ValueError: If `axis` is not one of the following: int, tuple or list.

Supported Platforms:
``Ascend`` ``GPU`` ``CPU``

@@ -650,6 +683,11 @@ class ReduceProd(_Reduce):
- If axis is tuple(int), set as (2, 3), and keep_dims is False,
the shape of output is :math:`(x_1, x_4, ..., x_R)`.

Raises:
TypeError: If `keep_dims` is not a bool.
TypeError: If `input_x` is not a Tensor.
ValueError: If `axis` is not one of the following: int, tuple or list.

Supported Platforms:
``Ascend``

@@ -1108,6 +1146,9 @@ class Neg(PrimitiveWithInfer):
Outputs:
Tensor, has the same shape and dtype as input.

Raises:
TypeError: If `input_x` is not a Tensor.

Supported Platforms:
``Ascend`` ``GPU`` ``CPU``

@@ -1295,6 +1336,9 @@ class Sub(_MathBinaryOp):
Tensor, the shape is the same as the one after broadcasting,
and the data type is the one with higher precision or higher digits among the two inputs.

Raises:
TypeError: If neither `input_x` nor `input_y` is a Number or a bool or a Tensor.

Supported Platforms:
``Ascend`` ``GPU`` ``CPU``

@@ -1342,6 +1386,10 @@ class Mul(_MathBinaryOp):
Tensor, the shape is the same as the one after broadcasting,
and the data type is the one with higher precision or higher digits among the two inputs.

Raises:
TypeError: If neither `input_x` nor `input_y` is one of the following: Tensor, Number, bool.
ValueError: If `input_x` and `input_y` are not the same shape.

Supported Platforms:
``Ascend`` ``GPU`` ``CPU``

@@ -1385,6 +1433,9 @@ class SquaredDifference(_MathBinaryOp):
Tensor, the shape is the same as the one after broadcasting,
and the data type is the one with higher precision or higher digits among the two inputs.

Raises:
TypeError: If neither `input_x` nor `input_y` is a Number or a bool or a Tensor.

Supported Platforms:
``Ascend`` ``GPU`` ``CPU``

@@ -1412,6 +1463,9 @@ class Square(PrimitiveWithCheck):
Outputs:
Tensor, has the same shape and dtype as the `input_x`.

Raises:
TypeError: If `input_x` is not a Tensor.

Supported Platforms:
``Ascend`` ``GPU`` ``CPU``

@@ -1451,6 +1505,9 @@ class Rsqrt(PrimitiveWithInfer):
Outputs:
Tensor, has the same type and shape as `input_x`.

Raises:
TypeError: If dtype of `input_x` is neither float16 nor float32.

Supported Platforms:
``Ascend`` ``GPU``

@@ -1494,6 +1551,9 @@ class Sqrt(PrimitiveWithCheck):
Outputs:
Tensor, has the same shape as the `input_x`.

Raises:
TypeError: If `input_x` is not a Tensor.

Supported Platforms:
``Ascend`` ``GPU`` ``CPU``

@@ -1532,6 +1592,9 @@ class Reciprocal(PrimitiveWithInfer):
Outputs:
Tensor, has the same shape as the `input_x`.

Raises:
TypeError: If `input_x` is not a Tensor.

Supported Platforms:
``Ascend`` ``GPU`` ``CPU``

@@ -1589,6 +1652,10 @@ class Pow(_MathBinaryOp):
Tensor, the shape is the same as the one after broadcasting,
and the data type is the one with higher precision or higher digits among the two inputs.

Raises:
TypeError: If neither `input_x` nor `input_y` is one of the following: Tensor, Number, bool.
ValueError: If `input_x` and `input_y` are not the same shape.

Supported Platforms:
``Ascend`` ``GPU`` ``CPU``

@@ -1945,6 +2012,10 @@ class Minimum(_MathBinaryOp):
Tensor, the shape is the same as the one after broadcasting,
and the data type is the one with higher precision or higher digits among the two inputs.

Raises:
TypeError: If neither `input_x` nor `input_y` is one of the following: Tensor, Number, bool.
ValueError: If `input_x` and `input_y` are not the same shape.

Supported Platforms:
``Ascend`` ``GPU`` ``CPU``

@@ -1988,6 +2059,10 @@ class Maximum(_MathBinaryOp):
Tensor, the shape is the same as the one after broadcasting,
and the data type is the one with higher precision or higher digits among the two inputs.

Raises:
TypeError: If neither `input_x` nor `input_y` is one of the following: Tensor, Number, bool.
ValueError: If `input_x` and `input_y` are not the same shape.

Supported Platforms:
``Ascend`` ``GPU`` ``CPU``
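Since the commit title calls out Maximum and Minimum, a hedged sketch of their documented contract follows (not part of the diff); the string argument is only there to trigger the newly documented TypeError:

    import numpy as np
    import mindspore
    from mindspore import Tensor, ops

    minimum = ops.Minimum()
    maximum = ops.Maximum()
    x = Tensor(np.array([1.0, 5.0, 3.0]), mindspore.float32)
    y = Tensor(np.array([4.0, 2.0, 6.0]), mindspore.float32)
    print(minimum(x, y))   # [1. 2. 3.]
    print(maximum(x, y))   # [4. 5. 6.]

    try:
        minimum("not a tensor", y)   # neither Tensor, Number nor bool
    except TypeError as e:
        print(e)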

@@ -2031,6 +2106,10 @@ class RealDiv(_MathBinaryOp):
Tensor, the shape is the same as the one after broadcasting,
and the data type is the one with higher precision or higher digits among the two inputs.

Raises:
TypeError: If neither `input_x` nor `input_y` is one of the following: Tensor, Number, bool.
ValueError: If `input_x` and `input_y` are not the same shape.

Supported Platforms:
``Ascend`` ``GPU`` ``CPU``

@@ -2266,6 +2345,9 @@ class TruncateDiv(_MathBinaryOp):
Tensor, the shape is the same as the one after broadcasting,
and the data type is the one with higher precision or higher digits among the two inputs.

Raises:
TypeError: If neither `input_x` nor `input_y` is one of the following: Tensor, Number, bool.

Supported Platforms:
``Ascend``

@@ -2300,6 +2382,9 @@ class TruncateMod(_MathBinaryOp):
Tensor, the shape is the same as the one after broadcasting,
and the data type is the one with higher precision or higher digits among the two inputs.

Raises:
TypeError: If neither `input_x` nor `input_y` is one of the following: Tensor, Number, bool.

Supported Platforms:
``Ascend``

@@ -2493,6 +2578,9 @@ class Xdivy(_MathBinaryOp):
Tensor, the shape is the same as the one after broadcasting,
and the data type is the one with higher precision or higher digits among the two inputs.

Raises:
TypeError: If neither `input_x` nor `input_y` is one of the following: Tensor, Number, bool.

Supported Platforms:
``Ascend``

@@ -2532,6 +2620,9 @@ class Xlogy(_MathBinaryOp):
Tensor, the shape is the same as the one after broadcasting,
and the data type is the one with higher precision or higher digits among the two inputs.

Raises:
TypeError: If neither `input_x` nor `input_y` is one of the following: Tensor, Number, bool.

Supported Platforms:
``Ascend``

@@ -2676,6 +2767,9 @@ class Sinh(PrimitiveWithInfer):
Outputs:
Tensor, has the same shape as `input_x`.

Raises:
TypeError: If `input_x` is not a Tensor.

Supported Platforms:
``Ascend`` ``CPU``

@@ -2880,6 +2974,10 @@ class NotEqual(_LogicBinaryOp):
Outputs:
Tensor, the shape is the same as the one after broadcasting, and the data type is bool.

Raises:
TypeError: If neither `input_x` nor `input_y` is one of the following: Tensor, Number, bool.
TypeError: If neither `input_x` nor `input_y` is a Tensor.

Supported Platforms:
``Ascend`` ``GPU`` ``CPU``

@@ -3011,7 +3109,7 @@ class Less(_LogicBinaryOp):
Tensor, the shape is the same as the one after broadcasting, and the data type is bool.

Raises:
TypeError: If neither `input_x` nor `input_y` is a Tensor.
TypeError: If neither `input_x` nor `input_y` is one of the following: Tensor, Number, bool.

Supported Platforms:
``Ascend`` ``GPU`` ``CPU``
@@ -3408,6 +3506,10 @@ class NPUGetFloatStatus(PrimitiveWithInfer):
Outputs:
Tensor, has the same shape as `input_x`. All the elements in the tensor will be zero.

Raises:
TypeError: If `input_x` is not a Tensor.
TypeError: If dtype of `input_x` is neither float16 nor float32.

Supported Platforms:
``Ascend``

@@ -3573,6 +3675,9 @@ class Sin(PrimitiveWithInfer):
Outputs:
Tensor, has the same shape as `input_x`.

Raises:
TypeError: If `input_x` is not a Tensor.

Supported Platforms:
``Ascend`` ``GPU`` ``CPU``

@@ -3644,11 +3749,6 @@ class NMSWithMask(PrimitiveWithInfer):
iou_threshold (float): Specifies the threshold of overlap boxes with respect to
IOU. Default: 0.5.

Raises:
ValueError: If the iou_threshold is not a float number, or if the first dimension
of input Tensor is less than or equal to 0, or if the data type of the input
Tensor is not float16 or float32.

Inputs:
- **bboxes** (Tensor) - The shape of tensor is :math:`(N, 5)`. Input bounding boxes.
`N` is the number of input bounding boxes. Every bounding box
@@ -3666,6 +3766,11 @@ class NMSWithMask(PrimitiveWithInfer):
- **selected_mask** (Tensor) - The shape of tensor is :math:`(N,)`. A mask list of
valid output bounding boxes.

Raises:
ValueError: If the `iou_threshold` is not a float number, or if the first dimension
of input Tensor is less than or equal to 0, or if the data type of the input
Tensor is not float16 or float32.

Supported Platforms:
``Ascend`` ``GPU``

@@ -3765,6 +3870,9 @@ class Sign(PrimitiveWithInfer):
Outputs:
Tensor, has the same shape and type as the `input_x`.

Raises:
TypeError: If `input_x` is not a Tensor.

Supported Platforms:
``Ascend`` ``CPU``

@@ -3798,6 +3906,9 @@ class Round(PrimitiveWithInfer):
Outputs:
Tensor, has the same shape and type as the `input_x`.

Raises:
TypeError: If `input_x` is not a Tensor.

Supported Platforms:
``Ascend``

@@ -3834,7 +3945,8 @@ class Tan(PrimitiveWithInfer):
Tensor, has the same shape as `input_x`.

Raises:
TypeError: If dtype of `input_x` is not one of float16, float32, int32.
TypeError: If dtype of `input_x` is not one of the following: float16, float32, int32.
TypeError: If `input_x` is not a Tensor.

Supported Platforms:
``Ascend`` ``CPU``
@@ -3986,6 +4098,10 @@ class SquareSumAll(PrimitiveWithInfer):
- **output_y1** (Tensor) - The same type as the `input_x1`.
- **output_y2** (Tensor) - The same type as the `input_x1`.

Raises:
TypeError: If neither `input_x1` nor `input_x2` is a Tensor.
ValueError: If `input_x1` and `input_x2` are not the same shape.

Supported Platforms:
``Ascend`` ``GPU``
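The commit title also mentions MatMul; its hunk is among the +123 lines of this file that are not shown here. A minimal, hedged sketch of a well-typed call:

    import numpy as np
    import mindspore
    from mindspore import Tensor, ops

    matmul = ops.MatMul(transpose_b=True)
    a = Tensor(np.ones((2, 3)), mindspore.float32)
    b = Tensor(np.ones((4, 3)), mindspore.float32)   # transposed to (3, 4) inside the op
    print(matmul(a, b).shape)                        # (2, 4)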



mindspore/ops/operations/nn_ops.py (+335 -20)

@@ -90,6 +90,7 @@ class Flatten(PrimitiveWithInfer):
the product of the remaining dimension.

Raises:
TypeError: If `input_x` is not a Tensor.
ValueError: If length of shape of `input_x` is less than 1.

Supported Platforms:
@@ -140,7 +141,7 @@ class Softmax(PrimitiveWithInfer):
Tensor, with the same type and shape as the logits.

Raises:
TypeError: If `axis` is neither an int not a tuple.
TypeError: If `axis` is neither an int nor a tuple.
TypeError: If dtype of `logits` is neither float16 nor float32.
ValueError: If `axis` is a tuple whose length is less than 1.
ValueError: If `axis` is a tuple whose elements are not all in range [-len(logits), len(logits)).
@@ -202,7 +203,7 @@ class LogSoftmax(PrimitiveWithInfer):
Raises:
TypeError: If `axis` is not an int.
TypeError: If dtype of `logits` is neither float16 nor float32.
ValueError: If `axis` is not in range [-len(logits), len(logits)).
ValueError: If `axis` is not in range [-len(logits), len(logits)].

Supported Platforms:
``Ascend`` ``GPU``
@@ -245,6 +246,10 @@ class Softplus(PrimitiveWithInfer):
Outputs:
Tensor, with the same type and shape as the `input_x`.

Raises:
TypeError: If `input_x` is not a Tensor.
TypeError: If dtype of `input_x` is not float.

Supported Platforms:
``Ascend`` ``GPU``

@@ -285,6 +290,10 @@ class Softsign(PrimitiveWithInfer):
Outputs:
Tensor, with the same type and shape as the `input_x`.

Raises:
TypeError: If `input_x` is not a Tensor.
TypeError: If dtype of `input_x` is neither float16 nor float32.

Supported Platforms:
``Ascend``

@@ -322,7 +331,8 @@ class ReLU(PrimitiveWithCheck):
Tensor, with the same type and shape as the `input_x`.

Raises:
TypeError: If dtype of `input_x` is not a number.
TypeError: If dtype of `input_x` is not number.
TypeError: If `input_x` is not a Tensor.

Supported Platforms:
``Ascend`` ``GPU`` ``CPU``
@@ -459,6 +469,7 @@ class ReLU6(PrimitiveWithCheck):

Raises:
TypeError: If dtype of `input_x` is neither float16 nor float32.
TypeError: If `input_x` is not a Tensor.

Supported Platforms:
``Ascend`` ``GPU`` ``CPU``
@@ -497,6 +508,11 @@ class ReLUV2(PrimitiveWithInfer):
- **output** (Tensor) - Has the same type and shape as the `input_x`.
- **mask** (Tensor) - A tensor whose data type must be uint8.

Raises:
TypeError: If `input_x`, `output` or `mask` is not a Tensor.
TypeError: If dtype of `output` is not the same as `input_x`.
TypeError: If dtype of `mask` is not uint8.

Supported Platforms:
``Ascend``

@@ -627,6 +643,7 @@ class HSwish(PrimitiveWithInfer):
Tensor, with the same type and shape as the `input_data`.

Raises:
TypeError: If `input_data` is not a Tensor.
TypeError: If dtype of `input_data` is neither float16 nor float32.

Supported Platforms:
@@ -671,6 +688,7 @@ class Sigmoid(PrimitiveWithInfer):

Raises:
TypeError: If dtype of `input_x` is neither float16 nor float32.
TypeError: If `input_x` is not a Tensor.

Supported Platforms:
``Ascend`` ``GPU``
@@ -715,6 +733,7 @@ class HSigmoid(PrimitiveWithInfer):
Tensor, with the same type and shape as the `input_data`.

Raises:
TypeError: If `input_data` is not a Tensor.
TypeError: If dtype of `input_data` is neither float16 nor float32.

Supported Platforms:
@@ -759,6 +778,7 @@ class Tanh(PrimitiveWithInfer):

Raises:
TypeError: If dtype of `input_x` is neither float16 nor float32.
TypeError: If `input_x` is not a Tensor.

Supported Platforms:
``Ascend`` ``GPU`` ``CPU``
@@ -822,6 +842,12 @@ class FusedBatchNorm(Primitive):
- **updated_moving_mean** (Tensor) - Tensor of shape :math:`(C,)`.
- **updated_moving_variance** (Tensor) - Tensor of shape :math:`(C,)`.

Raises:
TypeError: If `mode` is not an int.
TypeError: If `epsilon` or `momentum` is not a float.
TypeError: If `output_x`, `updated_scale`, `updated_bias`, `updated_moving_mean` or
`updated_moving_variance` is not a Tensor.

Supported Platforms:
``CPU``

@@ -920,6 +946,13 @@ class FusedBatchNormEx(PrimitiveWithCheck):
data type: float32.
- **reserve** (Tensor) - reserve space, Tensor of shape :math:`(C,)`, data type: float32.

Raises:
TypeError: If `mode` is not an int.
TypeError: If neither `epsilon` nor `momentum` is a float.
TypeError: If `data_format` is not a str.
TypeError: If `input_x` is not a Tensor.
TypeError: If dtype of `scale`, `bias`, `mean` or `variance` is not float32.

Supported Platforms:
``GPU``

@@ -1114,6 +1147,10 @@ class BNTrainingReduce(PrimitiveWithInfer):
- **sum** (Tensor) - A 1-D Tensor with float32 data type. Tensor of shape :math:`(C,)`.
- **square_sum** (Tensor) - A 1-D Tensor with float32 data type. Tensor of shape :math:`(C,)`.

Raises:
TypeError: If `x`, `sum` or `square_sum` is not a Tensor.
TypeError: If dtype of `square_sum` is neither float16 nor float32.

Supported Platforms:
``Ascend``

@@ -1175,6 +1212,13 @@ class BNTrainingUpdate(PrimitiveWithInfer):
- **batch_variance** (Tensor) - Tensor for the mean of `variance`, with float32 data type.
Has the same shape as `variance`.

Raises:
TypeError: If `isRef` is not a bool.
TypeError: If dtype of `epsilon` or `factor` is not float.
TypeError: If `x`, `sum`, `square_sum`, `scale`, `offset`, `mean` or `variance` is not a Tensor.
TypeError: If dtype of `x`, `sum`, `square_sum`, `scale`, `offset`, `mean` or `variance` is neither float16 nor
float32.

Supported Platforms:
``Ascend``

@@ -1290,6 +1334,13 @@ class BatchNorm(PrimitiveWithInfer):
- **reserve_space_1** (Tensor) - Tensor of shape :math:`(C,)`.
- **reserve_space_2** (Tensor) - Tensor of shape :math:`(C,)`.

Raises:
TypeError: If `is_training` is not a bool.
TypeError: If dtype of `epsilon` or `momentum` is not float.
TypeError: If `data_format` is not a str.
TypeError: If `input_x`, `scale`, `bias`, `mean` or `variance` is not a Tensor.
TypeError: If dtype of `input_x`, `scale` or `mean` is neither float16 nor float32.

Supported Platforms:
``Ascend``

@@ -1414,7 +1465,7 @@ class Conv2D(PrimitiveWithCheck):
Tensor, the value that applied 2D convolution. The shape is :math:`(N, C_{out}, H_{out}, W_{out})`.

Raises:
TypeError: If `kernel_size`, `stride`, `pad` or `dilation` is neither an int not a tuple.
TypeError: If `kernel_size`, `stride`, `pad` or `dilation` is neither an int nor a tuple.
TypeError: If `out_channel` or `group` is not an int.
ValueError: If `kernel_size`, `stride` or `dilation` is less than 1.
ValueError: If `pad_mode` is not one of 'same', 'valid', 'pad'.
@@ -1521,6 +1572,13 @@ class DepthwiseConv2dNative(PrimitiveWithInfer):
Outputs:
Tensor of shape :math:`(N, C_{in} * \text{channel_multiplier}, H_{out}, W_{out})`.

Raises:
TypeError: If `kernel_size`, `stride`, `pad` or `dilation` is neither an int nor a tuple.
TypeError: If `channel_multiplier` or `group` is not an int.
ValueError: If `stride` or `dilation` is less than 1.
ValueError: If `pad_mode` is not one of the following: 'same', 'valid' or 'pad'.
ValueError: If `pad_mode` is not equal to 'pad' and `pad` is not equal to (0, 0, 0, 0).

Supported Platforms:
``Ascend``

@@ -1811,6 +1869,8 @@ class MaxPoolWithArgmax(_Pool):

Raises:
TypeError: If the input data type is not float16 or float32.
TypeError: If `kernel_size` or `strides` is neither an int nor a tuple.
TypeError: If `input` is not a Tensor.

Supported Platforms:
``Ascend`` ``GPU``
@@ -2054,7 +2114,7 @@ class Conv2DBackpropInput(PrimitiveWithInfer):
Tensor, the gradients w.r.t the input of convolution. It has the same shape as the input.

Raises:
TypeError: If `kernel_size`, `stride`, `pad` or `dilation` is neither an int not a tuple.
TypeError: If `kernel_size`, `stride`, `pad` or `dilation` is neither an int nor a tuple.
TypeError: If `out_channel` or `group` is not an int.
ValueError: If `kernel_size`, `stride` or `dilation` is less than 1.
ValueError: If `pad_mode` is not one of 'same', 'valid', 'pad'.
@@ -2190,6 +2250,9 @@ class BiasAdd(PrimitiveWithCheck):
Outputs:
Tensor, with the same shape and type as `input_x`.

Raises:
TypeError: If `data_format`, `input_x` or `bias` is not a Tensor.

Supported Platforms:
``Ascend`` ``GPU`` ``CPU``

@@ -2242,6 +2305,12 @@ class TopK(PrimitiveWithInfer):
- **values** (Tensor) - The `k` largest elements in each slice of the last dimension.
- **indices** (Tensor) - The indices of values within the last dimension of input.

Raises:
TypeError: If `sorted` is not a bool.
TypeError: If `input_x` is not a Tensor.
TypeError: If `k` is not an int.
TypeError: If dtype of `input_x` is not one of the following: float16, float32 or int32.

Supported Platforms:
``Ascend`` ``GPU`` ``CPU``
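A hedged sketch matching the TopK contract documented above (`sorted` a bool, `k` an int, `input_x` a float16/float32/int32 Tensor); not part of the diff:

    import numpy as np
    import mindspore
    from mindspore import Tensor, ops

    topk = ops.TopK(sorted=True)
    x = Tensor(np.array([1.0, 7.0, 3.0, 5.0]), mindspore.float32)
    values, indices = topk(x, 2)
    print(values)    # [7. 5.]
    print(indices)   # [1 3]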

@@ -2391,6 +2460,7 @@ class SoftmaxCrossEntropyWithLogits(PrimitiveWithInfer):

Raises:
TypeError: If dtype of `logits` or `labels` is neither float16 nor float32.
TypeError: If `logits` or `labels` is not a Tensor.
ValueError: If shape of `logits` is not the same as `labels`.

Supported Platforms:
@@ -2587,7 +2657,8 @@ class SmoothL1Loss(PrimitiveWithInfer):

Raises:
TypeError: If `beta` is not a float.
TypeError: If dtype of `prediction` or `target` is neither float16 not float32.
TypeError: If `prediction` or `target` is not a Tensor.
TypeError: If dtype of `prediction` or `target` is neither float16 nor float32.
ValueError: If `beta` is less than or equal to 0.
ValueError: If shape of `prediction` is not the same as `target`.

@@ -2634,6 +2705,10 @@ class L2Loss(PrimitiveWithInfer):
Outputs:
Tensor, has the same dtype as `input_x`. The output tensor is the value of loss which is a scalar tensor.

Raises:
TypeError: If `input_x` is not a Tensor.
TypeError: If dtype of `input_x` is neither float16 nor float32.

Supported Platforms:
``Ascend`` ``GPU``

@@ -2674,6 +2749,10 @@ class DataFormatDimMap(PrimitiveWithInfer):
Outputs:
Tensor, has the same type as the `input_x`.

Raises:
TypeError: If `src_format` or `dst_format` is not a str.
TypeError: If `input_x` is not a Tensor whose dtype is int32.

Supported Platforms:
``Ascend``

@@ -2718,6 +2797,11 @@ class RNNTLoss(PrimitiveWithInfer):
- **costs** (Tensor[int32]) - Tensor of shape :math:`(B,)`.
- **grads** (Tensor[int32]) - Has the same shape as `acts`.

Raises:
TypeError: If `acts`, `labels`, `input_lengths` or `label_lengths` is not a Tensor.
TypeError: If dtype of `acts` is neither float16 nor float32.
TypeError: If dtype of `labels`, `input_lengths` or `label_lengths` is not int32.

Supported Platforms:
``Ascend``

@@ -2791,6 +2875,13 @@ class SGD(PrimitiveWithCheck):
Outputs:
Tensor, parameters to be updated.

Raises:
TypeError: If `dampening` or `weight_decay` is not a float.
TypeError: If `nesterov` is not a bool.
TypeError: If `parameters`, `gradient`, `learning_rate`, `accum`, `momentum` or `stat` is not a Tensor.
TypeError: If dtype of `parameters`, `gradient`, `learning_rate`, `accum`, `momentum` or `stat` is neither
float16 nor float32.

Supported Platforms:
``Ascend`` ``GPU``

@@ -2876,6 +2967,14 @@ class ApplyRMSProp(PrimitiveWithInfer):
Outputs:
Tensor, parameters to be updated.

Raises:
TypeError: If `use_locking` is not a bool.
TypeError: If `var`, `mean_square`, `moment` or `decay` is not a Tensor.
TypeError: If `learning_rate` is neither a Number nor a Tensor.
TypeError: If dtype of `decay`, `momentum` or `epsilon` is not float.
TypeError: If dtype of `learning_rate` is neither float16 nor float32.
ValueError: If `decay`, `momentum` or `epsilon` is not a constant value.

Supported Platforms:
``Ascend`` ``GPU``

@@ -2972,6 +3071,13 @@ class ApplyCenteredRMSProp(PrimitiveWithInfer):
Outputs:
Tensor, parameters to be updated.

Raises:
TypeError: If `use_locking` is not a bool.
TypeError: If `var`, `mean_gradient`, `mean_square`, `moment` or `grad` is not a Tensor.
TypeError: If `learning_rate` is neither a Number nor a Tensor.
TypeError: If dtype of `learning_rate` is neither float16 nor float32.
TypeError: If dtype of `decay`, `momentum` or `epsilon` is not float.

Supported Platforms:
``Ascend`` ``GPU``

@@ -3059,6 +3165,7 @@ class LayerNorm(Primitive):
Raises:
TypeError: If `begin_norm_axis` or `begin_params_axis` is not an int.
TypeError: If `epsilon` is not a float.
TypeError: If `input_x`, `gamma` or `beta` is not a Tensor.

Supported Platforms:
``Ascend`` ``GPU``
@@ -3109,6 +3216,12 @@ class L2Normalize(PrimitiveWithInfer):
Outputs:
Tensor, with the same type and shape as the input.

Raises:
TypeError: If `axis` is not one of the following: list, tuple or int.
TypeError: If `epsilon` is not a float.
TypeError: If `input_x` is not a Tensor.
TypeError: If dtype of `input_x` is neither float16 nor float32.

Supported Platforms:
``Ascend`` ``GPU``

@@ -3157,6 +3270,11 @@ class DropoutGenMask(Primitive):
Outputs:
Tensor, the value of generated mask for input shape.

Raises:
TypeError: If neither `seed0` nor `seed1` is an int.
TypeError: If `shape` is not a tuple.
TypeError: If `keep_prob` is not a Tensor.

Supported Platforms:
``Ascend``

@@ -3196,6 +3314,11 @@ class DropoutDoMask(PrimitiveWithInfer):
Outputs:
Tensor, the value that applied dropout on.

Raises:
TypeError: If `input_x`, `mask` or `keep_prob` is not a Tensor.
TypeError: If `keep_prob` is not a float.
ValueError: If value of `keep_prob` is not the same as `DropoutGenMask`.

Supported Platforms:
``Ascend``

@@ -3271,6 +3394,7 @@ class ResizeBilinear(PrimitiveWithInfer):
TypeError: If `size` is neither a tuple nor list.
TypeError: If `align_corners` is not a bool.
TypeError: If dtype of `input` is neither float16 nor float32.
TypeError: If `input` is not a Tensor.
ValueError: If length of shape of `input` is not equal to 4.

Supported Platforms:
@@ -3333,15 +3457,16 @@ class OneHot(PrimitiveWithInfer):
Outputs:
Tensor, one-hot tensor. Tensor of shape :math:`(X_0, \ldots, X_{axis}, \text{depth} ,X_{axis+1}, \ldots, X_n)`.

Supported Platforms:
``Ascend`` ``GPU`` ``CPU``

Raises:
TypeError: If `axis` or `depth` is not an int.
TypeError: If dtype of `indices` is neither int32 nor int64.
TypeError: If `indices`, `on_value` or `off_value` is not a Tensor.
ValueError: If `axis` is not in range [-1, len(indices_shape)].
ValueError: If `depth` is less than 0.

Supported Platforms:
``Ascend`` ``GPU`` ``CPU``

Examples:
>>> indices = Tensor(np.array([0, 1, 2]), mindspore.int32)
>>> depth, on_value, off_value = 3, Tensor(1.0, mindspore.float32), Tensor(0.0, mindspore.float32)
@@ -3419,6 +3544,7 @@ class GeLU(PrimitiveWithInfer):
Tensor, with the same type and shape as input.

Raises:
TypeError: If `input_x` is not a Tensor.
TypeError: If dtype of `input_x` is neither float16 nor float32.

Supported Platforms:
@@ -3590,6 +3716,7 @@ class PReLU(PrimitiveWithInfer):

Raises:
TypeError: If dtype of `input_x` or `weight` is neither float16 nor float32.
TypeError: If `input_x` or `weight` is not a Tensor.
ValueError: If length of shape of `input_x` is equal to 1.
ValueError: If length of shape of `weight` is not equal to 1.

@@ -3804,6 +3931,9 @@ class SigmoidCrossEntropyWithLogits(PrimitiveWithInfer):
Outputs:
Tensor, with the same shape and type as input `logits`.

Raises:
TypeError: If `logits` or `label` is not a Tensor.

Supported Platforms:
``Ascend`` ``GPU`` ``CPU``

@@ -3943,6 +4073,7 @@ class Pad(PrimitiveWithInfer):

Raises:
TypeError: If `paddings` is not a tuple.
TypeError: If `input_x` is not a Tensor.
ValueError: If shape of `paddings` is not (n, 2).

Supported Platforms:
@@ -4013,6 +4144,10 @@ class MirrorPad(PrimitiveWithInfer):
is [[1,2,3], [4,5,6], [7,8,9]] and `paddings` is [[1,1], [2,2]], then the Outputs is
[[2,1,1,2,3,3,2], [2,1,1,2,3,3,2], [5,4,4,5,6,6,5], [8,7,7,8,9,9,8], [8,7,7,8,9,9,8]].

Raises:
TypeError: If `input_x` or `paddings` is not a Tensor.
TypeError: If `mode` is not a str.

Supported Platforms:
``Ascend`` ``GPU``

@@ -4091,6 +4226,11 @@ class ComputeAccidentalHits(PrimitiveWithCheck):
- **ids** (Tensor) - A Tensor with shape (num_accidental_hits,), with the same type as `true_classes`.
- **weights** (Tensor) - A Tensor with shape (num_accidental_hits,), with the type float32.

Raises:
TypeError: If dtype of `num_true` is not int.
TypeError: If `true_classes` or `sampled_candidates` is not a Tensor.
TypeError: If dtype of `true_classes` or `sampled_candidates` is neither int32 nor int64.

Supported Platforms:
``Ascend``

@@ -4162,6 +4302,11 @@ class ROIAlign(PrimitiveWithInfer):
Outputs:
Tensor, the shape is `(rois_n, C, pooled_height, pooled_width)`.

Raises:
TypeError: If `pooled_height`, `pooled_width`, `sample_num` or `roi_end_mode` is not an int.
TypeError: If `spatial_scale` is not a float.
TypeError: If `features` or `rois` is not a Tensor.

Supported Platforms:
``Ascend`` ``GPU``

@@ -4252,8 +4397,9 @@ class Adam(PrimitiveWithInfer):
- **v** (Tensor) - The same shape and data type as `v`.

Raises:
TypeError: If `use_locking` or `use_nesterov` is not a bool.
ValueError: If shape of `var`, `m` and `v` is not the same.
TypeError: If neither `use_locking` nor `use_nesterov` is a bool.
TypeError: If `var`, `m` or `v` is not a Tensor.
TypeError: If `beta1_power`, `beta2_power`, `lr`, `beta1`, `beta2`, `epsilon` or `gradient` is not a Tensor.

Supported Platforms:
``Ascend`` ``GPU``
@@ -4361,6 +4507,11 @@ class AdamNoUpdateParam(PrimitiveWithInfer):
Tensor, whose shape and data type are the same with `gradient`, is a value that should be added to the
parameter to be updated.

Raises:
TypeError: If neither `use_locking` nor `use_nesterov` is a bool.
TypeError: If `m`, `v`, `beta1_power`, `beta2_power`, `lr`,
`beta1`, `beta2`, `epsilon` or `gradient` is not a Tensor.

Supported Platforms:
``CPU``

@@ -4476,6 +4627,11 @@ class FusedSparseAdam(PrimitiveWithInfer):
- **m** (Tensor) - A Tensor with shape (1,).
- **v** (Tensor) - A Tensor with shape (1,).

Raises:
TypeError: If neither `use_locking` nor `use_nesterov` is a bool.
TypeError: If dtype of `var`, `m`, `v`, `beta1_power`, `beta2_power`, `lr`, `beta1`, `beta2`, `epsilon`,
`gradient` or `indices` is not float32.

Supported Platforms:
``CPU``

@@ -4615,6 +4771,12 @@ class FusedSparseLazyAdam(PrimitiveWithInfer):
- **m** (Tensor) - A Tensor with shape (1,).
- **v** (Tensor) - A Tensor with shape (1,).

Raises:
TypeError: If neither `use_locking` nor `use_nesterov` is a bool.
TypeError: If dtype of `var`, `m`, `v`, `beta1_power`, `beta2_power`, `lr`, `beta1`, `beta2`, `epsilon` or
`gradient` is not float32.
TypeError: If dtype of `indices` is not int32.

Supported Platforms:
``CPU``

@@ -4728,6 +4890,14 @@ class FusedSparseFtrl(PrimitiveWithInfer):
- **accum** (Tensor) - A Tensor with shape (1,).
- **linear** (Tensor) - A Tensor with shape (1,).

Raises:
TypeError: If `lr`, `l1`, `l2` or `lr_power` is not a float.
ValueError: If shape of `lr_power` is less than or equal to zero.
TypeError: If dtype of `var` is not float32.
TypeError: If dtype of `indices` is not int32.
TypeError: If shape of `accum`, `linear` or `grad` is not the same as `var`.
TypeError: If shape of `indices` is not the same as the shape of the first dimension of `grad`.

Supported Platforms:
``Ascend`` ``CPU``

@@ -4837,6 +5007,11 @@ class FusedSparseProximalAdagrad(PrimitiveWithInfer):
- **var** (Tensor) - A Tensor with shape (1,).
- **accum** (Tensor) - A Tensor with shape (1,).

Raises:
TypeError: If `use_locking` is not a bool.
TypeError: If dtype of `var`, `accum`, `lr`, `l1`, `l2` or `grad` is not float32.
TypeError: If dtype of `indices` is not int32.

Supported Platforms:
``CPU``

@@ -4936,6 +5111,11 @@ class KLDivLoss(PrimitiveWithInfer):
Tensor or Scalar, if `reduction` is 'none', then output is a tensor and has the same shape as `input_x`.
Otherwise it is a scalar.

Raises:
TypeError: If `reduction` is not a str.
TypeError: If neither `input_x` nor `input_y` is a Tensor.
TypeError: If dtype of `input_x` or `input_y` is not float32.

Supported Platforms:
``GPU``

@@ -5018,6 +5198,7 @@ class BinaryCrossEntropy(PrimitiveWithInfer):
TypeError: If dtype of `input_x`, `input_y` or `weight` (if given) is neither float16 nor float32.
ValueError: If `reduction` is not one of 'none', 'mean', 'sum'.
ValueError: If shape of `input_y` is not the same as `input_x` or `weight` (if given).
TypeError: If `input_x`, `input_y` or `weight` is not a Tensor.

Supported Platforms:
``Ascend`` ``GPU`` ``CPU``
@@ -5120,6 +5301,12 @@ class ApplyAdaMax(PrimitiveWithInfer):
- **m** (Tensor) - The same shape and data type as `m`.
- **v** (Tensor) - The same shape and data type as `v`.

Raises:
TypeError: If dtype of `var`, `m`, `v`, `beta_power`, `lr`, `beta1`,
`beta2`, `epsilon` or `grad` is neither float16 nor float32.
TypeError: If `beta_power`, `lr`, `beta1`, `beta2` or `epsilon` is neither a Number nor a Tensor.
TypeError: If `grad` is not a Tensor.

Supported Platforms:
``Ascend``

@@ -5254,6 +5441,11 @@ class ApplyAdadelta(PrimitiveWithInfer):
- **accum** (Tensor) - The same shape and data type as `accum`.
- **accum_update** (Tensor) - The same shape and data type as `accum_update`.

Raises:
TypeError: If dtype of `var`, `accum`, `accum_update`,
`lr`, `rho`, `epsilon` or `grad` is neither float16 nor float32.
TypeError: If `accum_update`, `lr`, `rho` or `epsilon` is neither a Number nor a Tensor.

Supported Platforms:
``Ascend``

@@ -5368,6 +5560,10 @@ class ApplyAdagrad(PrimitiveWithInfer):
- **var** (Tensor) - The same shape and data type as `var`.
- **accum** (Tensor) - The same shape and data type as `accum`.

Raises:
TypeError: If dtype of `var`, `accum`, `lr` or `grad` is neither float16 nor float32.
TypeError: If `lr` is neither a Number nor a Tensor.

Supported Platforms:
``Ascend`` ``CPU`` ``GPU``

@@ -5463,6 +5659,10 @@ class ApplyAdagradV2(PrimitiveWithInfer):
- **var** (Tensor) - The same shape and data type as `var`.
- **accum** (Tensor) - The same shape and data type as `m`.

Raises:
TypeError: If dtype of `var`, `accum`, `lr` or `grad` is neither float16 nor float32.
TypeError: If `lr` is neither a Number nor a Tensor.

Supported Platforms:
``Ascend``

@@ -5559,6 +5759,13 @@ class SparseApplyAdagrad(PrimitiveWithInfer):
- **var** (Tensor) - The same shape and data type as `var`.
- **accum** (Tensor) - The same shape and data type as `accum`.

Raises:
TypeError: If `lr` is not a float.
TypeError: If neither `update_slots` nor `use_locking` is a bool.
TypeError: If dtype of `var`, `accum` or `grad` is neither float16 nor float32.
TypeError: If dtype of `indices` is not int32.


Supported Platforms:
``Ascend``

@@ -5656,6 +5863,12 @@ class SparseApplyAdagradV2(PrimitiveWithInfer):
- **var** (Tensor) - The same shape and data type as `var`.
- **accum** (Tensor) - The same shape and data type as `accum`.

Raises:
TypeError: If neither `lr` nor `epsilon` is a float.
TypeError: If neither `update_slots` nor `use_locking` is a bool.
TypeError: If dtype of `var`, `accum` or `grad` is neither float16 nor float32.
TypeError: If dtype of `indices` is not int32.

Supported Platforms:
``Ascend``

@@ -5756,6 +5969,12 @@ class ApplyProximalAdagrad(PrimitiveWithInfer):
- **var** (Tensor) - The same shape and data type as `var`.
- **accum** (Tensor) - The same shape and data type as `accum`.

Raises:
TypeError: If `use_locking` is not a bool.
TypeError: If dtype of `var`, `lr`, `l1` or `l2` is neither float16 nor float32.
TypeError: If `lr`, `l1` or `l2` is neither a Number nor a Tensor.
TypeError: If `grad` is not a Tensor.

Supported Platforms:
``Ascend``

@@ -5874,6 +6093,12 @@ class SparseApplyProximalAdagrad(PrimitiveWithCheck):
- **var** (Tensor) - The same shape and data type as `var`.
- **accum** (Tensor) - The same shape and data type as `accum`.

Raises:
TypeError: If `use_locking` is not a bool.
TypeError: If dtype of `var`, `accum`, `lr`, `l1`, `l2`, `scalar` or `grad` is neither float16
nor float32.
TypeError: If dtype of `indices` is neither int32 nor int64.

Supported Platforms:
``Ascend`` ``GPU``

@@ -5977,6 +6202,11 @@ class ApplyAddSign(PrimitiveWithInfer):
- **var** (Tensor) - The same shape and data type as `var`.
- **m** (Tensor) - The same shape and data type as `m`.

Raises:
TypeError: If dtype of `var`, `lr`, `alpha`, `sign_decay` or `beta` is neither float16 nor float32.
TypeError: If `lr`, `alpha` or `sign_decay` is neither a Number nor a Tensor.
TypeError: If `grad` is not a Tensor.

Supported Platforms:
``Ascend``

@@ -6099,6 +6329,11 @@ class ApplyPowerSign(PrimitiveWithInfer):
- **var** (Tensor) - The same shape and data type as `var`.
- **m** (Tensor) - The same shape and data type as `m`.

Raises:
TypeError: If dtype of `var`, `lr`, `logbase`, `sign_decay`, `beta` or `grad` is neither float16 nor float32.
TypeError: If `lr`, `logbase`, `sign_decay` or `beta` is neither a Number nor a Tensor.
TypeError: If `grad` is not a Tensor.

Supported Platforms:
``Ascend``

@@ -6203,6 +6438,11 @@ class ApplyGradientDescent(PrimitiveWithInfer):
Outputs:
Tensor, represents the updated `var`.

Raises:
TypeError: If dtype of `var` or `alpha` is neither float16 nor float32.
TypeError: If `delta` is not a Tensor.
TypeError: If `alpha` is neither a Number nor a Tensor.

Supported Platforms:
``Ascend``

@@ -6281,6 +6521,11 @@ class ApplyProximalGradientDescent(PrimitiveWithInfer):
Outputs:
Tensor, represents the updated `var`.

Raises:
TypeError: If dtype of `var`, `alpha`, `l1` or `l2` is neither float16 nor float32.
TypeError: If `alpha`, `l1` or `l2` is neither a Number nor a Tensor.
TypeError: If `delta` is not a Tensor.

Supported Platforms:
``Ascend``

@@ -6367,6 +6612,13 @@ class LARSUpdate(PrimitiveWithInfer):
Outputs:
Tensor, represents the new gradient.

Raises:
TypeError: If neither `epsilon` nor `hyperpara` is a float.
TypeError: If `use_clip` is not a bool.
TypeError: If `weight`, `gradient`, `norm_weight` or `norm_gradient` is not a Tensor.
TypeError: If `weight_decay` or `learning_rate` is neither a Number nor a Tensor.
TypeError: If shape of `gradient` is not the same as `weight`.

Supported Platforms:
``Ascend``

@@ -6459,6 +6711,12 @@ class ApplyFtrl(PrimitiveWithInfer):
- **var** (Tensor) - represents the updated `var`. As the input parameters have been updated in-place, this
value is always zero when the platform is GPU.

Raises:
TypeError: If `use_locking` is not a bool.
TypeError: If dtype of `var`, `grad`, `lr`, `l1`, `l2` or `lr_power` is neither float16 nor float32.
TypeError: If `lr`, `l1`, `l2` or `lr_power` is neither a Number nor a Tensor.
TypeError: If `grad` is not a Tensor.

Supported Platforms:
``Ascend`` ``GPU``

@@ -6553,6 +6811,13 @@ class SparseApplyFtrl(PrimitiveWithCheck):
- **accum** (Tensor) - Tensor, has the same shape and data type as `accum`.
- **linear** (Tensor) - Tensor, has the same shape and data type as `linear`.

Raises:
TypeError: If `lr`, `l1`, `l2` or `lr_power` is not a float.
TypeError: If `use_locking` is not a bool.
TypeError: If dtype of `var`, `accum`, `linear` or `grad` is neither float16 nor float32.
TypeError: If dtype of `indices` is neither int32 nor int64.


Supported Platforms:
``Ascend`` ``GPU``

@@ -6660,6 +6925,12 @@ class SparseApplyFtrlV2(PrimitiveWithInfer):
- **accum** (Tensor) - Tensor, has the same shape and data type as `accum`.
- **linear** (Tensor) - Tensor, has the same shape and data type as `linear`.

Raises:
TypeError: If `lr`, `l1`, `l2` or `lr_power` is not a float.
TypeError: If `use_locking` is not a bool.
TypeError: If dtype of `var`, `accum`, `linear` or `grad` is neither float16 nor float32.
TypeError: If dtype of `indices` is not int32.

Supported Platforms:
``Ascend``

@@ -6754,9 +7025,8 @@ class Dropout(PrimitiveWithCheck):
Raises:
TypeError: If `keep_prob` is not a float.
TypeError: If `Seed0` or `Seed1` is not an int.
TypeError: If dtype of `input` is not neither float16 nor float32.
ValueError: If `keep_prob` is not in range (0, 1].
ValueError: If length of shape of `input` is less than 1.
TypeError: If dtype of `input` is neither float16 nor float32.
TypeError: If `input` is not a Tensor.

Supported Platforms:
``Ascend`` ``GPU`` ``CPU``
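A hedged sketch of the corrected Dropout contract (float `keep_prob` in (0, 1], float16/float32 Tensor input); not part of the diff:

    import numpy as np
    import mindspore
    from mindspore import Tensor, ops

    dropout = ops.Dropout(keep_prob=0.5)
    x = Tensor(np.ones((2, 4)), mindspore.float32)
    output, mask = dropout(x)   # returns the dropped-out tensor and the generated mask
    print(output.shape)         # (2, 4)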
@@ -6867,6 +7137,14 @@ class CTCLoss(PrimitiveWithInfer):
the same type with `inputs`.
- **gradient** (Tensor) - The gradient of `loss`, has the same type and shape with `inputs`.

Raises:
TypeError: If `preprocess_collapse_repeated`, `ctc_merge_repeated` or `ignore_longer_outputs_than_inputs` is
not a bool.
TypeError: If `inputs`, `labels_indices`, `labels_values` or `sequence_length` is not a Tensor.
TypeError: If dtype of `inputs` is not one of the following: float16, float32 or float64.
TypeError: If dtype of `labels_indices` is not int64.
TypeError: If dtype of `labels_values` or `sequence_length` is not int32.

Supported Platforms:
``Ascend`` ``GPU``

@@ -7048,6 +7326,13 @@ class BasicLSTMCell(PrimitiveWithInfer):
- **tanhct** (Tensor) - Forward :math:`tanh c_t` cache at moment `t`.
Tensor of shape (`batch_size`, `hidden_size`), has the same type with input `c`.

Raises:
TypeError: If dtype of `keep_prob` or `forget_bias` is not float.
TypeError: If `state_is_tuple` is not a bool.
TypeError: If `activation` is not a str.
TypeError: If `x`, `h`, `c`, `w` or `b` is not a Tensor.
TypeError: If dtype of `x`, `h`, `c` or `w` is neither float16 nor float32.

Supported Platforms:
``Ascend``

@@ -7180,6 +7465,15 @@ class DynamicRNN(PrimitiveWithInfer):
- **tanhct** (Tensor) - A Tensor of shape (`num_step`, `batch_size`, `hidden_size`).
Has the same type with input `b`.

Raises:
TypeError: If `cell_type`, `direction` or `activation` is not a str.
TypeError: If `cell_depth` or `num_proj` is not an int.
TypeError: If `keep_prob`, `cell_clip` or `forget_bias` is not a float.
TypeError: If `use_peephole`, `time_major` or `is_training` is not a bool.
TypeError: If `x`, `w`, `b`, `seq_length`, `init_h` or `init_c` is not a Tensor.
TypeError: If dtype of `x`, `w`, `init_h` or `init_c` is not float16.
TypeError: If dtype of `b` is neither float16 nor float32.

Supported Platforms:
``Ascend``

@@ -7332,6 +7626,16 @@ class DynamicGRUV2(PrimitiveWithInfer):
- If `bias_input` is not `None`, `bias_type` is the data type of `bias_input`.
- If `bias_input` is `None` and `bias_hidden` is not `None`, `bias_type` is the data type of `bias_hidden`.

Raises:
TypeError: If `direction`, `activation` or `gate_order` is not a str.
TypeError: If `cell_depth` or `num_proj` is not an int.
TypeError: If `keep_prob` or `cell_clip` is not a float.
TypeError: If `time_major`, `reset_after` or `is_training` is not a bool.
TypeError: If `x`, `weight_input`, `weight_hidden`, `bias_input`, `bias_hidden`, `seq_length` or `init_h` is
not a Tensor.
TypeError: If dtype of `x`, `weight_input` or `weight_hidden` is not float16.
TypeError: If dtype of `init_h` is neither float16 nor float32.

Supported Platforms:
``Ascend``

@@ -7446,6 +7750,11 @@ class InTopK(PrimitiveWithInfer):
Tensor has 1 dimension of type bool and the same shape with `x2`. For labeling sample `i` in `x2`,
if the label in the first `k` predictions for sample `i` is in `x1`, then the value is True, otherwise False.

Raises:
TypeError: If `k` is not an int.
TypeError: If `x1` or `x2` is not a Tensor.
TypeError: If dtype of `x1` is neither float16 nor float32.

Supported Platforms:
``Ascend``

@@ -7499,6 +7808,12 @@ class LRN(PrimitiveWithInfer):
Outputs:
Tensor, with the same shape and data type as the input tensor.

Raises:
TypeError: If `depth_radius` is not an int.
TypeError: If `bias`, `alpha` or `beta` is not a float.
TypeError: If `norm_region` is not a str.
TypeError: If `x` is not a Tensor.

Supported Platforms:
``Ascend``

@@ -7607,9 +7922,6 @@ class Conv3D(PrimitiveWithInfer):
Outputs:
Tensor, the value that applied 3D convolution. The shape is :math:`(N, C_{out}, D_{out}, H_{out}, W_{out})`.

Supported Platforms:
``Ascend``

Raises:
TypeError: If `out_channel` or `group` is not an int.
TypeError: If `kernel_size`, `stride`, `pad` or `dilation` is neither an int nor a tuple.
@@ -7620,6 +7932,9 @@ class Conv3D(PrimitiveWithInfer):
ValueError: If `pad_mode` is not equal to 'pad' and `pad` is not equal to (0, 0, 0, 0, 0, 0).
ValueError: If `data_format` is not 'NCDHW'.

Supported Platforms:
``Ascend``

Examples:
>>> input = Tensor(np.ones([16, 3, 10, 32, 32]), mindspore.float16)
>>> weight = Tensor(np.ones([32, 3, 4, 3, 3]), mindspore.float16)
@@ -7787,9 +8102,6 @@ class Conv3DBackpropInput(PrimitiveWithInfer):
Outputs:
Tensor, the gradients w.r.t the input of convolution 3D. It has the same shape as the input.

Supported Platforms:
``Ascend``

Raises:
TypeError: If `out_channel` or `group` is not an int.
TypeError: If `kernel_size`, `stride`, `pad` or `dilation` is neither an int nor a tuple.
@@ -7800,6 +8112,9 @@ class Conv3DBackpropInput(PrimitiveWithInfer):
ValueError: If `pad_mode` is not equal to 'pad' and `pad` is not equal to (0, 0, 0, 0, 0, 0).
ValueError: If `data_format` is not 'NCDHW'.

Supported Platforms:
``Ascend``

Examples:
>>> dout = Tensor(np.ones([16, 32, 10, 32, 32]), mindspore.float16)
>>> weight = Tensor(np.ones([32, 32, 4, 6, 2]), mindspore.float16)


mindspore/ops/operations/other_ops.py (+27 -3)

@@ -79,12 +79,18 @@ class InplaceAssign(PrimitiveWithInfer):
"""
Inplace assign `Parameter` with a value.
This primitive can only be used in graph kernel.

Inputs:
- **variable** (Parameter) - The `Parameter`.
- **value** (Tensor) - The value to be assigned.
- **depend** (Tensor) - The dependent tensor to keep this op connected in graph.

Outputs:
Tensor, has the same type as original `variable`.

Raises:
TypeError: If `value` or `depend` is not a Tensor.

Examples:
>>> class Net(nn.Cell):
... def __init__(self):
@@ -149,6 +155,10 @@ class BoundingBoxEncode(PrimitiveWithInfer):
Outputs:
Tensor, encoded bounding boxes.

Raises:
TypeError: If `means` or `stds` is not a tuple.
TypeError: If `anchor_box` or `groundtruth_box` is not a Tensor.

Supported Platforms:
``Ascend`` ``GPU``

@@ -205,6 +215,11 @@ class BoundingBoxDecode(PrimitiveWithInfer):
Outputs:
Tensor, decoded boxes.

Raises:
TypeError: If `means`, `stds` or `max_shape` is not a tuple.
TypeError: If `wh_ratio_clip` is not a float.
TypeError: If `anchor_box` or `deltas` is not a Tensor.

Supported Platforms:
``Ascend`` ``GPU``

@@ -263,6 +278,10 @@ class CheckValid(PrimitiveWithInfer):
Outputs:
Tensor, with shape of (N,) and dtype of bool.

Raises:
TypeError: If `bboxes` or `img_metas` is not a Tensor.
TypeError: If dtype of `bboxes` or `img_metas` is neither float16 nor float32.

Supported Platforms:
``Ascend`` ``GPU``

@@ -480,9 +499,6 @@ class CheckBprop(PrimitiveWithInfer):
"""
Checks whether the data type and the shape of corresponding elements from tuples x and y are the same.

Raises:
TypeError: If tuples x and y are not the same.

Inputs:
- **input_x** (tuple[Tensor]) - The `input_x` contains the outputs of bprop to be checked.
- **input_y** (tuple[Tensor]) - The `input_y` contains the inputs of bprop to check against.
@@ -491,6 +507,9 @@ class CheckBprop(PrimitiveWithInfer):
(tuple[Tensor]), the `input_x`,
if data type and shape of corresponding elements from `input_x` and `input_y` are the same.

Raises:
TypeError: If `input_x` or `input_y` is not a Tensor.

Examples:
>>> input_x = (Tensor(np.array([[2, 2], [2, 2]]), mindspore.float32),)
>>> input_y = (Tensor(np.array([[2, 2], [2, 2]]), mindspore.float32),)
@@ -561,6 +580,11 @@ class ConfusionMatrix(PrimitiveWithInfer):
Outputs:
Tensor, the confusion matrix, with shape (`num_classes`, `num_classes`).

Raises:
TypeError: If `num_classes` is not an int.
TypeError: If `dtype` is not a str.
TypeError: If `labels`, `predictions` or `weight` is not a Tensor.

Examples:
>>> confusion_matrix = ops.ConfusionMatrix(4)
>>> labels = Tensor([0, 1, 1, 3], mindspore.int32)


mindspore/ops/operations/random_ops.py (+58 -0)

@@ -34,6 +34,11 @@ class StandardNormal(PrimitiveWithInfer):
Outputs:
Tensor. The shape is the same as the input `shape`. The dtype is float32.

Raises:
TypeError: If neither `seed` nor `seed2` is an int.
TypeError: If `shape` is not a tuple.
ValueError: If `shape` is not a constant value.

Supported Platforms:
``Ascend`` ``GPU`` ``CPU``
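A hedged sketch of StandardNormal under the documented constraints (`shape` a constant tuple, `seed`/`seed2` ints); not part of the diff:

    import mindspore
    from mindspore import ops

    stdnormal = ops.StandardNormal(seed=2)
    output = stdnormal((3, 4))            # shape must be a constant tuple
    print(output.shape, output.dtype)     # (3, 4) Float32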

@@ -86,6 +91,11 @@ class StandardLaplace(PrimitiveWithInfer):
Outputs:
Tensor. The shape that the input 'shape' denotes. The dtype is float32.

Raises:
TypeError: If neither `seed` nor `seed2` is an int.
TypeError: If `shape` is not a tuple.
ValueError: If `shape` is not a constant value.

Supported Platforms:
``Ascend``

@@ -142,6 +152,11 @@ class Gamma(PrimitiveWithInfer):
Tensor. The shape must be the broadcasted shape of Input "shape" and shapes of alpha and beta.
The dtype is float32.

Raises:
TypeError: If neither `seed` nor `seed2` is an int.
TypeError: If neither `alpha` nor `beta` is a Tensor.
ValueError: If `shape` is not a constant value.

Supported Platforms:
``Ascend``

@@ -202,6 +217,11 @@ class Poisson(PrimitiveWithInfer):
Tensor. Its shape must be the broadcasted shape of `shape` and the shape of `mean`.
The dtype is int32.

Raises:
TypeError: If neither `seed` nor `seed2` is an int.
TypeError: If `shape` is not a tuple.
TypeError: If `mean` is not a Tensor whose dtype is float32.

Supported Platforms:
``Ascend``

@@ -261,6 +281,12 @@ class UniformInt(PrimitiveWithInfer):
- **maxval** (Tensor) - The distribution parameter, b.
It defines the maximum possibly generated value, with int32 data type. Only one number is supported.

Raises:
TypeError: If neither `seed` nor `seed2` is an int.
TypeError: If `shape` is not a tuple.
TypeError: If neither `minval` nor `maxval` is a Tensor.
ValueError: If `shape` is not a constant value.

Outputs:
Tensor. The shape is the same as the input 'shape', and the data type is int32.

@@ -320,6 +346,11 @@ class UniformReal(PrimitiveWithInfer):
Outputs:
Tensor. The shape that the input 'shape' denotes. The dtype is float32.

Raises:
TypeError: If neither `seed` nor `seed2` is an int.
TypeError: If `shape` is not a tuple.
ValueError: If `shape` is not a constant value.

Supported Platforms:
``Ascend`` ``GPU``

@@ -378,6 +409,11 @@ class RandomChoiceWithMask(PrimitiveWithInfer):
- **index** (Tensor) - The output shape is 2-D.
- **mask** (Tensor) - The output shape is 1-D.

Raises:
TypeError: If `count` is not an int.
TypeError: If neither `seed` nor `seed2` is an int.
TypeError: If `input_x` is not a Tensor.

Supported Platforms:
``Ascend`` ``GPU``

@@ -428,6 +464,11 @@ class RandomCategorical(PrimitiveWithInfer):
Outputs:
- **output** (Tensor) - The output Tensor with shape [batch_size, num_samples].

Raises:
TypeError: If `dtype` is not one of the following: mindspore.int16, mindspore.int32, mindspore.int64.
TypeError: If `logits` is not a Tensor.
TypeError: If neither `num_sample` nor `seed` is an int.

Supported Platforms:
``Ascend`` ``GPU``

@@ -499,6 +540,11 @@ class Multinomial(PrimitiveWithInfer):
Outputs:
Tensor with the same rows as input, each row has num_samples sampled indices.

Raises:
TypeError: If neither `seed` nor `seed2` is an int.
TypeError: If `input` is not a Tensor whose dtype is float32.
TypeError: If dtype of `num_samples` is not int32.

Supported Platforms:
``GPU``

@@ -566,6 +612,12 @@ class UniformCandidateSampler(PrimitiveWithInfer):
- **sampled_expected_count** (Tensor) - The expected counts under the sampling distribution of
each of sampled_candidates. Shape: (num_sampled, ).

Raises:
TypeError: If neither `num_true` nor `num_sampled` is an int.
TypeError: If neither `unique` nor `remove_accidental_hits` is a bool.
TypeError: If neither `range_max` nor `seed` is an int.
TypeError: If `true_classes` is not a Tensor.

Supported Platforms:
``GPU``

@@ -629,6 +681,12 @@ class LogUniformCandidateSampler(PrimitiveWithInfer):
- **true_expected_count** (Tensor) - A Tensor with the same shape as `true_classes` and type float32.
- **sampled_expected_count** (Tensor) - A Tensor with the same shape as `sampled_candidates` and type float32.

Raises:
TypeError: If neither `num_true` nor `num_sampled` is an int.
TypeError: If `unique` is not a bool.
TypeError: If neither `range_max` nor `seed` is an int.
TypeError: If `true_classes` is not a Tensor.

Supported Platforms:
``Ascend``


