
!13554 fix SeLU so that only the float16 and float32 data types are supported.

From: @wangshuide2020
Reviewed-by: @liangchenghui
Signed-off-by: @liangchenghui
pull/13554/MERGE
mindspore-ci-bot committed 4 years ago
commit a56bccc4a4
1 changed file with 7 additions and 2 deletions

mindspore/ops/operations/nn_ops.py  +7 -2

@@ -428,6 +428,11 @@ class SeLU(PrimitiveWithInfer):
.. math::

    E_i = scale *
    \begin{cases}
    x_i, &\text{if } x_i \geq 0;\\
    \text{alpha} * (\exp(x_i) - 1), &\text{otherwise.}
    \end{cases}

+ where :math:`alpha` and :math:`scale` are pre-defined constants (:math:`alpha=1.67326324`
+ and :math:`scale=1.05070098`).
+
+ See more details in `Self-Normalizing Neural Networks <https://arxiv.org/abs/1706.02515>`_.
+
Inputs:
- **input_x** (Tensor) - The input tensor.

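For reference, the cases formula in the docstring above can be checked with a minimal NumPy sketch (illustrative only, not part of this commit; `selu_reference` is a hypothetical helper name):

import numpy as np

def selu_reference(x):
    # Pre-defined SeLU constants from the docstring.
    alpha = 1.67326324
    scale = 1.05070098
    # scale * x for x >= 0, scale * alpha * (exp(x) - 1) otherwise.
    return scale * np.where(x >= 0, x, alpha * (np.exp(x) - 1))

x = np.array([[-1.0, 4.0, -8.0], [2.0, -5.0, 9.0]], dtype=np.float32)
print(selu_reference(x))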
@@ -438,7 +443,7 @@ class SeLU(PrimitiveWithInfer):
``Ascend``

Raise:
- TypeError: If num_features data type not int8, int32, float16 and float32 Tensor.
+ TypeError: If dtype of `input_x` is neither float16 nor float32.

Examples:
>>> input_x = Tensor(np.array([[-1.0, 4.0, -8.0], [2.0, -5.0, 9.0]]), mindspore.float32)
@@ -458,7 +463,7 @@ class SeLU(PrimitiveWithInfer):
return x_shape

def infer_dtype(self, x_dtype):
- valid_dtypes = [mstype.int8, mstype.int32, mstype.float16, mstype.float32]
+ valid_dtypes = [mstype.float16, mstype.float32]
validator.check_tensor_dtype_valid('x', x_dtype, valid_dtypes, self.name)
return x_dtype
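After this change, `infer_dtype` rejects the int8 and int32 tensors that were previously accepted. A hedged sketch of the new behavior, written in the style of the docstring examples (assumes an Ascend environment as noted above):

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor
>>> from mindspore.ops import operations as P
>>> selu = P.SeLU()
>>> output = selu(Tensor(np.array([[-1.0, 4.0, -8.0], [2.0, -5.0, 9.0]]), mindspore.float32))  # passes dtype validation
>>> output = selu(Tensor(np.array([1, 2, 3]), mindspore.int32))  # now raises TypeError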


