
!12581 add GPU for DenseBnAct, Erfc, Log1p, etc. in Supported Platforms.

From: @wangshuide2020
Reviewed-by: @liangchenghui,@ljl0711
Signed-off-by: @liangchenghui
tags/v1.2.0-rc1
mindspore-ci-bot committed on Gitee 5 years ago, commit c8657a6723
9 changed files with 42 additions and 22 deletions:

1. mindspore/nn/layer/basic.py (+1, -0)
2. mindspore/nn/layer/combined.py (+2, -2)
3. mindspore/nn/layer/math.py (+2, -2)
4. mindspore/nn/layer/pooling.py (+3, -3)
5. mindspore/nn/optim/lazyadam.py (+2, -2)
6. mindspore/ops/composite/base.py (+14, -2)
7. mindspore/ops/operations/math_ops.py (+8, -9)
8. mindspore/ops/operations/nn_ops.py (+8, -0)
9. tests/ut/python/ops/test_math_ops.py (+2, -2)

mindspore/nn/layer/basic.py (+1, -0)

@@ -1007,6 +1007,7 @@ class MatrixSetDiag(Cell):
Assume `x` has :math:`k+1` dimensions :math:`[I, J, K, ..., M, N]` and `diagonal` has :math:`k`
dimensions :math:`[I, J, K, ..., min(M, N)]`. Then the output is a tensor of rank :math:`k+1` with dimensions
:math:`[I, J, K, ..., M, N]` where:

:math:`output[i, j, k, ..., m, n] = diagonal[i, j, k, ..., n]\ for\ m == n`

:math:`output[i, j, k, ..., m, n] = x[i, j, k, ..., m, n]\ for\ m != n`
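The set-diagonal semantics documented above can be sketched in plain NumPy. This is an illustrative, hypothetical helper for the formula, not the MindSpore implementation:

```python
import numpy as np

def matrix_set_diag(x, diagonal):
    """Replace the main diagonal of the last two dims of `x` with `diagonal`.

    Mirrors the docstring formula: output[..., m, n] = diagonal[..., n]
    when m == n, and x[..., m, n] otherwise.
    """
    out = x.copy()
    m, n = x.shape[-2], x.shape[-1]
    for i in range(min(m, n)):
        out[..., i, i] = diagonal[..., i]
    return out
```

Off-diagonal entries pass through unchanged, so the helper only touches `min(M, N)` positions per matrix.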


mindspore/nn/layer/combined.py (+2, -2)

@@ -1,4 +1,4 @@
-# Copyright 2020 Huawei Technologies Co., Ltd
+# Copyright 2020-2021 Huawei Technologies Co., Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -171,7 +171,7 @@ class DenseBnAct(Cell):
Tensor of shape :math:`(N, out\_channels)`.

Supported Platforms:
-``Ascend``
+``Ascend`` ``GPU``

Examples:
>>> net = nn.DenseBnAct(3, 4)


mindspore/nn/layer/math.py (+2, -2)

@@ -1,4 +1,4 @@
-# Copyright 2020 Huawei Technologies Co., Ltd
+# Copyright 2020-2021 Huawei Technologies Co., Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -930,7 +930,7 @@ class Moments(Cell):
- **variance** (Tensor) - The variance of input x, with the same data type as input x.

Supported Platforms:
-``Ascend``
+``Ascend`` ``GPU``

Examples:
>>> net = nn.Moments(axis=3, keep_dims=True)
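The mean/variance pair that `nn.Moments` documents can be sketched with NumPy reductions. This is an illustrative helper, assuming population variance (`ddof=0`), not the framework code:

```python
import numpy as np

def moments(x, axis, keep_dims=True):
    # Sketch of nn.Moments semantics: mean and population variance of `x`
    # along `axis`, optionally keeping the reduced dimension.
    mean = np.mean(x, axis=axis, keepdims=keep_dims)
    variance = np.var(x, axis=axis, keepdims=keep_dims)
    return mean, variance
```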


mindspore/nn/layer/pooling.py (+3, -3)

@@ -1,4 +1,4 @@
-# Copyright 2020 Huawei Technologies Co., Ltd
+# Copyright 2020-2021 Huawei Technologies Co., Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -165,7 +165,7 @@ class MaxPool1d(_PoolNd):
Tensor of shape :math:`(N, C, L_{out}))`.

Supported Platforms:
-``Ascend``
+``Ascend`` ``GPU``

Examples:
>>> max_pool = nn.MaxPool1d(kernel_size=3, stride=1)
@@ -312,7 +312,7 @@ class AvgPool1d(_PoolNd):
Tensor of shape :math:`(N, C_{out}, L_{out})`.

Supported Platforms:
-``Ascend``
+``Ascend`` ``GPU``

Examples:
>>> pool = nn.AvgPool1d(kernel_size=6, stride=1)
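The 1-D pooling that both hunks above document can be sketched over an `(N, C, L)` array. This is a hypothetical, valid-padding-only helper for illustration, not the framework kernel:

```python
import numpy as np

def pool1d(x, kernel_size, stride, mode="max"):
    """Sketch of MaxPool1d/AvgPool1d semantics on an (N, C, L) array.

    Slides a window of `kernel_size` along the last axis with step `stride`
    (no padding) and reduces each window by max or mean.
    """
    n, c, length = x.shape
    l_out = (length - kernel_size) // stride + 1
    reduce = np.max if mode == "max" else np.mean
    out = np.empty((n, c, l_out), dtype=x.dtype)
    for i in range(l_out):
        start = i * stride
        out[..., i] = reduce(x[..., start:start + kernel_size], axis=-1)
    return out
```

The output length matches the `L_out` shape noted in the docstrings above for the no-padding case.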


mindspore/nn/optim/lazyadam.py (+2, -2)

@@ -1,4 +1,4 @@
-# Copyright 2020 Huawei Technologies Co., Ltd
+# Copyright 2020-2021 Huawei Technologies Co., Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -185,7 +185,7 @@ class LazyAdam(Optimizer):
Tensor[bool], the value is True.

Supported Platforms:
-``Ascend``
+``Ascend`` ``GPU``

Examples:
>>> net = Net()


mindspore/ops/composite/base.py (+14, -2)

@@ -1,6 +1,6 @@
# This is the Python adaptation and derivative work of Myia (https://github.com/mila-iqia/myia/).
#
-# Copyright 2020 Huawei Technologies Co., Ltd
+# Copyright 2020-2021 Huawei Technologies Co., Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -198,6 +198,12 @@ class GradOperation(GradOperation_):
Returns:
The higher-order function which takes a function as argument and returns gradient function for it.

+Raises:
+TypeError: If `get_all`, `get_by_list` or `sens_param` is not a bool.
+
+Supported Platforms:
+``Ascend`` ``GPU`` ``CPU``

Examples:
>>> from mindspore.common import ParameterTuple
>>> class Net(nn.Cell):
@@ -377,7 +383,7 @@ class MultitypeFuncGraph(MultitypeFuncGraph_):

MultitypeFuncGraph is a class used to generate overloaded functions, considering different types as inputs.
Initialize an `MultitypeFuncGraph` object with name, and use `register` with input types as the decorator
-for the function to be registed. And the object can be called with different types of inputs,
+for the function to be registered. And the object can be called with different types of inputs,
and work with `HyperMap` and `Map`.

Args:
@@ -388,6 +394,9 @@ class MultitypeFuncGraph(MultitypeFuncGraph_):
Raises:
ValueError: If failed to find a matching function for the given arguments.

+Supported Platforms:
+``Ascend`` ``GPU`` ``CPU``

Examples:
>>> # `add` is a metagraph object which will add two objects according to
>>> # input type using ".register" decorator.
@@ -479,6 +488,9 @@ class HyperMap(HyperMap_):
Sequence or nested sequence, the sequence of output after applying the function.
e.g. `operation(args[0][i], args[1][i])`.

+Supported Platforms:
+``Ascend`` ``GPU`` ``CPU``

Examples:
>>> from mindspore import dtype as mstype
>>> nest_tensor_list = ((Tensor(1, mstype.float32), Tensor(2, mstype.float32)),

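The `HyperMap` behavior documented above, applying a function through a nested sequence while preserving its structure, can be sketched as a plain-Python recursion. This is an illustrative model of the semantics, not the framework's graph-based implementation:

```python
def hyper_map(fn, *seqs):
    # Sketch of HyperMap semantics: if the arguments are (possibly nested)
    # tuples/lists, recurse elementwise and rebuild the same container type;
    # otherwise apply `fn` to the leaves directly.
    if isinstance(seqs[0], (tuple, list)):
        return type(seqs[0])(hyper_map(fn, *items) for items in zip(*seqs))
    return fn(*seqs)
```

For example, mapping addition over two nested tuples yields a nested tuple of the same shape, matching `operation(args[0][i], args[1][i])` in the docstring.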

mindspore/ops/operations/math_ops.py (+8, -9)

@@ -1620,13 +1620,13 @@ class Exp(PrimitiveWithInfer):
out_i = e^{x_i}

Inputs:
-- **input_x** (Tensor) - The input tensor. The data type mast be float16 or float32.
+- **input_x** (Tensor) - The input tensor.

Outputs:
Tensor, has the same shape and dtype as the `input_x`.

Raises:
-TypeError: If dtype of `input_x` is neither float16 nor float32.
+TypeError: If `input_x` is not a Tensor.

Supported Platforms:
``Ascend`` ``GPU`` ``CPU``
@@ -1648,7 +1648,7 @@ class Exp(PrimitiveWithInfer):
return x_shape

def infer_dtype(self, x_type):
-validator.check_tensor_dtype_valid("x", x_type, [mstype.float16, mstype.float32], self.name)
+validator.check_subclass("x", x_type, mstype.tensor, self.name)
return x_type

def infer_value(self, x):
@@ -1759,13 +1759,13 @@ class Log(PrimitiveWithInfer):
Returns the natural logarithm of a tensor element-wise.

Inputs:
-- **input_x** (Tensor) - The input tensor. With float16 or float32 data type. The value must be greater than 0.
+- **input_x** (Tensor) - The input tensor. The value must be greater than 0.

Outputs:
Tensor, has the same shape as the `input_x`.

Raises:
-TypeError: If dtype of `input_x` is neither float16 nor float32.
+TypeError: If `input_x` is not a Tensor.

Supported Platforms:
``Ascend`` ``GPU`` ``CPU``
@@ -1787,7 +1787,6 @@ class Log(PrimitiveWithInfer):

def infer_dtype(self, x):
validator.check_subclass("x", x, mstype.tensor, self.name)
-validator.check_tensor_dtype_valid("x", x, [mstype.float16, mstype.float32], self.name)
return x

def infer_value(self, x):
@@ -1813,7 +1812,7 @@ class Log1p(PrimitiveWithInfer):
TypeError: If dtype of `input_x` is neither float16 nor float32.

Supported Platforms:
-``Ascend``
+``Ascend`` ``GPU``

Examples:
>>> input_x = Tensor(np.array([1.0, 2.0, 4.0]), mindspore.float32)
@@ -1895,7 +1894,7 @@ class Erfc(PrimitiveWithInfer):
TypeError: If dtype of `input_x` is neither float16 nor float32.

Supported Platforms:
-``Ascend``
+``Ascend`` ``GPU``

Examples:
>>> input_x = Tensor(np.array([-1, 0, 1, 2, 3]), mindspore.float32)
@@ -3959,7 +3958,7 @@ class SquareSumAll(PrimitiveWithInfer):
- **output_y2** (Tensor) - The same type as the `input_x1`.

Supported Platforms:
-``Ascend``
+``Ascend`` ``GPU``

Examples:
>>> input_x1 = Tensor(np.array([0, 0, 2, 0]), mindspore.float32)
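The `SquareSumAll` semantics, reducing each input to the sum of its squared elements, can be sketched in NumPy. An illustrative helper for the docstring behavior, not the operator kernel:

```python
import numpy as np

def square_sum_all(x1, x2):
    # Sketch of SquareSumAll: each input is squared elementwise and summed
    # over all elements, producing two scalar outputs.
    return np.sum(x1 ** 2), np.sum(x2 ** 2)
```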


mindspore/ops/operations/nn_ops.py (+8, -0)

@@ -6820,6 +6820,14 @@ class CTCGreedyDecoder(PrimitiveWithCheck):
- **log_probability** (Tensor) - A tensor with shape of (`batch_size`, 1),
containing sequence log-probability, has the same type as `inputs`.

+Raises:
+TypeError: If `merge_repeated` is not a bool.
+ValueError: If length of shape of `inputs` is not equal to 3.
+ValueError: If length of shape of `sequence_length` is not equal to 1.

Supported Platforms:
``Ascend``

Examples:
>>> inputs = Tensor(np.random.random((2, 2, 3)), mindspore.float32)
>>> sequence_length = Tensor(np.array([2, 2]), mindspore.int32)


tests/ut/python/ops/test_math_ops.py (+2, -2)

@@ -111,10 +111,10 @@ def test_pow():

def test_exp():
""" test_exp """
-input_tensor = Tensor(np.array([[2, 2], [3, 3]], np.float32))
+input_tensor = Tensor(np.array([[2, 2], [3, 3]]))
testexp = P.Exp()
result = testexp(input_tensor)
-expect = np.exp(np.array([[2, 2], [3, 3]], np.float32))
+expect = np.exp(np.array([[2, 2], [3, 3]]))
assert np.all(result.asnumpy() == expect)



