
Fix a doc issue in math.py

tags/v1.2.0-rc1
peixu_ren · 5 years ago · parent commit 42273bc13f
1 changed file with 38 additions and 18 deletions:
  mindspore/nn/layer/math.py (+38, -18)

mindspore/nn/layer/math.py

@@ -201,15 +201,18 @@ class LGamma(Cell):
when x is an integer less than or equal to 0, return +inf
when x = +/- inf, return +inf

Supported Platforms:
``Ascend`` ``GPU``

Inputs:
- **x** (Tensor) - The input tensor. Only float16, float32 are supported.

Outputs:
Tensor, has the same shape and dtype as `x`.

Raises:
TypeError: If dtype of `x` is neither float16 nor float32.

Supported Platforms:
``Ascend`` ``GPU``

Examples:
>>> input_x = Tensor(np.array([2, 3, 4]).astype(np.float32))
>>> op = nn.LGamma()
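The example above is truncated before its output. As a sanity check on the math (not a run of the MindSpore kernel), SciPy's `gammaln` computes the same log-gamma values:

```python
import numpy as np
from scipy.special import gammaln  # log|Gamma(x)|, the function LGamma implements

x = np.array([2.0, 3.0, 4.0], dtype=np.float32)
out = gammaln(x)
# lgamma(2) = ln(1!) = 0, lgamma(3) = ln(2!) = ln 2, lgamma(4) = ln(3!) = ln 6
print(out)  # ≈ [0., 0.6931, 1.7918]
```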
@@ -317,15 +320,18 @@ class DiGamma(Cell):

digamma(x) = digamma(1 - x) - pi * cot(pi * x)

Supported Platforms:
``Ascend`` ``GPU``

Inputs:
- **x** (Tensor[Number]) - The input tensor. Only float16, float32 are supported.

Outputs:
Tensor, has the same shape and dtype as `x`.

Raises:
TypeError: If dtype of `x` is neither float16 nor float32.

Supported Platforms:
``Ascend`` ``GPU``

Examples:
>>> input_x = Tensor(np.array([2, 3, 4]).astype(np.float32))
>>> op = nn.DiGamma()
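The reflection formula quoted in this hunk, digamma(x) = digamma(1 - x) - pi * cot(pi * x), can be checked numerically with SciPy's digamma (`psi`); this is a sketch of the identity, not the MindSpore implementation:

```python
import numpy as np
from scipy.special import psi  # digamma function

x = 0.3
lhs = psi(x)
rhs = psi(1.0 - x) - np.pi / np.tan(np.pi * x)  # reflection formula
# the two sides agree to floating-point precision

# the doc example's inputs: psi(2) = 1 - Euler-Mascheroni gamma
out = psi(np.array([2.0, 3.0, 4.0]))
```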
@@ -579,9 +585,6 @@ class IGamma(Cell):

Above, :math:`Q(a, x)` is the upper regularized incomplete Gamma function.

Supported Platforms:
``Ascend`` ``GPU``

Inputs:
- **a** (Tensor) - The input tensor. With float32 data type. `a` should have
the same dtype with `x`.
@@ -591,6 +594,13 @@ class IGamma(Cell):
Outputs:
Tensor, has the same dtype as `a` and `x`.

Raises:
TypeError: If dtype of `a` or `x` is neither float16 nor float32,
or if `a` and `x` have different dtypes.

Supported Platforms:
``Ascend`` ``GPU``

Examples:
>>> input_a = Tensor(np.array([2.0, 4.0, 6.0, 8.0]).astype(np.float32))
>>> input_x = Tensor(np.array([2.0, 3.0, 4.0, 5.0]).astype(np.float32))
@@ -660,9 +670,6 @@ class LBeta(Cell):
decomposing lgamma into the Stirling approximation and an explicit log_gamma_correction, and cancelling
the large terms from the Stirling approximation analytically.

Supported Platforms:
``Ascend`` ``GPU``

Inputs:
- **x** (Tensor) - The input tensor. With float16 or float32 data type. `x` should have
the same dtype with `y`.
@@ -672,6 +679,13 @@ class LBeta(Cell):
Outputs:
Tensor, has the same dtype as `x` and `y`.

Raises:
TypeError: If dtype of `x` or `y` is neither float16 nor float32,
or if `x` and `y` have different dtypes.

Supported Platforms:
``Ascend`` ``GPU``

Examples:
>>> input_x = Tensor(np.array([2.0, 4.0, 6.0, 8.0]).astype(np.float32))
>>> input_y = Tensor(np.array([2.0, 3.0, 14.0, 15.0]).astype(np.float32))
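The identity LBeta computes is lbeta(x, y) = lgamma(x) + lgamma(y) - lgamma(x + y); the hunk above explains that the stable version cancels the large Stirling terms analytically rather than subtracting them numerically. A SciPy sketch comparing the naive form against `betaln` (SciPy's own stable log-beta, not MindSpore's):

```python
import numpy as np
from scipy.special import betaln, gammaln

x = np.array([2.0, 4.0, 6.0, 8.0])
y = np.array([2.0, 3.0, 14.0, 15.0])
naive = gammaln(x) + gammaln(y) - gammaln(x + y)  # direct, cancellation-prone form
stable = betaln(x, y)                             # numerically stable log-beta
# for moderate inputs the two agree closely
```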
@@ -967,9 +981,6 @@ class MatInverse(Cell):
"""
Calculates the inverse of Positive-Definite Hermitian matrix using Cholesky decomposition.

Supported Platforms:
``GPU``

Inputs:
- **a** (Tensor[Number]) - The input tensor. It must be a positive-definite matrix.
With float16 or float32 data type.
@@ -977,6 +988,12 @@ class MatInverse(Cell):
Outputs:
Tensor, has the same dtype as `a`.

Raises:
TypeError: If dtype of `a` is neither float16 nor float32.

Supported Platforms:
``GPU``

Examples:
>>> input_a = Tensor(np.array([[4, 12, -16], [12, 37, -43], [-16, -43, 98]]).astype(np.float32))
>>> op = nn.MatInverse()
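The Cholesky route the docstring names can be sketched in plain NumPy: factor a = L @ L.T with L lower triangular, then a^-1 = (L^-1).T @ L^-1. This illustrates the method on the doc example's matrix; it is not the MindSpore GPU kernel:

```python
import numpy as np

a = np.array([[4.0, 12.0, -16.0],
              [12.0, 37.0, -43.0],
              [-16.0, -43.0, 98.0]])   # positive-definite, symmetric
L = np.linalg.cholesky(a)              # a = L @ L.T
L_inv = np.linalg.inv(L)
a_inv = L_inv.T @ L_inv                # a^-1 = (L^-1).T @ (L^-1)
```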
@@ -1004,9 +1021,6 @@ class MatDet(Cell):
"""
Calculates the determinant of Positive-Definite Hermitian matrix using Cholesky decomposition.

Supported Platforms:
``GPU``

Inputs:
- **a** (Tensor[Number]) - The input tensor. It must be a positive-definite matrix.
With float16 or float32 data type.
@@ -1014,6 +1028,12 @@ class MatDet(Cell):
Outputs:
Tensor, has the same dtype as `a`.

Raises:
TypeError: If dtype of `a` is neither float16 nor float32.

Supported Platforms:
``GPU``

Examples:
>>> input_a = Tensor(np.array([[4, 12, -16], [12, 37, -43], [-16, -43, 98]]).astype(np.float32))
>>> op = nn.MatDet()
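For the determinant, the Cholesky factorization gives det(a) = det(L)^2 = (prod of diag(L))^2, since L is triangular. A NumPy sketch on the same example matrix (an illustration of the method, not MindSpore's kernel):

```python
import numpy as np

a = np.array([[4.0, 12.0, -16.0],
              [12.0, 37.0, -43.0],
              [-16.0, -43.0, 98.0]])
L = np.linalg.cholesky(a)            # a = L @ L.T, L lower triangular
det = np.prod(np.diag(L)) ** 2       # det(a) = prod(diag(L))^2
# for this matrix diag(L) = [2, 1, 3], so det = 6^2 = 36
```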

