Huawei_Technology / mindspore
!11343 fix bugs of op BatchNorm1d, FastGelu in API
From: @lihongkang1
Reviewed-by: @liangchenghui, @wuxuejian
Signed-off-by: @liangchenghui
Tags: v1.2.0-rc1
Committed by mindspore-ci-bot (Gitee), 4 years ago
Parents: 7de625fbca, ae325f2e53
Commit: e82c53b5ae
2 changed files with 2 additions and 2 deletions
mindspore/nn/layer/activation.py: +1 -1
mindspore/nn/layer/normalization.py: +1 -1
mindspore/nn/layer/activation.py (+1, -1)

@@ -392,7 +392,7 @@ class GELU(Cell):
 class FastGelu(Cell):
     r"""
-    fast Gaussian error linear unit activation function.
+    Fast Gaussian error linear unit activation function.

     Applies FastGelu function to each element of the input. The input is a Tensor with any valid shape.
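The docstring fix above only capitalizes "Fast"; the operator itself is unchanged. As a reference point, the MindSpore documentation gives FastGelu as x * exp(0.851 * (x - |x|)) / (1 + exp(-1.702 * |x|)). A minimal NumPy sketch of that formula (not MindSpore's actual kernel, and worth re-checking against your installed version's docstring):

```python
import numpy as np

def fast_gelu(x):
    """Sketch of the FastGelu formula from the MindSpore docstring:
    fast_gelu(x) = x * exp(0.851 * (x - |x|)) / (1 + exp(-1.702 * |x|)).
    Illustrative only; the real op is mindspore.nn.FastGelu.
    """
    x = np.asarray(x, dtype=np.float64)
    return x * np.exp(0.851 * (x - np.abs(x))) / (1.0 + np.exp(-1.702 * np.abs(x)))

# Behaves like GELU: zero at 0, near-identity for large positive inputs,
# small negative values for negative inputs.
out = fast_gelu(np.array([-1.0, 0.0, 10.0]))
```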
mindspore/nn/layer/normalization.py (+1, -1)

@@ -267,7 +267,7 @@ class BatchNorm1d(_BatchNorm):
     Tensor, the normalized, scaled, offset tensor, of shape :math:`(N, C_{out})`.

     Supported Platforms:
-        ``Ascend``
+        ``Ascend`` ``GPU``

     Examples:
         >>> net = nn.BatchNorm1d(num_features=4)
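The hunk above touches only the Supported Platforms line of the BatchNorm1d docstring; the computation is standard 1-D batch normalization over a 2-D (N, C) input. A minimal NumPy sketch of that normalization (the function name, signature, and eps default are illustrative, not MindSpore's API):

```python
import numpy as np

def batch_norm_1d(x, gamma, beta, eps=1e-5):
    """Training-mode sketch of 1-D batch norm on a 2-D (N, C) input:
    normalize each feature column by its batch statistics, then scale and shift.
    Hypothetical helper for illustration; the real op is mindspore.nn.BatchNorm1d.
    """
    mean = x.mean(axis=0)                    # per-feature batch mean, shape (C,)
    var = x.var(axis=0)                      # per-feature batch variance, shape (C,)
    x_hat = (x - mean) / np.sqrt(var + eps)  # normalized activations
    return gamma * x_hat + beta              # output shape (N, C_out) == (N, C)

rng = np.random.default_rng(0)
x = rng.normal(loc=3.0, scale=2.0, size=(8, 4))
y = batch_norm_1d(x, gamma=np.ones(4), beta=np.zeros(4))
```

With unit gamma and zero beta, each output column has approximately zero mean and unit variance, mirroring what `nn.BatchNorm1d(num_features=4)` computes in training mode.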