He Wei
7d9a783993
[auto-monad] Support side-effects by auto-monad
The basic idea is to exploit data dependencies to control the execution order
of side-effecting operations while keeping the ANF semantics unchanged.
The ControlDepend primitive is removed and two primitives are added:
1. UpdateState:
```
a = Assign(para, value)
```
becomes:
```
a = Assign(para, value, u)
u = UpdateState(u, a)
```
2. Load:
```
x = Add(para, value)
```
becomes:
```
p = Load(para, u)
x = Add(p, value)
u = UpdateState(u, p)
```
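The two rewrites above can be illustrated with a minimal Python sketch. This is hypothetical and not MindSpore's actual implementation: `UMonad`, the `log` list, and the function bodies are stand-ins that only demonstrate how threading a state token `u` through side-effecting ops creates data dependencies that fix their execution order.

```python
class UMonad:
    """Dummy state token; each UpdateState returns a fresh token that
    depends on the previous one, so chained ops stay ordered."""
    def __init__(self, version=0):
        self.version = version

log = []  # records the order in which side effects actually run

def Assign(para, value, u):
    # Side effect: write value into para (modeled as a one-element list).
    para[0] = value
    log.append(("Assign", value))
    return para

def Load(para, u):
    # Read para under the current state token.
    log.append(("Load", para[0]))
    return para[0]

def UpdateState(u, ref):
    # Produce a new token depending on both the old token and ref.
    return UMonad(u.version + 1)

# Rewritten form of:  a = Assign(para, 5);  x = Add(para, 1)
u = UMonad()
para = [0]
a = Assign(para, 5, u)
u = UpdateState(u, a)   # Assign must complete before the next Load
p = Load(para, u)
x = p + 1
u = UpdateState(u, p)
```

Because `Load` consumes the token produced after `Assign`, any scheduler that honors data dependencies must run the write before the read, which is the whole point of the monad threading.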
5 years ago
mindspore-ci-bot
9fa0499fa0
Change GatherV2 to Gather r1.1 to master
5 years ago
fangzehua
f97e19f23f
add cache pass
5 years ago
yanghaoran
58bae307d6
fix bugs
5 years ago
Ziyan
660f578988
fix standalone prediction
5 years ago
mindspore-ci-bot
00611b1c41
!10175 Add tensor.ndim and rename tensor.size() to tensor.size
From: @yanglf1121
Reviewed-by: @kingxian,@zhunaipan
Signed-off-by: @kingxian
5 years ago
Ziyan
c5c905fdf5
add restriction for opt shard
5 years ago
yanglf1121
918679daa3
add tensor.ndim and rename tensor.size() to tensor.size
5 years ago
lizhenyu
e3f7ae61db
add ps cache manager
5 years ago
caifubi
3033ae295c
Create Tensor for assignadd input in Optimizer
5 years ago
caifubi
702ab2bac2
Change const tensor dtype to fp16
5 years ago
mindspore-ci-bot
3f75f13556
!8648 PyNative Performance Optimization
From: @jojobugfree
Reviewed-by:
Signed-off-by:
5 years ago
mindspore-ci-bot
d79a454564
!8890 Add labels to python files
From: @JunYuLiu
Reviewed-by: @gemini524
Signed-off-by:
5 years ago
mindspore-ci-bot
71a3086fff
!8261 gpu support heterogeneous network
From: @wilfchen
Reviewed-by: @limingqi107,@cristoval
Signed-off-by: @cristoval
5 years ago
JunYuLiu
1eaa4a30dd
Add labels to python files
5 years ago
caifubi
c7d6997819
pynative host device parallel
5 years ago
yao_yf
31819bb4a7
support forward unique
5 years ago
wilfChen
33b18ad83d
gpu support heterogeneous network
5 years ago
Jiaqi
b8a6c88323
api
5 years ago
Jiaqi
a30ccea62c
sparse optimizer
5 years ago
mindspore-ci-bot
c967bf6846
!7339 fix for se-resnet50 accuracy
Merge pull request !7339 from qujianwei/master
5 years ago
Ziyan
adc92496e8
disable comm fusion in parallel optimizer temp
5 years ago
Ziyan
ddc0113058
enable parallel optimizer in auto parallel
5 years ago
panfengfeng
2d7b93e958
fix nn & operations api comments
5 years ago
yao_yf
07117e4dd4
mv ParallelMode to context
5 years ago
simson
7cc48a9af8
Third round of enhancement of API comment & README_CN
5 years ago
panyifeng
34e50e5d6e
fix cell bprop
5 years ago
simson
a346a1b2b4
Second round of the enhancement of API comments
5 years ago
mindspore-ci-bot
8d6f5efaee
!3921 [bug][api]fit interface change on parameter
Merge pull request !3921 from vlne-v1/I1Q3KN-interface-changes-cause-network-training-failurs
5 years ago
Wei Luning
e5fc529159
z
5 years ago
simson
3617121ccf
revert modification of opt
5 years ago
Wei Luning
a05c38bb63
make python Parameter inherit from Tensor
5 years ago
Ziyan
98e2ee90de
fix optimizer parallel problems
5 years ago
wangnan39@huawei.com
4c40fd681e
fix bug of group lr
5 years ago
wangnan39@huawei.com
082433183d
uniform learning_rate behavior of optimizers
5 years ago
mindspore-ci-bot
57252dee24
!3191 Fix doc error of optim API
Merge pull request !3191 from Simson/doc-fix
5 years ago
simson
a1f789640b
fix doc error of optim API
5 years ago
mindspore-ci-bot
6f8863b65d
!3198 synchronize latest Ascend software suite 18 Jul 2020, and merging branches
Merge pull request !3198 from yanghaoran/code_sync_0718
5 years ago
yanghaoran
859acc6d2a
synchronize latest Ascend software suite 18 Jul 2020, and merging branches
5 years ago
Wei Luning
88e864a4a3
remove loop can unroll flag, clean some python usage
5 years ago
wangnan39@huawei.com
86889c59cb
optimizer adapt IndexedSlices
5 years ago
mindspore-ci-bot
4dc96564ac
!3036 opt adam
Merge pull request !3036 from jinyaohui/master
5 years ago
jinyaohui
1d66467d47
opt add ps logic
5 years ago
mindspore-ci-bot
74bbfa3cf6
!3095 modify the limit of loss scale
Merge pull request !3095 from Simson/push-to-opensource
5 years ago
simson
177e18f3f4
modify the limit of loss scale
5 years ago
Ziyan
39f08eb7dd
enable optimizer parallel
5 years ago
changzherui
17da929b82
sync code 0706
5 years ago
guohongzilong
de43c1eefa
fix group params order
5 years ago
guohongzilong
652093642e
change order param same as group params
5 years ago
wangnan39@huawei.com
172728a6a6
support weight decay for sparse optimizer
5 years ago