mindspore-ci-bot
c967bf6846
!7339 fix for se-resnet50 accuracy
Merge pull request !7339 from qujianwei/master
5 years ago
panfengfeng
2d7b93e958
fix nn & operations api comments
5 years ago
mindspore-ci-bot
7152fe04be
!5783 GraphKernel supports GPU
Merge pull request !5783 from DeshiChen/graph_kernel_1.0
5 years ago
dayschan
37a48f6aac
GraphKernel supports GPU
1. Update akg submodule
2. Refactor akg_kernel_build, akg_ascend_kernel_build, akg_gpu_kernel_build
3. Add akg_kernel_json_decoder to support converting kernel_json to AnfNode.
4. Add GraphKernel Cost Model. (mindspore/_extends/graph_kernel)
5. Add some GraphKernel passes to GpuSession, move these passes to backend/optimizer/graph_kernel.
6. Add global id for ir files.
7. Fix bug in ConstInputToAttr.
5 years ago
fary86
759748dc06
Fix bugs of adam and lamb optimizer
5 years ago
simson
7cc48a9af8
Third round of enhancement of API comment & README_CN
5 years ago
simson
3617121ccf
revert modification of opt
5 years ago
Ziyan
98e2ee90de
fix optimizer parallel problems
5 years ago
wangnan39@huawei.com
2b182633e9
delete annotation of decay filter in optimizers
5 years ago
wangnan39@huawei.com
082433183d
uniform learning_rate behavior of optimizers
5 years ago
simson
a1f789640b
fix doc error of optim API
5 years ago
Ziyan
0925e35252
enable optimizer parallel with broadcast
5 years ago
gong chen
a6dfa281ea
Init GraphKernel.
- It provides a unified style to express graph and kernel for users.
- It provides a unified IR to represent graph and kernel for developers.
- It breaks the boundary between graph and kernel.
- It provides more opportunities to do compile optimization.
5 years ago
guohongzilong
1702bdfc21
change MultitypeFuncGraph to internal interface
5 years ago
liuxiao
52790b74e6
Add some description to API about optimizer.
5 years ago
wangnan39@huawei.com
c9b7d95c2c
fix lr check bug in AdamWeightDecayDynamicLR
5 years ago
BowenK
96379faa3a
Remove ZerosLikeTensor and sub with ZerosLike
5 years ago
mindspore-ci-bot
2a6a3e012c
!1555 fix bug in lamb warmup step check
Merge pull request !1555 from wangnan39/fix_bug_in_check_lamb_warmup_step
5 years ago
wangnan39@huawei.com
810ccf80d8
fix_bug_in_check_lamb_warmup_step
5 years ago
chenhaozhe
f65913d62a
fix performance of bert
5 years ago
mindspore-ci-bot
deae380969
!637 Learning rate and weight decay making group params
Merge pull request !637 from ghzl/learning-rate-make-group-mode
6 years ago
guohongzilong
824bc30a94
learning rate and weight decay support group mode
6 years ago
simson
bd2fd31ab3
revert limitation of end_learning_rate
6 years ago
Wei Luning
157710ca0f
bugfix
* fix bug in output tuple of tuple.
* check kRWWrite input no-variable
* input x of ScatterNdUpdate should be a parameter node
6 years ago
jinyaohui
73642ef3d3
clean pylint
6 years ago
wangnan39@huawei.com
ddc558fd72
fix weight decay error in optimizer AdamWeightDecay
6 years ago
fary86
8cbbbd950e
Add cell name to error message
6 years ago
zhongligeng
144a636b51
resolve some issues in nn comments
6 years ago
zhunaipan
930a1fb0a8
initial version
Signed-off-by: leonwanghui <leon.wanghui@huawei.com>
6 years ago