Zhang Qinghua
a137fa1d0b
Optimize the Executors routines.
- Fix the key generation.
- Distinguish the executors.
4 years ago
yujianfeng
712b9bd013
convert some ops bprops to mindir
4 years ago
ms_yan
36a8886ca2
Revert "[feat] [assistant] [I3T96T] add new Dataset operator CMUARCTICDataset"
This reverts commit b077aa1cab.
Revert "[feat] [assistant] [I3T96X] add new Dataset operator LibriSpeechDataset"
This reverts commit 4e6f7dc97d.
delete pass_registry_test.cc
comment hiai_nlu_model_multi.pb related line
4 years ago
djc
b077aa1cab
[feat] [assistant] [I3T96T] add new Dataset operator CMUARCTICDataset
4 years ago
djc
4e6f7dc97d
[feat] [assistant] [I3T96X] add new Dataset operator LibriSpeechDataset
4 years ago
yao_yf
5277b229be
add cell comm recompute interface
4 years ago
yujianfeng
cf4121126a
Only replace the primal user with tuple_getitem once
4 years ago
i-robot
ed4c9682b5
!20505 limit the scope of lift free variable
Merge pull request !20505 from xychow/limit-lift-scope
4 years ago
zhousiyi
597f29ea7d
limit the lambda lift scope to the func_graph passed
4 years ago
zhousiyi
b590f6d929
don't replace u with fprop_u in primal_graph and bprop_fg
4 years ago
zhousiyi
a5b4e7bbf8
lift fv before grad except weight, then convert
switch(cond, partial(g1, xs), partial(g2, ys))(Zs)
to
switch(cond, g1, g2)(Xs, Ys, Zs)
and
switch_layer(index, make_tuple(partial(g1, xs), partial(g2, ys)))(Zs)
to
switch_layer(index, make_tuple(g1, g2))(Xs, Ys, Zs)
put Zs last when unifying parameters, as they may carry a u-monad or io-monad
use joined args rather than broadened ones, as an extra parameter that is not a parameter of while_header can be added to while_body
forcibly inline fprop_switch
reorder the parameters if one of them is a Monad when incorporating a call
incorporate switch tuple_getitem if item 0 of the tuple is an EnvInstance or item 1 is a bprop function
addn with shape() and shape(1)
remove context_ from FuncGraphEvaluator to make it re-entrant, resolving the evaluator-stuck issue caused by re-entry of the same FuncGraphEvaluator
4 years ago
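The partial-flattening rewrite described in the commit above can be illustrated with a minimal Python sketch. All names here (`switch`, `g1`, `g2`, the `_lifted` variants) are hypothetical stand-ins: MindSpore's real `switch` primitive operates on graph nodes during compilation, not on Python callables.

```python
from functools import partial

# Hypothetical stand-in for the switch primitive: select one branch
# callable based on a condition.
def switch(cond, true_fn, false_fn):
    return true_fn if cond else false_fn

def g1(x, z):  # branch 1; x is its free variable
    return x + z

def g2(y, z):  # branch 2; y is its free variable
    return y * z

x, y, z = 2, 3, 4

# Before lifting: each branch captures its free variable via partial,
# i.e. switch(cond, partial(g1, xs), partial(g2, ys))(Zs).
before = switch(True, partial(g1, x), partial(g2, y))(z)

# After lifting: free variables become explicit parameters and both
# branches share one unified parameter list, so the partials disappear,
# i.e. switch(cond, g1, g2)(Xs, Ys, Zs).
def g1_lifted(x, y, z):
    return x + z

def g2_lifted(x, y, z):
    return y * z

after = switch(True, g1_lifted, g2_lifted)(x, y, z)

assert before == after  # the rewrite preserves the result
```

Per the commit message, the monad-carrying original arguments (`Zs`) go last in the unified parameter list; the sketch keeps `z` last for the same reason.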
l00591931
8ae5d7cc84
Fix compile cache for resnet50
4 years ago
huanghui
53e32077c1
fix code check
4 years ago
zhousiyi
74d500a756
If a fv cnode is not mapped but belongs to primal_graph_, don't propagate this fv to the caller; it should be mapped instead
5 years ago
buxue
2551f5ea7e
support name or attribute ast.Expr
5 years ago
l00591931
680324f225
Change make tuple in core.ops
5 years ago
yujianfeng
70cc548e32
Fix calling recompute api after compiling
5 years ago
mindspore-ci-bot
b189f177bb
Change tuple_getitem to TupleGetItem and some other ops, merge from r1.1 to master
5 years ago
l00591931
9ec100d069
Change TensorAdd to Add, from r1.1 to master
5 years ago
yujianfeng
728fac6c9f
Expand J for innermost graph first when the graph also contains J primitive
5 years ago
yujianfeng
e1aa34a030
Ignore the graphs with defer_inline flag when specializing on graph arguments
5 years ago
mindspore-ci-bot
37e3b6082f
!6002 Reduce inline passes traversal by skipping inline action if no pre-ad python pass exists
Merge pull request !6002 from BowenK/pre_ad
5 years ago
Jiaqi
94d63b90f4
remove sens parameter
5 years ago
BowenK
e482e4e8bf
reduce pass traversals if no pre-ad python pass exists
5 years ago
BowenK
1bdb26f9e8
Warming up python pass by adding inline passes before it
5 years ago
BowenK
d6fb7d2db1
Remove debug drawing and printing to boost compile performance; re-opt after python pass to boost training; fix NewParameter tensor clone
5 years ago
BowenK
641d12d6d9
python pass pattern renaming and interface tweaking
5 years ago
mindspore-ci-bot
8d693306f4
!4126 Add new parameter
Merge pull request !4126 from BowenK/new_parameter
5 years ago
BowenK
e7c6b7e66a
Add NewParameter and Imm patterns
5 years ago
李嘉琪
c65ea1687b
modify error type
5 years ago
Wei Luning
d4d6457ea7
init parameter data by default. Only keep no-data as MetaTensor in auto parallel mode
5 years ago
zhousiyi
7d31deb6fa
remove loss_scale range check to make FP32Imm(inf) comparison equal
5 years ago
BowenK
6d4c07c886
Update python pattern expression
5 years ago
wangnan39@huawei.com
082433183d
uniform learning_rate behavior of optimizers
5 years ago
changzherui
f4cb445ea8
sync code for 0715
5 years ago
BowenK
f267a105b8
Add Python Pass UT
5 years ago
changzherui
17da929b82
sync code 0706
5 years ago
guohongzilong
652093642e
change order param same as group params
5 years ago
jinyaohui
dd5fba1db9
add notice
5 years ago
yanghaoran
21c381b366
sync from mindspore to incubator
5 years ago
fary86
15b3fba0ef
Fix eliminate get ref parameter
5 years ago
mindspore-ci-bot
10fd781b15
!1831 Add order parameter function in group params
Merge pull request !1831 from ghzl/add-oder-parameters-in-group-functions
5 years ago
guohongzilong
85a06b00c6
add order function in group params
5 years ago
jinyaohui
5e43edc474
clean pylint
5 years ago
jonyguo
228061818c
Merge branch 'incubator-master' into sync_05177ff9_6b1715a7
5 years ago
jinyaohui
86d197dfeb
clean pylint
5 years ago
jonyguo
78cc0d5d8d
Merge branch 'incubator-master' into sync_d9c74e0a
5 years ago
buxue
e490618db8
support tensor get value by tensor index
support tensor set value by tensor index
5 years ago
mindspore-ci-bot
a2a3b1c6c5
!1089 Add Optimizer method get_lr_parameter
Merge pull request !1089 from ghzl/add-get-lr-base-on-parameter
5 years ago
guohongzilong
e70b2f5430
add optimizer.get_lr_parameter() method
5 years ago