He Wei
7d9a783993
[auto-monad] Support side-effects by auto-monad
The basic idea is to exploit data dependencies to control the execution order
of side-effecting operations while keeping the semantics of ANF unchanged.
The ControlDepend primitive is removed and two primitives are added:
1. UpdateState:
```
a = Assign(para, value)
```
becomes:
```
a = Assign(para, value, u)
u = UpdateState(u, a)
```
2. Load:
```
x = Add(para, value)
```
becomes:
```
p = Load(para, u)
x = Add(p, value)
u = UpdateState(u, p)
```
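The ordering guarantee described above can be illustrated with a toy sketch. This is not MindSpore internals — the graph, node names, and `topo_order` helper are all hypothetical — it only shows how threading a monad token `u` through side-effecting ops turns "Assign must run before Load" into an ordinary data dependency that any topological ordering must respect.

```python
# Toy sketch (hypothetical, not MindSpore code): each node maps to the
# list of nodes it depends on. Side-effecting ops consume and produce
# the monad token u, so ordering is forced by dataflow alone.
def build_graph():
    return {
        "u0": [],
        "para": [],
        "value": [],
        "assign": ["para", "value", "u0"],  # a = Assign(para, value, u0)
        "u1": ["u0", "assign"],             # u1 = UpdateState(u0, a)
        "load": ["para", "u1"],             # p = Load(para, u1)
        "u2": ["u1", "load"],               # u2 = UpdateState(u1, p)
        "add": ["load", "value"],           # x = Add(p, value)
    }

def topo_order(graph):
    # Plain depth-first topological sort: visit dependencies first.
    order, seen = [], set()
    def visit(n):
        if n in seen:
            return
        seen.add(n)
        for dep in graph[n]:
            visit(dep)
        order.append(n)
    for n in graph:
        visit(n)
    return order

order = topo_order(build_graph())
# The Load depends on u1, which depends on the Assign, so every valid
# schedule runs the Assign before the Load -- no ControlDepend needed.
assert order.index("assign") < order.index("u1") < order.index("load")
```

Without the token, `Assign(para, value)` and `Load(para)` would share no edge, and a scheduler would be free to reorder them.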
5 years ago
mindspore-ci-bot
ad5b033cc5
Change L2Norm, r1.1 to master
5 years ago
mindspore-ci-bot
9fa0499fa0
Change GatherV2 to Gather r1.1 to master
5 years ago
lizhenyu
f17534af08
ps cache support sparse
5 years ago
alouhahaha
e7a56c9fcb
fix bug for AllReduce
5 years ago
alouhahaha
580138f1fe
bug fix for scalar input HCCL ops
5 years ago
liubuyu
39cc9e70cd
dynamic op re primitive when infer
5 years ago
yujianfeng
7b412d7cb2
add recompute nodes
5 years ago
LianLiguang
414f38df8d
do not merge TensorMove nodes into one in the CSE pass
5 years ago
mindspore-ci-bot
186f5394ba
!9155 add_conv3d_object
From: @jiangzg001
5 years ago
jiangzhenguang
c45eee9b94
add conv3d object
5 years ago
mindspore-ci-bot
6acf699302
!9422 Add dynamic supports to op allreduce and reducesum on gpu
From: @yuan_shen_zhou
Reviewed-by: @liangchenghui,@linqingke
Signed-off-by: @liangchenghui
5 years ago
zhouyuanshen
458f0e7c58
dynamic shape adapting for allreduce and reducesum
5 years ago
alouhahaha
7265a49400
add allgather fusion
5 years ago
huanghui
e17dd84c0b
add trace manager around backend opt
5 years ago
TFbunny
5e19a642f9
fix and add testcase for dynamic shape scatteradd/update transpose
5 years ago
jjfeing
a607890256
fix_return_scalar
5 years ago
fangzehua
69ce58425d
fix reshape dynamic and emb
5 years ago
mindspore-ci-bot
fb0e866ad1
!8269 forward unique dynamic shape
From: @yao_yf
5 years ago
yao_yf
31819bb4a7
support forward unique
5 years ago
lingyunli63
a51465c78b
add graphkerneloptimize pass
align fuse_ops_fusion
align composite_ops_fusion
unify ops table
Init new_code's kernel_info with orig_node's kernel_info in function NewCNodeWithInfo
enable run bert
add pass tensor_promotion
add macro for bias_add and bias_add_grad in expander pass
exclude unused attrs in primitive compare for GraphKernelCSE
exclude fusion_type in kernelinfo cmp for cse in graphkernel
check processor
remove graph kernel pass before select kernel
recover run_standalone_pretrain_ascend.sh
remove is_before_kernel_select
move add_atomic_clean from pass directory to graph_kernel directory
update fuse op list in Ascend back-end
5 years ago
wilfChen
e3a7b7ab92
gpu support dynamic shape
5 years ago
mindspore-ci-bot
baf6a059cf
!8344 add dynamic shape of row-split operations
From: @hwjiaorui
5 years ago
hwjiaorui
94cfd2e8ff
row split dynamic shape infer function
int to long
int to long
modify __init__.py
register row-split dynamic shape operations
style
style
delete unused
long to int64
add check
format
5 years ago
lingyunli63
b3d76c6e3e
exclude unused attrs and fusion_type in cse cmp
5 years ago
Yi Huaijie
d7faa77b5e
support int64 shape
5 years ago
kswang
a98f871fe4
optimize InternalOutput for const-input-to-tensor pass
5 years ago
mindspore-ci-bot
b64de9c1dc
!8000 Add supports to op Gather on gpu
Merge pull request !8000 from zhouyuanshen/gather
5 years ago
zhouyuanshen
f0f67b8aa8
add gather op on gpu
5 years ago
danishnxt
34cc178bd0
New UnsortedSegmentMax for GPU [API][CUDA_KERNEL]
PyLintFix
header fix
5 years ago
lingyunli63
a500a57c72
add GraphkernelCSE
5 years ago
yujianfeng
ae6942ff9f
Fix rebuilding nodes when eliminating redundant op
5 years ago
caifubi
d3b978147f
Ascend Dynamic Shape
5 years ago
mindspore-ci-bot
ff42cd87b2
!6247 Fix cpu ScatterNdUpdate doesn't update output
Merge pull request !6247 from huanghui/clear-warning
5 years ago
huanghui
d6944a70ca
fix cpu kernel:ScatterNdUpdate doesn't set output
5 years ago
buxue
08059f5c61
add check for stridedslice when choose aicpu or aicore
5 years ago
dayschan
37a48f6aac
GraphKernel supports GPU
1. Update akg submodule
2. Refactor akg_kernel_build, akg_ascend_kernel_build, akg_gpu_kernel_build
3. Add akg_kernel_json_decoder to support converting kernel_json to AnfNode.
4. Add GraphKernel Cost Model. (mindspore/_extends/graph_kernel)
5. Add some GraphKernel passes to GpuSession, move these passes to backend/optimizer/graph_kernel.
6. Add global id for ir files.
7. Fix bug in ConstInputToAttr.
5 years ago
huanghui
b8e737f66a
fix run error when there is a Depend or ControlDepend on BatchNorm
5 years ago
fary86
fcbb3e0edc
Refactor ms_context implementation
5 years ago
mindspore-ci-bot
be138e4618
!5430 Clean codex and reviewbot warnings
Merge pull request !5430 from huanghui/clear-warning
5 years ago
huanghui
998ff0399b
clear codex and reviewbot warning
5 years ago
chenlei_autodiff
660cefb6b1
clean code for graph kernel module.
5 years ago
mindspore-ci-bot
4e7e252747
!5024 fix bug of circle after opt
Merge pull request !5024 from lianliguang/fix-bug-of-circle-in-graph
5 years ago
WilliamLian
e95b42496c
fix circle bug of opt depend && merge cast
5 years ago
gukecai
6c22c8a09d
parallel ctrl
5 years ago
mindspore-ci-bot
73c4022ef4
!3775 remove the dtype convert when update output
Merge pull request !3775 from lianliguang/test-xiu-bug
5 years ago
WilliamLian
601b0b6e4d
remove dtype conversion when updating outputs &&
set parameter device dtype using its infer dtype && set transdata's abstract
5 years ago
wenchunjiang
bde9c0c6a9
1. fix bug of backend common pass convert_tuple_output_to_maketuple
2. attach original inputs to graph when replacing call and switch with
labelgoto and labelswitch
5 years ago
zhousiyi
d0e58dd765
remove ccsrc/common.h
replace frontend/operator/ops.h in backend with base/core_ops.h as
backend should not use any frontend-only primitive
5 years ago
zhoufeng
663278112f
optimize code compile performance
Signed-off-by: zhoufeng <zhoufeng54@huawei.com>
5 years ago