mindspore-ci-bot
8945884137
!8990 close BatchMatmul and ReduceSum in graph kernel
From: @looop5
Reviewed-by: @gaoxiong1,@ckey_dou
Signed-off-by: @ckey_dou
5 years ago
mindspore-ci-bot
22d683a805
!8920 Adapt ops LinSpace for Ascend.
From: @liu_xiao_93
Reviewed-by: @liangchenghui,@linqingke
Signed-off-by: @liangchenghui
5 years ago
mindspore-ci-bot
ddff3c4277
!8969 [bug_fix] GPU distributed training core dumps when memory is not enough
From: @zyli2020
Reviewed-by: @limingqi107,@cristoval
Signed-off-by: @cristoval
5 years ago
mindspore-ci-bot
42cbdfcafc
!8903 add trace info when a mindspore error occurs
From: @jjfeing
Reviewed-by: @chujinjin,@zh_qh,@kisnwang
Signed-off-by: @zh_qh
5 years ago
looop5
1b36f454b8
close BatchMatmul and ReduceSum in graph kernel
5 years ago
liuxiao93
584e241e29
Adapt ops LinSpace for Ascend.
5 years ago
mindspore-ci-bot
08dc1481c7
!8918 checkcircle should consider the edges coming from controldepend nodes
From: @lingyunli63
Reviewed-by:
Signed-off-by:
5 years ago
mindspore-ci-bot
661b6073a4
!8921 fix return scalar
From: @jjfeing
Reviewed-by: @kisnwang,@chujinjin
Signed-off-by: @chujinjin
5 years ago
jjfeing
27257b9901
add trace when a mindspore error occurs
5 years ago
lizhenyu
6f6a0dfd7a
[bug_fix] GPU distributed training core dumps when memory is not enough
5 years ago
lingyunli63
e6a5fc0739
consider controldepend edges in checkcircle
5 years ago
jjfeing
a607890256
fix_return_scalar
5 years ago
mindspore-ci-bot
3f75f13556
!8648 PyNative Performance Optimization
From: @jojobugfree
Reviewed-by:
Signed-off-by:
5 years ago
caifubi
c7d6997819
pynative host device parallel
5 years ago
mindspore-ci-bot
420a4dc162
!8857 fix bug of change axis of reduce kernel when format is 6hd or fracz
From: @lianliguang
Reviewed-by: @kisnwang,@jjfeing
Signed-off-by: @jjfeing
5 years ago
lizhenyu
094f0b2a07
bugfix: fused batch norm op's input channel number should be a multiple of 4
5 years ago
LianLiguang
29d585385e
fix bug of change axis reduce
5 years ago
laiyongqiang
978b7e2e18
fix codex and review bot warning
5 years ago
fangzehua
69ce58425d
fix reshape dynamic and emb
5 years ago
mindspore-ci-bot
fb0e866ad1
!8269 forward unique dynamic shape
From: @yao_yf
Reviewed-by:
Signed-off-by:
5 years ago
yao_yf
31819bb4a7
support forward unique
5 years ago
mindspore-ci-bot
286f5b05f7
!8493 [GraphKernel] Fuse composite ops separated by GetItem nodes
From: @dayschan
Reviewed-by: @ckey_dou
Signed-off-by: @ckey_dou
5 years ago
mindspore-ci-bot
9969c83f75
!8689 [GraphKernel] Split shape ops for more fusion opportunity.
From: @tronzhang
Reviewed-by: @gaoxiong1,@ckey_dou
Signed-off-by: @ckey_dou
5 years ago
liuxiao93
2aaf5e2e1b
Adapt DynamicGRUV2Grad for Ascend new backend.
5 years ago
dayschan
8e6d92eac9
Fuse composite ops separated by GetItem nodes
5 years ago
mindspore-ci-bot
381455638a
!8600 make gaps not lifelong: constraints from neighbor
From: @laiyongqiang
Reviewed-by:
Signed-off-by:
5 years ago
mindspore-ci-bot
05f858e3d6
!8597 remove semi-lifelong for communication op's input's memory
From: @laiyongqiang
Reviewed-by: @kisnwang
Signed-off-by:
5 years ago
mindspore-ci-bot
dcff06a5d6
!8576 [GraphKernel] Disable all simplify patterns except SimplifyReduce in arithmetic_simplify.
From: @dayschan
Reviewed-by:
Signed-off-by:
5 years ago
tronzhang
9d7494f4df
split shape ops for more fusion opportunity.
5 years ago
mindspore-ci-bot
66c6fe3d6a
!8603 handle getnext output tensor as normal lifelong tensor
From: @laiyongqiang
Reviewed-by: @kisnwang
Signed-off-by:
5 years ago
mindspore-ci-bot
07633f2a2a
!8601 independent node uses the somas memory
From: @laiyongqiang
Reviewed-by: @kisnwang,@jjfeing
Signed-off-by: @jjfeing
5 years ago
dayschan
a8bb28437c
Temporarily disable all simplify patterns except SimplifyReduce, because some bizarre errors occur in arithmetic simplify.
5 years ago
liubuyu
5bf70b24bb
adjust dynamic_rnn_grad_fission_v2 position
5 years ago
mindspore-ci-bot
b411c05de0
!8495 Adapt DynamicGRUV2 forward for Ascend new backend.
From: @liu_xiao_93
Reviewed-by:
Signed-off-by:
5 years ago
mindspore-ci-bot
df1d5333db
!8519 Migrate current graph kernel passes to Ascend back-end
From: @looop5
Reviewed-by:
Signed-off-by:
5 years ago
mindspore-ci-bot
24d04b1cb1
!8578 dynamic shape check
From: @wilfchen
Reviewed-by: @limingqi107,@cristoval
Signed-off-by: @cristoval
5 years ago
lingyunli63
a51465c78b
add graphkerneloptimize pass
align fuse_ops_fusion
align composite_ops_fusion
unify ops table
Init new_code's kernel_info with orig_node's kernel_info in function NewCNodeWithInfo
enable run bert
add pass tensor_promotion
add macro for bias_add and bias_add_grad in expander pass
exclude unused attrs in primitive compare for GraphKernelCSE
exclude fusion_type in kernelinfo cmp for cse in graphkernel
check processor
remove graph kernel pass before select kernel
recover run_standalone_pretrain_ascend.sh
remove is_before_kernel_select
move add_atomic_clean from pass directory to graph_kernel directory
update fuse op list in Ascend back-end
5 years ago
Ioannis Lamprou
7e72605fc8
make gaps not lifelong: constraints from neighbor
5 years ago
laiyongqiang
b8821bb2f3
handle getnext output tensor as normal lifelong tensor
5 years ago
laiyongqiang
222159599a
independent node uses the somas memory
5 years ago
laiyongqiang
42c3830938
remove lifelong for communication output
5 years ago
mindspore-ci-bot
3c6b91f6fa
!8548 handle getNextOutput and contiguous conflict
From: @laiyongqiang
Reviewed-by: @kisnwang,@jjfeing
Signed-off-by: @jjfeing
5 years ago
wilfChen
2291b7f2e6
dynamic shape check
5 years ago
liuxiao93
d471ac491e
Adapt DynamicGRUV2 forward for Ascend new backend.
5 years ago
laiyongqiang
0fdc6b547d
handle getNextOutput and contiguous conflict
5 years ago
mindspore-ci-bot
dbe5229c56
!8492 expand maximum_grad, minimum_grad and dropout_grad
From: @zengzitao
Reviewed-by: @gaoxiong1,@ckey_dou
Signed-off-by: @ckey_dou
5 years ago
mindspore-ci-bot
e65c68a723
!8475 DynamicRNNGrad run error: cast Int32Imm
From: @gaojing22
Reviewed-by: @yingjy
Signed-off-by: @yingjy
5 years ago
zengzitao
28f1db74dd
expand maximum_grad minimum_grad dropout_grad op
5 years ago
mindspore-ci-bot
7b70c17fc0
!8449 [GraphKernel] Add Transpose into fusible list; Update akg submodule.
From: @dayschan
Reviewed-by: @gaoxiong1,@ckey_dou
Signed-off-by: @ckey_dou
5 years ago
mindspore-ci-bot
b078954667
!8389 remove multiple circles
From: @lingyunli63
Reviewed-by: @gaoxiong1,@ckey_dou
Signed-off-by: @ckey_dou
5 years ago