Author | Commit | Message | Date
kswang | e9067b4a10 | add internal output | 5 years ago
mindspore-ci-bot | 9b326fe8c5 | !2625 Fix code review problems of session (Merge pull request !2625 from Margaret_wangrui/review_code_second) | 5 years ago
Margaret_wangrui | 3ea75e0d98 | review code second | 5 years ago
limingqi107 | 4949f6ca39 | optimize the graph output of all nop node | 5 years ago
Shida He | 4c056855e0 | Implementation for mindspore debugger | 5 years ago
limingqi107 | 7dca9bfb37 | gpu add the graph cache of pynative mode | 5 years ago
chujinjin | dde03ce944 | add async ops excute for pynative | 5 years ago
kswang | c63729b8e6 | support mix target | 5 years ago
mindspore-ci-bot | 6cf5d0d543 | !1580 Release runtime resource in KernelGraph Destructor (Merge pull request !1580 from caifubi/fix-multi-graph-device-resource-bug) | 5 years ago
caifubi | 5f2bf782f1 | fix run out of runtime resource bug | 5 years ago
leopz | 4508134ceb | add tensor_minnie and separate py from ir | 5 years ago
zhoufeng | f868a2855f | Insert assign nodes for linking sub graph (Signed-off-by: zhoufeng <zhoufeng54@huawei.com>) | 5 years ago
caifubi | 7d07e17f5a | Support Common Hccl Op: 1. Support Broadcast op; 2. Support communication op as graph output; 3. Optimize Communication op memory alocation; 4. support hccl multi-group | 5 years ago
limingqi107 | 2891f0d20d | gpu dynamic memory pool supports multi-allReduce | 5 years ago
lvliang | 0e4824cd89 | pynative-support-topk-and-print | 5 years ago
kswang | bef62db128 | add ascend mem pool | 6 years ago
kswang | fb343bd607 | add mem manager | 6 years ago
kswang | 04be6a37f0 | add getptr for memreuse | 6 years ago
zhunaipan | 930a1fb0a8 | initial version (Signed-off-by: leonwanghui <leon.wanghui@huawei.com>) | 6 years ago