42 Commits (693b4cfdc8aaffd69014e94f5909115cd2b7bcff)

Author | SHA1 | Message | Date
tanghuikang | dac64f30ee | Support ms_function + heterogenous | 5 years ago
caifubi | 171b468bb3 | PyNative AllReduce Bucket | 5 years ago
zuochuanyong | 3fa26683ac | nlp perf(Pynative): change memory sync mode from synchronous to asynchronous in SyncHostToDevice | 5 years ago
lizhenyu | f17534af08 | ps cache support sparse | 5 years ago
yanghaitao1 | 8d147deb07 | profiler memory | 5 years ago
yanghaoran | b8345d03b6 | Synchronize latest Ascend software 18 Dec 2020, with profiler fixes | 5 years ago
limingqi107 | a844d52b42 | add ps cache check | 5 years ago
lizhenyu | e3f7ae61db | add ps cache manager | 5 years ago
HulkTang | c36b477568 | Run ops one by one in pynative bp graph | 5 years ago
mindspore-ci-bot | 0d49650bd5 | !8888 Add PyNative Device Profiling | 5 years ago
caifubi | 12643a33dd | Add profiling reporter | 5 years ago
kswang | 62ae6802dc | fix context null error | 5 years ago
Harshvardhan Gupta | 744355a005 | remove dbg args from runtime and remove needless argument from IsWatchpoint | 5 years ago
caifubi | 9b76f4ed57 | get physical device for gpu | 5 years ago
caifubi | d3b978147f | Ascend Dynamic Shape | 5 years ago
kswang | 11989b5e30 | enable async run | 5 years ago
Harshvardhan Gupta | 7c5e0541ba | load inputs before suspending execution in dbg | 5 years ago
laiyongqiang | bd8aeefd95 | disable memory reuse for selected op in e2e dump | 5 years ago
mindspore-ci-bot | c543db0585 | !6180 clean codex warning in memreuse | 5 years ago
laiyongqiang | 4063a69846 | clean codex warning | 5 years ago
caifubi | 372c2e7951 | Combine Async Dump and E2E Dump | 5 years ago
lichen_101010 | dffa61b228 | send info when training is done | 5 years ago
John Tzanakakis | b0a7ebdeb0 | enable debugger by default and set correct log message severity | 5 years ago
mindspore-ci-bot | 0c316e522d | !5866 clean idle mem at proper time | 5 years ago
liangzelang | 4c7291078c | clean idle mem in the beginning of ascend session | 5 years ago
Zhang Qinghua | c0070d3d49 | Use the unified Execute function to run Graph or Single Op Graph. | 5 years ago
Zhang Qinghua | 5ac60ff202 | Move ascend dependent functions to ascend kernel runtime. | 5 years ago
limingqi107 | 7ec2f6a550 | clear graph output address in graph destructor | 5 years ago
limingqi107 | 7029a861d7 | add kernel release resource | 5 years ago
liubuyu | d81862a916 | decoupling core and context | 5 years ago
z00505269 | 87668d6ea2 | remove predict | 5 years ago
lvchangquan | fdbe4c19ba | use kernel_runtime::mem_manager to reduce rtMalloc and rtFree time in trans data format | 5 years ago
liubuyu | f4bc0bc9fe | move the dependency of utils to core | 5 years ago
kpy | 570da089a8 | set output value for dynamic graph | 5 years ago
John Tzanakakis | b3c0eb61d5 | GPU debugger - milestone 1 and GPU dump | 5 years ago
mindspore-ci-bot | 72a2b7d496 | !3117 not reuse ref node input's memory | 5 years ago
mindspore-ci-bot | bae2f964e5 | !3213 Unified code style | 5 years ago
laiyongqiang | acba03b191 | not reuse ref node input's memory | 5 years ago
liubuyu | 76dc80e7b7 | Unified code style | 5 years ago
lvchangquan | 7b48a122dd | insert trans_data to reduce time in print process | 5 years ago
laiyongqiang | 68c78ab6bb | reuse communication op output's memory | 5 years ago
liubuyu | 43c79eb853 | mindspore path adjust | 5 years ago