54 Commits (5cccfbc61ba4e67de63eecacd564373b7ddb0e3a)

Author SHA1 Message Date
  lizhenyu 1b4a7cdeb7 fix mem swap bug 5 years ago
  lizhenyu a3e8c7438f remove useless code 5 years ago
  chenjianping 303aa71c5a build _ms_mpi with mpi_initializer.cc 5 years ago
  chenjianping 343889cdb7 building _ms_mpi with mpi_interface 5 years ago
  gong chen a6dfa281ea Init GraphKernel. 5 years ago
  limingqi107 0f4397cece fix all nop node graph execute 5 years ago
  mindspore-ci-bot 4642df207a !2210 gpu optimize the max device memory config 5 years ago
  limingqi107 55b3557c0d gpu optimize the max device memory config 5 years ago
  limingqi107 20083679a0 gpu memreuse supports summary node 5 years ago
  mindspore-ci-bot ee656d2141 !2101 Fix code review warning. 5 years ago
  ZPaC 2f2cae3c61 Fix code review warning in gpu_stream_assign.cc 5 years ago
  limingqi107 da2b93916e gpu device code review 5 years ago
  lizhenyu 2d16316c55 fix code review warnings 5 years ago
  mindspore-ci-bot 259341b9ba !1985 GPU update datatype info for akg kernel 5 years ago
  mindspore-ci-bot c3d78e2aa2 !2012 gpu support the max device memory config 5 years ago
  limingqi107 5054b53f47 gpu support the max device memory config 5 years ago
  lizhenyu 0a55ebf6e9 fix code review defects 5 years ago
  VectorSL eb671300b5 gpu update datatype info for akg kernel 5 years ago
  chujinjin dde03ce944 add async ops execute for pynative 5 years ago
  chenjianping af8108c9e1 support host reduce scatter and allgather 5 years ago
  kswang c63729b8e6 support mix target 5 years ago
  lizhenyu fa4d9b2846 fix resource release bug of memory swap 5 years ago
  panfengfeng 636d419af3 gpu iterator weak ref opt 5 years ago
  lizhenyu fec235fcb5 change the default memory copy way to async 5 years ago
  ZPaC d9bcdac3dc Fix result error when calling AllReduce serially. 5 years ago
  lizhenyu 23a57476da change gpu kernel runtime to support memory swap 5 years ago
  lizhenyu c5ac2cc38c add memory copy module 5 years ago
  caifubi 5b963aef2b Change uintptr_t to void ptr 5 years ago
  VectorSL 9996e0d4d2 gpu update shape infer 5 years ago
  wilfChen ccf6dabe13 gpu queue support multi-inputs 5 years ago
  limingqi107 d9197b591a gpu optimize the use of reference count 5 years ago
  limingqi107 63f3a2caac gpu optimize some return values of dynamic memory pool 5 years ago
  lizhenyu 3a4c28fa33 change directory of akg cuda kernel 5 years ago
  limingqi107 664f2628e5 optimize gpu allReduce alloc memory performance 5 years ago
  limingqi107 0f0e8fe874 gpu dynamic memory pool can not reuse allReduce in multi-stream 5 years ago
  mindspore-ci-bot ca3aa6071a !527 gpu dynamic memory pool supports multi-allReduce 5 years ago
  limingqi107 2891f0d20d gpu dynamic memory pool supports multi-allReduce 5 years ago
  VectorSL b8d7cd9775 gpu change compute capacity strategy 5 years ago
  zhoufeng c2b3360d69 update clang format rule 5 years ago
  limingqi107 5e01b94ccd gpu dynamic memory pool supports multi-graph 6 years ago
  ZPaC b8a9121597 Add GPU send and recv controlling kernels. 6 years ago
  mindspore-ci-bot 58b013c319 !363 clear the warning scan by package 6 years ago
  chenzomi b77f41d658 clear the warning scan by package 6 years ago
  chenjianping 1286767d0e support building on windows 6 years ago
  mindspore-ci-bot d90e121547 !306 gpu uses dynamic memory pool by default 6 years ago
  limingqi107 99f12f9105 gpu uses dynamic memory pool by default 6 years ago
  ZPaC 3ea3d9e5a4 1. GPU supports multiple streams. 6 years ago
  kswang b8a7e73f7d fix reshape output and clearres error 6 years ago
  kswang bef62db128 add ascend mem pool 6 years ago
  mindspore-ci-bot 31efc8b088 !172 add mem manager 6 years ago