60 Commits (d99dfbd83dd6776e87f67c09eb5745ba31fa5732)

Author SHA1 Message Date
  gukecai f8208c7c52 Support GetNext Parallel 6 years ago
  limingqi107 5e01b94ccd gpu dynamic memory pool supports multi-graph 6 years ago
  mindspore-ci-bot 093c2caed4 !337 optimize execute order sort 6 years ago
  mindspore-ci-bot b418b18447 !443 add cpu one hot 6 years ago
  kswang 6775190e48 add cpu one hot 6 years ago
  mindspore-ci-bot 7c06d292c8 !387 auto mix precision 6 years ago
  liubuyu 852e61d46c bug fix 6 years ago
  liubuyu b1585f862d auto mix precision 6 years ago
  liubuyu fc07cd908e add 6d format transfer 6 years ago
  kswang 83eeac9310 optimize execute order sort 6 years ago
  jjfeing f9ef78609f add nc1hwc0_c04 format 6 years ago
  caifubi bce5f57752 use GraphId as key of DavinciModel in ascend_kernel_runtime.cc 6 years ago
  ZPaC b8a9121597 Add GPU send and recv controlling kernels. 6 years ago
  mindspore-ci-bot 58b013c319 !363 clear the warning scan by package 6 years ago
  chenzomi b77f41d658 clear the warning scan by package 6 years ago
  mindspore-ci-bot f9dd47620c !300 refactor kernel select priority scheme 6 years ago
  mindspore-ci-bot 285e258b3b !347 fix e2e dump shape not match 6 years ago
  mindspore-ci-bot 4c3969e12a !351 fix bug of TensorAddGrad single op run fail 6 years ago
  jojobugfree 762bf9ac25 fix tensoradd grad op run fail 6 years ago
  dengwentao 593c4fc700 fix shape used for dump 6 years ago
  chenjianping 1286767d0e support building on windows 6 years ago
  lianliguang 5365678eee refactor kernel select 6 years ago
  mindspore-ci-bot d90e121547 !306 gpu uses dynamic memory pool by default 6 years ago
  mindspore-ci-bot 57aee805ce !274 GPU multiple stream feature 6 years ago
  limingqi107 99f12f9105 gpu uses dynamic memory pool by default 6 years ago
  ZPaC 3ea3d9e5a4 1.GPU supports multiple streams. 6 years ago
  mindspore-ci-bot 7a6fdaf132 !302 fix workspace memory reuse 6 years ago
  mindspore-ci-bot 39b9e831cb !291 disable memory reuse for GetNext op 6 years ago
  kswang e53376092f fix workspace reuse bug 6 years ago
  mindspore-ci-bot 9bda080bb5 !260 refactor padding strategy 6 years ago
  mindspore-ci-bot 8674e0ad96 !288 fix nopnode output bug 6 years ago
  mindspore-ci-bot 94589ce611 !226 extend conv stride and dilation to 2d 6 years ago
  jojobugfree 2aad57c595 getnext disable memory reuse 6 years ago
  kswang ae675c5cf8 fix nopnode output bug 6 years ago
  wangnan39@huawei.com 2604acedcb extend conv stride and dilation to 2d 6 years ago
  lianliguang 5d225f934f change the padding strategy & refactor insert transdata 6 years ago
  mindspore-ci-bot 18b9a0957e !203 fix reshape as output and release mem exception 6 years ago
  kswang b8a7e73f7d fix reshape output and clearres error 6 years ago
  lianliguang 3348e5a7c2 handle special cases in adam's kernel select 6 years ago
  kswang bef62db128 add ascend mem pool 6 years ago
  mindspore-ci-bot 31efc8b088 !172 add mem manager 6 years ago
  kswang fb343bd607 add mem manager 6 years ago
  mindspore-ci-bot cc75cb357c !168 remove mindspore::make_unique and make_unique.h 6 years ago
  jojobugfree effdb483d6 profiling feature enhancement 6 years ago
  Alexey Shevlyakov 84d780c1a4 remove make_unique.h 6 years ago
  wanghua 5b176f258b modify bert test file 6 years ago
  wanghua da123c5b3e fix bert precision bug 6 years ago
  mindspore-ci-bot 3289d7bb69 !109 Fix some typo errors in session and device module 6 years ago
  mindspore-ci-bot 55916351ee !52 remove ge depend 6 years ago
  Wei Luning 73ba399364 remove ge depend in cpu 6 years ago