203 Commits (bada826b18f3e4dc8aac0f82af3457794de9c7bb)

Author SHA1 Message Date
  WilliamLian 0179724dcd split unsupported transdata into two transdata: special format -> default, default -> special 5 years ago
  mindspore-ci-bot 568da0d510 !3425 fix avgpoolgrad 5 years ago
  mindspore-ci-bot 8e3d788942 !3349 move the dependency of utils to core 5 years ago
  fangzehua 86dd0c583a fix avgpool grad 5 years ago
  mindspore-ci-bot 295038d346 !3324 add reduce_any op for vm 5 years ago
  liubuyu f4bc0bc9fe move the dependency of utils to core 5 years ago
  chenfei 1f1a07e645 don't insert assign from condition to true branch of while 5 years ago
  huanghui 6316a03c67 handle tuple getitem control for newly added memcpy 5 years ago
  root 1b6f85dec8 split tuple parameter to parameters 5 years ago
  fangzehua 228a959cc7 add reduce any op for vm 5 years ago
  yujianfeng 4d18e9ec35 Fix internal multiple outputs check 5 years ago
  huanghui f1563d2d37 insert memcpy async if hccl op cascade 5 years ago
  lizhenyu c67e562373 refine GPU memory swap performance 5 years ago
  mindspore-ci-bot 11145b0987 !3216 Add op mapping attr for those opt pass worked in LeNet 5 years ago
  mindspore-ci-bot cfafdcbcf0 !3246 refine gpu memory swap performance 5 years ago
  lizhenyu 3ace75509b refine gpu memory swap performance 5 years ago
  huanghui b25e114840 add op mapping attr for those pass worked in LeNet 5 years ago
  mindspore-ci-bot 72a2b7d496 !3117 do not reuse ref node input's memory 5 years ago
  mindspore-ci-bot bae2f964e5 !3213 Unified code style 5 years ago
  mindspore-ci-bot a009823498 !3223 clean review bot warning 5 years ago
  mindspore-ci-bot 485ac8384b !3162 split tuple output node to maketuple 5 years ago
  mindspore-ci-bot 28f873e9ad !3203 GPU fix cast fusion bug 5 years ago
  WilliamLian 6ed2e636e1 clean review bot warning 5 years ago
  laiyongqiang acba03b191 do not reuse ref node input's memory 5 years ago
  liubuyu 76dc80e7b7 Unified code style 5 years ago
  WilliamLian d10d1a17f0 split valuenode & parameter's tuple output to maketuple 5 years ago
  mindspore-ci-bot ab53809f2c !3196 fix precision error with fp16 input on pynative mode 5 years ago
  mindspore-ci-bot 6f8863b65d !3198 synchronize latest Ascend software suite 18 Jul 2020, and merging branches 5 years ago
  yanghaoran 859acc6d2a synchronize latest Ascend software suite 18 Jul 2020, and merging branches 5 years ago
  mindspore-ci-bot 419022f2a5 !3194 Add concat and pack fission pass 5 years ago
  VectorSL 80ed8e0e5c fix gpu cast fusion bug 5 years ago
  chujinjin 1f809f50e5 fix precision error with fp16 input on PyNative mode 5 years ago
  yujianfeng fa0684d12d Add pack and concat fission pass 5 years ago
  mindspore-ci-bot 8a8de7e062 !3171 gpu fix the graph of 'nop node + depend + node' 5 years ago
  laiyongqiang b570dec7ab add right align border for communication op's single output 5 years ago
  limingqi107 a596dd6e43 gpu fix the graph of 'nop node + depend + node' 5 years ago
  yujianfeng 188d74f15e Remove transdata and cast for internal outputs 5 years ago
  mindspore-ci-bot 7233d650f0 !3063 Enable training in parameter server mode 5 years ago
  ZPaC 52022c8013 Enable training in parameter server mode 5 years ago
  mindspore-ci-bot ae50c37c38 !3092 GPU add fusion: replace momentum cast 5 years ago
  VectorSL 140174182d gpu add fusion: replace momentum cast 5 years ago
  mindspore-ci-bot b64fca6e05 !3091 GPU add fusion: replace batchnormgrad cast 5 years ago
  mindspore-ci-bot 0c7811522f !3090 GPU add fusion 5 years ago
  changzherui f4cb445ea8 sync code for 0715 5 years ago
  mindspore-ci-bot a581766be4 !3069 Enable optimizer parallel with broadcast 5 years ago
  mindspore-ci-bot 25ee322ba3 !2966 reuse communication op output's memory 5 years ago
  laiyongqiang 68c78ab6bb reuse communication op output's memory 5 years ago
  Ziyan 39f08eb7dd enable optimizer parallel 5 years ago
  VectorSL 4cf7faeae6 gpu add fusion: replace batchnorm grad cast 5 years ago
  VectorSL 072b09b3fd gpu add fusion: 1 replace bn cast 2 replace addn by tensoradd 5 years ago