16 Commits (04bc2a938eef08dd1231a2a82f6d4e4e8dd258ea)

Author            SHA1        Message                                                              Date
chenhaozhe        04bc2a938e  fix performance of bert                                              6 years ago
mindspore-ci-bot  2fef359c4d  !1396 Cancel NoOp optimizer on gpu backend until memory reuse ready  6 years ago
wilfChen          56d751b5d2  Cancel NoOp optimizer in GPU until memory reuse ready                6 years ago
huanghui          5a68eba585  Refactor LambNextMVWithDecayRule fusion pass                         6 years ago
mindspore-ci-bot  a915cc3bd9  !1225 gpu NoOp optimizer                                             6 years ago
wilfChen          23b4b4d106  Gpu NoOp optimizer                                                   6 years ago
huanghui          d4a82951ed  fix confusionmulgrad fusion pass may create a loop                   6 years ago
yujianfeng        4a0ddaef59  Support specifying reshape type for batchnorm fused op               6 years ago
changzherui       b323199dc1  syn code 430                                                         6 years ago
YuJianfeng        dfa66e4d0c  Check the empty value tuple when converting it to tuple tensor       6 years ago
YuJianfeng        ce2a13fcda  Check topk supported before converting input to attr                 6 years ago
yanzhenxiang2020  691337a6e1  add aicpu ops of Reshape/Flatten/Squeeze/ExpandDims/IsFinite         6 years ago
lianliguang       00e4306518  convert all tuple output to maketuple                                6 years ago
chenfei           e017fd8916  share mem of paramter between child graph                            6 years ago
lvliang           ffe8b5d3ec  pynative-add-op-supported                                            6 years ago
zhunaipan         930a1fb0a8  initial version                                                      6 years ago