41 Commits (4e32dad730a58decaa72a67f27237f52ee3fe3e2)

Author SHA1 Message Date
leopz a9e30a96db separate py from debug and utils 5 years ago
mindspore-ci-bot 04398cf88e !1433 add tensor_minnie and separate py from ir 5 years ago
leopz 4508134ceb add tensor_minnie and separate py from ir 5 years ago
huanghui 1d65ae598a extract const_to_attr_strided_slice_grad pass 5 years ago
zhaozhenlong 30b93ecbf8 use reshape as flatten grad 5 years ago
mindspore-ci-bot daccfef738 !1361 Refactor multiple output pass 5 years ago
mindspore-ci-bot 2fef359c4d !1396 Cancel NoOp optimizer on gpu backend until memory reuse ready 5 years ago
wilfChen 56d751b5d2 Cancel NoOp optimizer in GPU until memory reuse ready 5 years ago
huanghui f16ff539ba refactor multiple patterns pass 5 years ago
etone-chan 391a1fa6ac add conv2dbackpropinput eltwise fusion pass 5 years ago
huanghui 5a68eba585 Refactor LambNextMVWithDecayRule fusion pass 6 years ago
mindspore-ci-bot a915cc3bd9 !1225 gpu NoOp optimizer 5 years ago
wilfChen 23b4b4d106 Gpu NoOp optimizer 6 years ago
yujianfeng c8d33568f2 Add an new output to FusedMulApplyMomentum 6 years ago
huanghui c4af71e236 add LarsV2 fission pass 6 years ago
huanghui d4a82951ed fix confusionmulgrad fusion pass may create a loop 6 years ago
Margaret_wangrui 137007be88 only call HasNodeAttr for CNodePtr type 6 years ago
yujianfeng 4a0ddaef59 Support specifying reshape type for batchnorm fused op 6 years ago
etone-chan 38e0d98eb5 refactor fusion id implement of buffer fusion 6 years ago
YuJianfeng aa6f808616 Add batch norm bert fission pass 6 years ago
changzherui b323199dc1 syn code 430 6 years ago
mindspore-ci-bot ef596f26e9 !802 [control sink]move the opt process to build graph 6 years ago
chenfei cc54bb565d move opt to build graph 6 years ago
YuJianfeng dfa66e4d0c Check the empty value tuple when converting it to tuple tensor 6 years ago
YuJianfeng ce2a13fcda Check topk supported before converting input to attr 6 years ago
yanzhenxiang2020 691337a6e1 add aicpu ops of Reshape/Flatten/Squeeze/ExpandDims/IsFinite 6 years ago
mindspore-ci-bot f69a668d98 !350 change tuple output to make tuple 6 years ago
lianliguang 00e4306518 convert all tuple output to maketuple 6 years ago
mindspore-ci-bot f98efafa16 !317 [IRFusion] add derelu_fusion pass 6 years ago
chenfei e017fd8916 share mem of paramter between child graph 6 years ago
mindspore-ci-bot 9b80c160a0 !276 pynative-add-op-supported 6 years ago
mindspore-ci-bot 58a70b5f82 !346 getnext parallel optimization part II: Eliminate Memcpy in specify scenario 6 years ago
lvliang ffe8b5d3ec pynative-add-op-supported 6 years ago
laiyongqiang 3e05f50f5f getnext_memcpy_elimination 6 years ago
chenjianping 1286767d0e support building on windows 6 years ago
huanghui b02e871c1a [IRFusion] add derelu_fusion pass 6 years ago
YuJianfeng e5c67b9088 Add cnode to equal map when opt matching 6 years ago
dinghao 37ba21c271 fix ref pass visit graph bug 6 years ago
Wei Luning 73ba399364 remove ge depend in cpu 6 years ago
GinFung 468dbc3557 Add matmul biasadd fusion pass 6 years ago
zhunaipan 930a1fb0a8 initial version 6 years ago