51 Commits (c94dea6a512eddb6cbe8b591268d82d7b9aa3209)

Author SHA1 Message Date
  He Wei 43e0967024 Decouple ir::Tensor class from python 5 years ago
  gong chen a6dfa281ea Init GraphKernel. 5 years ago
  yujianfeng 7ad877a948 Add Split fission pass 5 years ago
  huanghui 8463731bcc make the AdamXX and LambXX fusion passes not work for unexpected data types 5 years ago
  limingqi107 b83f90a8d8 gpu optimize Nop node 5 years ago
  mindspore-ci-bot 971f10d222 !1790 remove transdata only connected with control depend 5 years ago
  WilliamLian b86016a26f remove the useless transdata and cast of control depend node 5 years ago
  huanghui 4acb61d59d code review fix for buffer fusion 5 years ago
  huanghui 88eec2b894 fix single-batchnorm-fission && softmax-grad-ext-fusion passes 5 years ago
  huanghui b4c0ed4b36 add single batchnorm fission pass 5 years ago
  leopz a9e30a96db separate py from debug and utils 5 years ago
  mindspore-ci-bot 04398cf88e !1433 add tensor_minnie and separate py from ir 5 years ago
  leopz 4508134ceb add tensor_minnie and separate py from ir 5 years ago
  huanghui 1d65ae598a extract const_to_attr_strided_slice_grad pass 5 years ago
  zhaozhenlong 30b93ecbf8 use reshape as flatten grad 5 years ago
  mindspore-ci-bot daccfef738 !1361 Refactor multiple output pass 5 years ago
  mindspore-ci-bot 2fef359c4d !1396 Cancel NoOp optimizer on gpu backend until memory reuse ready 5 years ago
  wilfChen 56d751b5d2 Cancel NoOp optimizer in GPU until memory reuse ready 5 years ago
  huanghui f16ff539ba refactor multiple patterns pass 5 years ago
  etone-chan 391a1fa6ac add conv2dbackpropinput eltwise fusion pass 5 years ago
  huanghui 5a68eba585 Refactor LambNextMVWithDecayRule fusion pass 5 years ago
  mindspore-ci-bot a915cc3bd9 !1225 gpu NoOp optimizer 5 years ago
  wilfChen 23b4b4d106 Gpu NoOp optimizer 5 years ago
  yujianfeng c8d33568f2 Add a new output to FusedMulApplyMomentum 5 years ago
  huanghui c4af71e236 add LarsV2 fission pass 5 years ago
  huanghui d4a82951ed fix confusionmulgrad fusion pass that may create a loop 5 years ago
  Margaret_wangrui 137007be88 only call HasNodeAttr for CNodePtr type 5 years ago
  yujianfeng 4a0ddaef59 Support specifying reshape type for batchnorm fused op 5 years ago
  etone-chan 38e0d98eb5 refactor fusion id implement of buffer fusion 5 years ago
  YuJianfeng aa6f808616 Add batch norm bert fission pass 5 years ago
  changzherui b323199dc1 sync code 430 5 years ago
  mindspore-ci-bot ef596f26e9 !802 [control sink]move the opt process to build graph 5 years ago
  chenfei cc54bb565d move opt to build graph 6 years ago
  YuJianfeng dfa66e4d0c Check the empty value tuple when converting it to tuple tensor 5 years ago
  YuJianfeng ce2a13fcda Check topk supported before converting input to attr 5 years ago
  yanzhenxiang2020 691337a6e1 add aicpu ops of Reshape/Flatten/Squeeze/ExpandDims/IsFinite 5 years ago
  mindspore-ci-bot f69a668d98 !350 change tuple output to make tuple 6 years ago
  lianliguang 00e4306518 convert all tuple output to maketuple 6 years ago
  mindspore-ci-bot f98efafa16 !317 [IRFusion] add derelu_fusion pass 6 years ago
  chenfei e017fd8916 share memory of parameter between child graphs 6 years ago
  mindspore-ci-bot 9b80c160a0 !276 pynative-add-op-supported 6 years ago
  mindspore-ci-bot 58a70b5f82 !346 getnext parallel optimization part II: Eliminate Memcpy in specified scenario 6 years ago
  lvliang ffe8b5d3ec pynative-add-op-supported 6 years ago
  laiyongqiang 3e05f50f5f getnext_memcpy_elimination 6 years ago
  chenjianping 1286767d0e support building on windows 6 years ago
  huanghui b02e871c1a [IRFusion] add derelu_fusion pass 6 years ago
  YuJianfeng e5c67b9088 Add cnode to equal map when opt matching 6 years ago
  dinghao 37ba21c271 fix ref pass visit graph bug 6 years ago
  Wei Luning 73ba399364 remove ge depend in cpu 6 years ago
  GinFung 468dbc3557 Add matmul biasadd fusion pass 6 years ago