237 Commits (b6264c297ea06f5fa254b1670546ec0acc264dcb)

| Author | SHA1 | Message | Date |
| --- | --- | --- | --- |
| mindspore-ci-bot | 4e741f8aa6 | !16701 gpu matmul and biasadd fusion | 4 years ago |
| changzherui | d9e2da299d | Revert "!16599 c++ infer for conv2dbackpropfilter and conv2dbackpropinput" | 4 years ago |
| wangnan39@huawei.com | 937acff29b | dropout support dynamic shape | 4 years ago |
| changzherui | ea04c4304f | Revert "!16693 add Conv2dTranspose" | 4 years ago |
| changzherui | 2c41833cfa | add Conv2dTranspose | 4 years ago |
| wilfChen | b2242d13c4 | gpu matmul biasadd fusion | 4 years ago |
| caifubi | 0928682655 | Profiling support Control Flow | 4 years ago |
| l00591931 | befc7a9dea | Add primal_attr to link between forward and backward node attr | 5 years ago |
| zuochuanyong | e7ea343738 | add format transform pass on cpu | 5 years ago |
| jjfeing | 88c92cd263 | clear parameter when param_info clone | 5 years ago |
| mindspore-ci-bot | 78469f6083 | !15356 Support mem reuse in control flow and multi-call subgraphs | 5 years ago |
| liangzelang | 052a803c63 | adapt to mem reuse | 5 years ago |
| limingqi107 | c937a22bda | add the actor link by auto monad | 5 years ago |
| luopengting | 727dc08bfa | fix TAINTED_SCALAR and capitalize constants | 5 years ago |
| liubuyu | 40f34b0d90 | 3d graph reconstruct | 5 years ago |
| mindspore-ci-bot | 7f4994af7c | !14186 Support while bprop | 5 years ago |
| liangzelang | ba65fb9f3c | Support non-tail recursive graphs | 5 years ago |
| lingyunli63 | 4b966ed40d | support matmul on D | 5 years ago |
| liuxiao93 | 723bbac438 | revert nn.BatchNorm3d. | 5 years ago |
| mindspore-ci-bot | efe95ebbce | !13724 optimize execute order for commops | 5 years ago |
| kswang | dc543f3f1e | optimize execute order for commops | 5 years ago |
| mindspore-ci-bot | cf5eaf8590 | !13050 Don't insert UpdateState for HyperMap func graph call, move auto monad eliminator out from CSE, and eliminate auto monad nodes for output node. | 5 years ago |
| Zhang Qinghua | e853df4ecd | Don't insert UpdateState for HyperMap func graph call. | 5 years ago |
| dingpeifei | 87e41aaeee | IR operators of GPU and CPU are unified as batchnorm | 5 years ago |
| mindspore-ci-bot | 2013e3f370 | !13216 If data_format is NCDHW, BatchNorm to BatchNorm3D. | 5 years ago |
| mindspore-ci-bot | 654771df13 | !13080 fix embeddinglookup infer | 5 years ago |
| liuxiao93 | d44c706baf | batchnorm to batchnorm3d. | 5 years ago |
| fangzehua | dadbd54f0e | add embedding infer | 5 years ago |
| l00591931 | bbdb050fc7 | Change switch to Switch | 5 years ago |
| mindspore-ci-bot | 54c37bcd61 | !12947 Add MaxPool3D,MaxPool3DGrad,MaxPool3DGradGrad ops for Ascend. | 5 years ago |
| mindspore-ci-bot | c69142fdc1 | !12968 update reshape type for 3d nodes | 5 years ago |
| mindspore-ci-bot | 54fc5e0d2b | !12234 [GraphKernel] Support pipeline optimization for parallel fusion. | 5 years ago |
| liuxiao93 | 35f6ba9011 | Add MaxPool3D,MaxPool3DGrad,MaxPool3DGradGrad ops for Ascend. | 5 years ago |
| liubuyu | 518818fbef | reshape type for 3d nodes | 5 years ago |
| mindspore-ci-bot | e00d8cd1d6 | !13020 Not save InitDatasetQueue and GetNext op in PyNative Mode | 5 years ago |
| tronzhang | 7252ffb66b | pipeline optimization for parallel fusion | 5 years ago |
| tanghuikang | 6102202abd | Not save InitDatasetQueue and GetNext op in PyNative Mode | 5 years ago |
| mindspore-ci-bot | fa4c19f938 | !13002 3d format bug fix | 5 years ago |
| liubuyu | 62aa7d0e87 | bug fix for 3d format | 5 years ago |
| He Wei | 891fd7df92 | [auto-monad] Refactor ascend_auto_monad | 5 years ago |
| Zhang Qinghua | df866f7248 | Add TopoSort Rhs First attribute for special CNode, such as Depend CNode with isolated nodes. | 5 years ago |
| mindspore-ci-bot | 423dcfc917 | !12836 Change return in core_ops | 5 years ago |
| mindspore-ci-bot | 2f312dac66 | !12091 Performance optimization for PyNative AllReduce | 5 years ago |
| mindspore-ci-bot | 4365c332e6 | !12813 unify AvgPoolGrad's MindIR | 5 years ago |
| mindspore-ci-bot | c529cfa427 | !12754 auto tune step one construct json | 5 years ago |
| l00591931 | cf7c5840e3 | Change return | 5 years ago |
| yuchaojie | d2cb3aa1c2 | unify AvgPoolGrad | 5 years ago |
| caifubi | 171b468bb3 | PyNative AllReduce Bucket | 5 years ago |
| liubuyu | 2d97244741 | auto tune stage one: construct json | 5 years ago |
| simson | c29d8f66d8 | fix precision error after cache modification | 5 years ago |