589 Commits (7df661fbafec45ff00a034f331ccca8035ced0e0)

Author SHA1 Message Date
  dingpeifei 87e41aaeee IR operators of GPU and CPU are unified as batchnorm 5 years ago
  ms_yan 92e86804e1 init add acltdt handle create and destroy 5 years ago
  mindspore-ci-bot 2013e3f370 !13216 If data_format is NCDHW, BatchNorm to BatchNorm3D. 5 years ago
  mindspore-ci-bot 654771df13 !13080 fix embeddinglookup infer 5 years ago
  liuxiao93 d44c706baf batchnorm to batchnorm3d. 5 years ago
  fangzehua dadbd54f0e add embedding infer 5 years ago
  l00591931 bbdb050fc7 Change switch to Switch 5 years ago
  mindspore-ci-bot 54c37bcd61 !12947 Add MaxPool3D,MaxPool3DGrad,MaxPool3DGradGrad ops for Ascend. 5 years ago
  mindspore-ci-bot c69142fdc1 !12968 update reshape type for 3d nodes 5 years ago
  mindspore-ci-bot 54fc5e0d2b !12234 [GraphKernel] Support pipeline optimization for parallel fusion. 5 years ago
  liuxiao93 35f6ba9011 Add MaxPool3D,MaxPool3DGrad,MaxPool3DGradGrad ops for Ascend. 5 years ago
  liubuyu 518818fbef reshape type for 3d nodes 5 years ago
  mindspore-ci-bot e00d8cd1d6 !13020 Not save InitDatasetQueue and GetNext op in PyNative Mode 5 years ago
  tronzhang 7252ffb66b pipeline optimization for parallel fusion 5 years ago
  tanghuikang 6102202abd Not save InitDatasetQueue and GetNext op in PyNative Mode 5 years ago
  mindspore-ci-bot fa4c19f938 !13002 3d format bug fix 5 years ago
  liubuyu 62aa7d0e87 bug fix for 3d format 5 years ago
  He Wei 891fd7df92 [auto-monad] Refactor ascend_auto_monad 5 years ago
  Zhang Qinghua df866f7248 Add TopoSort Rhs First attribute for special CNode, such as Depend CNode with isolated nodes. 5 years ago
  mindspore-ci-bot 423dcfc917 !12836 Change return in core_ops 5 years ago
  mindspore-ci-bot 2f312dac66 !12091 Performance optimization for PyNative AllReduce 5 years ago
  mindspore-ci-bot 4365c332e6 !12813 unify AvgPoolGrad's MindIR 5 years ago
  mindspore-ci-bot c529cfa427 !12754 auto tune step one construct json 5 years ago
  l00591931 cf7c5840e3 Change return 5 years ago
  yuchaojie d2cb3aa1c2 unify AvgPoolGrad 5 years ago
  caifubi 171b468bb3 PyNative AllReduce Bucket 5 years ago
  liubuyu 2d97244741 auto tune stage one: construct json 5 years ago
  simson c29d8f66d8 fix precision error after cache modification 5 years ago
  yepei6 3347aa1e02 update tensor_type 5 years ago
  zhupuxu b15d182cd2 fix bug for dynamic_shape_depends 5 years ago
  buxue 47dd17a325 support convert ValueDict to py dict 5 years ago
  mindspore-ci-bot a063d7633d !12241 [auto-monad] Support side-effects by auto-monad 5 years ago
  He Wei 7d9a783993 [auto-monad] Support side-effects by auto-monad 5 years ago
  liu_xiao_93 fabc25538e Add BCEWithLogitsLoss 5 years ago
  gongxiaoqing 7f538b51e7 Revert 'Pull Request !11074 : replace tdt with acltdt interface' 5 years ago
  mindspore-ci-bot c2582dcab9 !11074 replace tdt with acltdt interface 5 years ago
  jjfeing 502be04491 upgrade 0204 5 years ago
  yepei6 1b4633f31f update acltdt api 5 years ago
  ms_yan 293f81128d init add acltdt handle create and destroy 5 years ago
  mindspore-ci-bot a24ff36d9c !11777 stitch fusion 5 years ago
  l00591931 9ec100d069 Change TensorAdd to Add, from r1.1 to master 5 years ago
  r1chardf1d0 9d6392c5c5 stitch info 5 years ago
  mindspore-ci-bot ce89cc5e8b !11761 Change GatherV2 to Gather (merge from r1.1 to master) 5 years ago
  liuxiao93 68e9be725e split optimizer 5 years ago
  mindspore-ci-bot 9fa0499fa0 Change GatherV2 to Gather, from r1.1 to master 5 years ago
  lizhenyu f17534af08 ps cache support sparse 5 years ago
  mindspore-ci-bot ca675c0521 !11665 [GraphKernel] Add parallel fusion support to master. 5 years ago
  TFBunny 6cd7dc42e9 add testcases and dynamic shape to reduce ops 5 years ago
  tronzhang d078cbfa99 support parallel fusion 5 years ago
  yujianfeng 266e960acb Not do cse for the nodes set recomputed before recompute pass 5 years ago