558 Commits (b56fc0c2af8a3dc8729237b5e1e6e4e4e5d45dfa)

Author SHA1 Message Date
  mindspore-ci-bot a063d7633d !12241 [auto-monad] Support side-effects by auto-monad 5 years ago
  He Wei 7d9a783993 [auto-monad] Support side-effects by auto-monad 5 years ago
  liu_xiao_93 fabc25538e Add BCEWithLogitsLoss 5 years ago
  gongxiaoqing 7f538b51e7 Revert 'Pull Request !11074 : replace tdt with acltdt interface' 5 years ago
  mindspore-ci-bot c2582dcab9 !11074 replace tdt with acltdt interface 5 years ago
  jjfeing 502be04491 upgrade 0204 5 years ago
  yepei6 1b4633f31f update acltdt api 5 years ago
  ms_yan 293f81128d init add acltdt handle create and destroy 5 years ago
  mindspore-ci-bot a24ff36d9c !11777 stitch fusion 5 years ago
  l00591931 9ec100d069 Change TensorAdd to Add, from r1.1 to master 5 years ago
  r1chardf1d0 9d6392c5c5 stitch info 5 years ago
  mindspore-ci-bot ce89cc5e8b !11761 Change GatherV2 to Gather (merge from r1.1 to master) 5 years ago
  liuxiao93 68e9be725e split optimizer 5 years ago
  mindspore-ci-bot 9fa0499fa0 Change GatherV2 to Gather r1.1 to master 5 years ago
  lizhenyu f17534af08 ps cache support sparse 5 years ago
  mindspore-ci-bot ca675c0521 !11665 [GraphKernel] Add parallel fusion support to master. 5 years ago
  TFBunny 6cd7dc42e9 add testcases and dynamic shape to reduce ops 5 years ago
  tronzhang d078cbfa99 support parallel fusion 5 years ago
  yujianfeng 266e960acb Not do cse for the nodes set recomputed before recompute pass 5 years ago
  mindspore-ci-bot 910772cea8 !11309 Add Hierarchical Occlusion Counterfactual 5 years ago
  xsmq 73b7154e55 fix cmakelint error 5 years ago
  mindspore-ci-bot f8f6421459 !10968 Add dynamic shape support for the operator Concat 5 years ago
  weiyang 4029b411c9 for switch layer 5 years ago
  unknown d621b9d4ea Add HOC modules and support uncertainty in runner 5 years ago
  hedongdong 8241dfa443 Add dynamic shape support for the operator Concat 5 years ago
  yangzhenzhang 7303c3d3b8 add group ckpt 5 years ago
  mindspore-ci-bot 2ea8527de3 !11314 add cache embedding for wide&deep model 5 years ago
  fangzehua f97e19f23f add cache pass 5 years ago
  yuchaojie 1932d87a26 update some op's attr name 5 years ago
  yuchaojie b51b3a6764 update Pool's attr kernel_size, pad_mode 5 years ago
  zhouyuanshen 26f6daa850 add new op instancenorm2d 5 years ago
  mindspore-ci-bot 92a85d1061 !11075 dynamic op re primitive when infer 5 years ago
  liubuyu 39cc9e70cd dynamic op re primitive when infer 5 years ago
  mindspore-ci-bot e2f344f74a !10086 fix shape of CTCGreedyDecoder 5 years ago
  yanzhenxiang2020 b8b608f672 fix shape of CTCGreedyDecoder 5 years ago
  buxue 6d395c1d3f keep consistent in Graph mode and PyNative mode for 'isinstance' 5 years ago
  liubuyu 119c7010a4 insert reformat op 5 years ago
  mindspore-ci-bot 6d51fc558f !10391 enable loop sink when no getnext in execution orders 5 years ago
  laiyongqiang d417dddb24 enable loop sink when no getnext in execution orders 5 years ago
  fangzehua 4da4c0fc55 add dynamic assign, pad_and_shift kernel 5 years ago
  yanghaoran b8345d03b6 Synchronize latest Ascend software 18 Dec 2020, with profiler fixes 5 years ago
  lizhenyu 4269dcece5 ps cache support save checkpoint 5 years ago
  mindspore-ci-bot ffe61081d3 !10189 fix shape type error when dynamic_kernel shape type is compute_depend 5 years ago
  mindspore-ci-bot 88155de042 !10159 momentum weight fusion 5 years ago
  wilfChen 09e10e18bb momentum weightdecay fusion 5 years ago
  liubuyu 4d75d7b992 fix shape type error 5 years ago
  jinyaohui 14abdd01be modify tsd api 5 years ago
  HuangBingjian 311e7be605 fix scalar tensor shape=[] 5 years ago
  mindspore-ci-bot d8a64b4ac4 !9796 Add SpaceToDepth fission pass to fix bug when data type is float16. 5 years ago
  liuxiao93 2bbd97d334 Add SpaceToDepth fission pass. 5 years ago