186 Commits (b56fc0c2af8a3dc8729237b5e1e6e4e4e5d45dfa)

| Author | SHA1 | Message | Date |
|---|---|---|---|
| mindspore-ci-bot | a063d7633d | !12241 [auto-monad] Support side-effects by auto-monad | 5 years ago |
| He Wei | 7d9a783993 | [auto-monad] Support side-effects by auto-monad | 5 years ago |
| liu_xiao_93 | fabc25538e | Add BCEWithLogitsLoss | 5 years ago |
| mindspore-ci-bot | a24ff36d9c | !11777 stitch fusion | 5 years ago |
| l00591931 | 9ec100d069 | Change TensorAdd to Add, from r1.1 to master | 5 years ago |
| r1chardf1d0 | 9d6392c5c5 | stitch info | 5 years ago |
| mindspore-ci-bot | ce89cc5e8b | !11761 Change GatherV2 to Gather (merge from r1.1 to master) | 5 years ago |
| liuxiao93 | 68e9be725e | split optimizer | 5 years ago |
| mindspore-ci-bot | 9fa0499fa0 | Change GatherV2 to Gather r1.1 to master | 5 years ago |
| lizhenyu | f17534af08 | ps cache support sparse | 5 years ago |
| mindspore-ci-bot | ca675c0521 | !11665 [GraphKernel] Add parallel fusion support to master. | 5 years ago |
| TFBunny | 6cd7dc42e9 | add testcases and dynamic shape to reduce ops | 5 years ago |
| tronzhang | d078cbfa99 | support parallel fusion | 5 years ago |
| yujianfeng | 266e960acb | Not do cse for the nodes set recomputed before recompute pass | 5 years ago |
| mindspore-ci-bot | f8f6421459 | !10968 Add dynamic shape support for the operator Concat | 5 years ago |
| weiyang | 4029b411c9 | for switch layer | 5 years ago |
| hedongdong | 8241dfa443 | Add dynamic shape support for the operator Concat | 5 years ago |
| mindspore-ci-bot | 2ea8527de3 | !11314 add cache embedding for wide&deep model | 5 years ago |
| fangzehua | f97e19f23f | add cache pass | 5 years ago |
| yuchaojie | 1932d87a26 | update some op's attr name | 5 years ago |
| yuchaojie | b51b3a6764 | update Pool's attr kernel_size, pad_mode | 5 years ago |
| zhouyuanshen | 26f6daa850 | add new op instancenorm2d | 5 years ago |
| mindspore-ci-bot | 92a85d1061 | !11075 dynamic op re primitive when infer | 5 years ago |
| liubuyu | 39cc9e70cd | dynamic op re primitive when infer | 5 years ago |
| yanzhenxiang2020 | b8b608f672 | fix shape of CTCGreedyDecoder | 5 years ago |
| liubuyu | 119c7010a4 | insert reformat op | 5 years ago |
| mindspore-ci-bot | 6d51fc558f | !10391 enable loop sink when no getnext in execution orders | 5 years ago |
| laiyongqiang | d417dddb24 | enable loop sink when no getnext in execution orders | 5 years ago |
| fangzehua | 4da4c0fc55 | add dynamic assign, pad_and_shift kernel | 5 years ago |
| lizhenyu | 4269dcece5 | ps cache support save checkpoint | 5 years ago |
| mindspore-ci-bot | ffe61081d3 | !10189 fix shape type error when dynamic_kernel shape type is compute_depend | 5 years ago |
| wilfChen | 09e10e18bb | momentum weightdecay fusion | 5 years ago |
| liubuyu | 4d75d7b992 | fix shape type error | 5 years ago |
| mindspore-ci-bot | d8a64b4ac4 | !9796 Add SpaceToDepth fission pass to fix bug when data type is float16. | 5 years ago |
| liuxiao93 | 2bbd97d334 | Add SpaceToDepth fission pass. | 5 years ago |
| jjfeing | 1984cf8e20 | unify mindir | 5 years ago |
| mindspore-ci-bot | be4e91339f | !9661 gpu relu optimize | 5 years ago |
| wilfChen | c1d3bd2160 | relu optimize | 5 years ago |
| zhouyuanshen | e9aca01620 | add support to reduceAny and reduceAll on gpu | 5 years ago |
| mindspore-ci-bot | 32444fbbd5 | !8870 hccl send receive op | 5 years ago |
| liubuyu | e3fa342d72 | support 3d format | 5 years ago |
| baihuawei | 7d09dff880 | add hccl send recv | 5 years ago |
| TFbunny | 5e19a642f9 | fix and add testcase for dynamic shape scatteradd/update transpose | 5 years ago |
| mindspore-ci-bot | c78683a411 | !8981 gatherv2 pad optimizer in dynamic shape scene | 5 years ago |
| yao_yf | 444cb99b40 | gather_v2 pad optimizer pass | 5 years ago |
| liuxiao93 | 584e241e29 | Adapt ops LinSpace for Ascend. | 5 years ago |
| lizhenyu | 094f0b2a07 | bugfix: fused batch norm op's input channel nums should be a multiple of 4 | 5 years ago |
| fangzehua | 69ce58425d | fix reshape dynamic and emb | 5 years ago |
| LianLiguang | bb6148661f | change mixedprecision of pynative | 5 years ago |
| liuxiao93 | d471ac491e | Adapt DynamicGRUV2 forward for Ascend new backend. | 5 years ago |
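A listing in this shape (author, short hash, subject, relative age) can be reproduced locally from any clone with `git log` pretty-format placeholders. The sketch below uses a throwaway repository rather than assuming a MindSpore checkout, so the author name and commit subject are placeholders, not values from the table above.

```shell
# Build a throwaway repo with one commit, then print its log in
# roughly the same columns as the listing: author, short SHA, subject, age.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "Add BCEWithLogitsLoss"
# %an = author name, %h = abbreviated hash, %s = subject, %ar = relative date
git log --format='%an %h %s %ar'
```

The `%ar` placeholder yields the same "5 years ago" style of relative date shown in the table; swap in `%ad` with `--date=short` for absolute dates.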