166 Commits (60feffad207316a8e7347e1667331a38fa532a02)

| Author | SHA1 | Message | Date |
| --- | --- | --- | --- |
| yuchaojie | b51b3a6764 | update Pool's attr kernel_size, pad_mode | 4 years ago |
| zhouyuanshen | 26f6daa850 | add new op instancenorm2d | 4 years ago |
| mindspore-ci-bot | 92a85d1061 | !11075 dynamic op re primitive when infer | 4 years ago |
| liubuyu | 39cc9e70cd | dynamic op re primitive when infer | 4 years ago |
| yanzhenxiang2020 | b8b608f672 | fix shape of CTCGreedyDecoder | 5 years ago |
| liubuyu | 119c7010a4 | insert reformat op | 5 years ago |
| mindspore-ci-bot | 6d51fc558f | !10391 enable loop sink when no getnext in execution orders | 5 years ago |
| laiyongqiang | d417dddb24 | enable loop sink when no getnext in execution orders | 5 years ago |
| fangzehua | 4da4c0fc55 | add dynamic assign, pad_and_shift kernel | 5 years ago |
| lizhenyu | 4269dcece5 | ps cache support save checkpoint | 5 years ago |
| mindspore-ci-bot | ffe61081d3 | !10189 fix shape type error when dynamic_kernel shape type is compute_depend | 5 years ago |
| wilfChen | 09e10e18bb | momentum weightdecay fusion | 5 years ago |
| liubuyu | 4d75d7b992 | fix shape type error | 5 years ago |
| mindspore-ci-bot | d8a64b4ac4 | !9796 Add SpaceToDepth fission pass to fix bug when data type is float16. | 5 years ago |
| liuxiao93 | 2bbd97d334 | Add SpaceToDepth fission pass. | 5 years ago |
| jjfeing | 1984cf8e20 | unify mindir | 5 years ago |
| mindspore-ci-bot | be4e91339f | !9661 gpu relu optimize | 5 years ago |
| wilfChen | c1d3bd2160 | relu optimize | 5 years ago |
| zhouyuanshen | e9aca01620 | add support to reduceAny and reduceAll on gpu | 5 years ago |
| mindspore-ci-bot | 32444fbbd5 | !8870 hccl send receive op | 5 years ago |
| liubuyu | e3fa342d72 | support 3d format | 5 years ago |
| baihuawei | 7d09dff880 | add hccl send recv | 5 years ago |
| TFbunny | 5e19a642f9 | fix and add testcase for dynamic shape scatteradd/update transpose | 5 years ago |
| mindspore-ci-bot | c78683a411 | !8981 gatherv2 pad optimizer in dynamic shape scene | 5 years ago |
| yao_yf | 444cb99b40 | gather_v2 pad optimizer pass | 5 years ago |
| liuxiao93 | 584e241e29 | Adapt ops LinSpace for Ascend. | 5 years ago |
| lizhenyu | 094f0b2a07 | bugfix:fused batch norm op's input channel nums should be a multiple of 4 | 5 years ago |
| fangzehua | 69ce58425d | fix reshape dynamic and emb | 5 years ago |
| LianLiguang | bb6148661f | change mixedprecision of pynative | 5 years ago |
| liuxiao93 | d471ac491e | Adapt DynamicGRUV2 forward for Ascend new backend. | 5 years ago |
| jjfeing | 3feffc7d62 | fix ubfusion bug | 5 years ago |
| mindspore-ci-bot | a5b0d13141 | !8079 support GNMT net fix dynamic rnn grad fission pass | 5 years ago |
| liubuyu | 662976a75d | dynamic rnn fission pass v2 | 5 years ago |
| liuxiao93 | 45d343257b | Add DynamicGRU. | 5 years ago |
| hwjiaorui | 3698b9fd54 | register proximal adagrad ds | 5 years ago |
| VectorSL | 509b25ef1e | gpu nhwc | 5 years ago |
| kswang | 74c7bdd471 | fix segmentfault with fused sparse ftrl | 5 years ago |
| kswang | ece27f313e | enable async run | 5 years ago |
| caifubi | d3b978147f | Ascend Dynamic Shape | 5 years ago |
| mindspore-ci-bot | 21c5607fca | !6971 cudnn inplace optimizer | 5 years ago |
| wilfChen | b420b6cda7 | cudnn inplace optimizer | 5 years ago |
| liubuyu | 8af3250477 | support dynamic_rnn and dynamic_rnn_grad op | 5 years ago |
| mindspore-ci-bot | 7b3873559f | !5883 support for frac_zn_lstm | 5 years ago |
| dayschan | 37a48f6aac | GraphKernel supports GPU | 5 years ago |
| liubuyu | 23a298ca81 | support new format frac_zn_lstm | 5 years ago |
| wuxuejian | bd527a331d | update aicpu proto and update module: graphengine | 5 years ago |
| wilfChen | 13dd31f56c | reorder fused optimizer | 5 years ago |
| mindspore-ci-bot | 1944b8e53b | !5612 Resnet50 pattern Fusion | 5 years ago |
| wilfChen | 5316061fa3 | gpu resnet50 fusion | 5 years ago |
| yujianfeng | 4b77f6b53c | Add AdamApplyOneWithDecayAssign fusion pass | 5 years ago |