141 Commits (a17f76dd1d83728cdc8ffcb52de694dfd3fcf12e)

Author | SHA1 | Message | Date
liuxiao93 | 584e241e29 | Adapt ops LinSpace for Ascend. | 5 years ago
lizhenyu | 094f0b2a07 | bugfix:fused batch norm op's input channel nums should be a multiple of 4 | 5 years ago
fangzehua | 69ce58425d | fix reshape dynamic and emb | 5 years ago
LianLiguang | bb6148661f | change mixedprecision of pynative | 5 years ago
liuxiao93 | d471ac491e | Adapt DynamicGRUV2 forward for Ascend new backend. | 5 years ago
jjfeing | 3feffc7d62 | fix ubfusion bug | 5 years ago
mindspore-ci-bot | a5b0d13141 | !8079 support GNMT net fix dynamic rnn grad fission pass | 5 years ago
liubuyu | 662976a75d | dynamic rnn fission pass v2 | 5 years ago
liuxiao93 | 45d343257b | Add DynamicGRU. | 5 years ago
hwjiaorui | 3698b9fd54 | register proximal adagrad ds | 5 years ago
VectorSL | 509b25ef1e | gpu nhwc | 5 years ago
kswang | 74c7bdd471 | fix segmentfault with fused sparse ftrl | 5 years ago
kswang | ece27f313e | enable async run | 5 years ago
caifubi | d3b978147f | Ascend Dynamic Shape | 5 years ago
mindspore-ci-bot | 21c5607fca | !6971 cudnn inplace optimizer | 5 years ago
wilfChen | b420b6cda7 | cudnn inplace optimizer | 5 years ago
liubuyu | 8af3250477 | support dynamic_rnn and dynamic_rnn_grad op | 5 years ago
mindspore-ci-bot | 7b3873559f | !5883 support for frac_zn_lstm | 5 years ago
dayschan | 37a48f6aac | GraphKernel supports GPU | 5 years ago
liubuyu | 23a298ca81 | support new format frac_zn_lstm | 5 years ago
wuxuejian | bd527a331d | update aicpu proto and update module: graphengine | 5 years ago
wilfChen | 13dd31f56c | reorder fused optimizer | 5 years ago
mindspore-ci-bot | 1944b8e53b | !5612 Resnet50 pattern Fusion | 5 years ago
wilfChen | 5316061fa3 | gpu resnet50 fusion | 5 years ago
yujianfeng | 4b77f6b53c | Add AdamApplyOneWithDecayAssign fusion pass | 5 years ago
WilliamLian | 097f53bed9 | add attr for transdata node | 5 years ago
limingqi107 | 5b76e8f3d7 | gpu add format transform pass | 5 years ago
lizhenyu | 7ddddc41a9 | add FusedBatchNoramEx gpu kernel | 5 years ago
gukecai | 66e7b02b4b | independent stream parallel | 5 years ago
mindspore-ci-bot | 3fb58fcbe4 | !4585 add gpu nccl broadcast | 5 years ago
gukecai | 6362e954df | Revert "independent stream parallel" | 5 years ago
baihuawei | b9ebd9c280 | add gpu nccl broadcast | 5 years ago
wuyongkang | 78611b5d5b | fix static check problems | 5 years ago
mindspore-ci-bot | 162a356fc8 | !4345 new ctrl for parallel and iteration num | 5 years ago
wenchunjiang | b24943d496 | adapte to remove inline | 5 years ago
huanghui | b8d7f6d77f | add UnsortedSegmentSum fission pass | 5 years ago
gukecai | adb6ff6c78 | independent stream parallel | 5 years ago
yujianfeng | 8a77751988 | Add AdamApplyOneAssign and AdamApplyOneWithDecayAssign fusion pass | 5 years ago
zhoufeng | 2f5cbfc26f | graph compile performance optimize | 5 years ago
zhoufeng | ca7154a548 | graph compile performance optimization | 5 years ago
yujianfeng | 47ab812edb | Insert concat for AllGather outputs | 5 years ago
ZPaC | 61551b85d8 | incremental feature for ps | 5 years ago
huanghui | b25e114840 | add op mapping attr for those pass worked in LeNet | 5 years ago
mindspore-ci-bot | bae2f964e5 | !3213 Unified code style | 5 years ago
liubuyu | 76dc80e7b7 | Unified code style | 5 years ago
mindspore-ci-bot | 6f8863b65d | !3198 synchronize latest Ascend software suite 18 Jul 2020, and merging branches | 5 years ago
yujianfeng | fa0684d12d | Add pack and concat fission pass | 5 years ago
changzherui | f4cb445ea8 | syn code for 0715 | 5 years ago
zhoufeng | 439d6d618f | Control flow not split graph | 5 years ago
ZPaC | f8c7ae7639 | Add front end expressions for PS kernels. | 5 years ago