104 Commits (117b35eb49210f076ecb7bae6e7bbd4ba498af2d)

Author SHA1 Message Date
  laiyongqiang 1533435015 replace memcpy_async with tensor move 4 years ago
  dingpeifei 3c9d8cb073 Add a pass for the input and output of the batchnorm backward operator on the Ascend platform in PyNative mode 4 years ago
  dingpeifei 87e41aaeee IR operators of GPU and CPU are unified as batchnorm 4 years ago
  liubuyu 518818fbef reshape type for 3d nodes 4 years ago
  yuchaojie 6d195f340c add SyncBatchNorm 4 years ago
  mindspore-ci-bot aebe263dce !11895 unify mindir for different backends: the output num of optimizer ops, the backward of concat 4 years ago
  jinyaohui 8022f9a6ed modify pack to stack 4 years ago
  wangnan39@huawei.com cd9173fdfd unify the output num of optimizer ops 4 years ago
  shenghong96 49144fde37 fix UT of test_topk_no_split 4 years ago
  xsmq a8259bae9b disable ut cpp case (test_topk_no_split) 4 years ago
  lilei 9a45c4419c modify batch_normal 4 years ago
  jjfeing 8fb7d11ecb fix topk help 4096 5 years ago
  huanghui 1c6c280da7 fix unsorted_segment_sum_fission pass 5 years ago
  Yi Huaijie d7faa77b5e support int64 shape 5 years ago
  huanghui b7519b7418 unify save_graphs_path 5 years ago
  mindspore-ci-bot c951d42c2c !6728 [Ascend][DynamicShape] Dynamic shape feature 5 years ago
  caifubi d3b978147f Ascend Dynamic Shape 5 years ago
  jjfeing 755863ebae insert memcpy for hccl nodes 5 years ago
  mindspore-ci-bot 3048240f16 !5508 Add AdamApplyOneWithDecayAssign fusion pass 5 years ago
  fary86 fcbb3e0edc Refactor ms_context implementation 5 years ago
  yujianfeng 4b77f6b53c Add AdamApplyOneWithDecayAssign fusion pass 5 years ago
  yujianfeng e688e1df32 Fix remove internal output for unique device target 5 years ago
  Wei Luning c1c30a44f1 rename param_value -> param_info 5 years ago
  huanghui b8d7f6d77f add UnsortedSegmentSum fission pass 5 years ago
  yujianfeng 8a77751988 Add AdamApplyOneAssign and AdamApplyOneWithDecayAssign fusion pass 5 years ago
  mindspore-ci-bot b045f47428 !3983 Add ReduceMin fission pass 5 years ago
  huanghui 30000fdb52 add ReduceMin fission pass 5 years ago
  liubuyu d81862a916 decoupling core and context 5 years ago
  Wei Luning a05c38bb63 make python Parameter inherit from Tensor 5 years ago
  WilliamLian 0179724dcd split unsupported transdata into two transdata ops: special format -> default, then default -> special 5 years ago
  yujianfeng 4d18e9ec35 Fix internal multiple outputs check 5 years ago
  huanghui f1563d2d37 insert memcpy async if hccl ops are cascaded 5 years ago
  mindspore-ci-bot 6f8863b65d !3198 synchronize latest Ascend software suite 18 Jul 2020, and merging branches 5 years ago
  yanghaoran 859acc6d2a synchronize latest Ascend software suite 18 Jul 2020, and merging branches 5 years ago
  yujianfeng fa0684d12d Add pack and concat fission pass 5 years ago
  yujianfeng 188d74f15e Remove transdata and cast for internal outputs 5 years ago
  changzherui f4cb445ea8 sync code for 0715 5 years ago
  liubuyu 43c79eb853 mindspore path adjust 5 years ago
  huanghui 3eaf663545 add tensor scatter update fission pass 5 years ago
  yujianfeng 24f6b9d77e Add input2output pass 5 years ago
  gong chen a6dfa281ea Init GraphKernel. 5 years ago
  yujianfeng 7ad877a948 Add Split fission pass 5 years ago
  huanghui 4acb61d59d code review fix for buffer fusion 5 years ago
  huanghui 118496b3ec enhance insert memcpy 5 years ago
  WilliamLian 9808e47663 change checkAicpu to CheckAICPU & add a check-scalar function to check whether the input or output is a scalar 5 years ago
  huanghui b4c0ed4b36 add single batchnorm fission pass 5 years ago
  chujinjin 7465abc798 optimize transdata for pynative 5 years ago
  mindspore-ci-bot c51d90d84e !1767 Move LayerNormGrad split pass ahead of kernel select 5 years ago
  yujianfeng e87ac6525e Add batch norm fusion pattern for mix precision 5 years ago
  huanghui cf87218fb7 place layernormgrad split pass before kernel select 5 years ago