144 Commits (master)

Author SHA1 Message Date
  chenfei 3872e29c3b add ut for tuple trans 4 years ago
  huanghui 1077601e26 add cpp ut for jit_config 4 years ago
  wangrao124 9dfc703f7a change sparsetensor to cootensor 4 years ago
  huanghui f3ebb1a0e1 adapt some cpp ut for unused nodes eliminate 4 years ago
  yuchaojie 343f7dbe06 Add axis check and shape black list for ConfusionSoftmaxGradRule pass 4 years ago
  Margaret_wangrui f9a384456a Add the check of function return None. 4 years ago
  zhoufeng ecae690a19 Revert "fix same node is used by two comm op" 4 years ago
  Zhang Qinghua a137fa1d0b Optimize the Executors routines. 4 years ago
  i-robot 32281f84e7 !19000 update LayerNormGrad split pass to V2 4 years ago
  yuchaojie 1d1490df0b update LayerNormGrad split pass to V2 4 years ago
  He Wei c9ecb27db8 Change monad.py as internal usage only 4 years ago
  ms_yan 36a8886ca2 Revert "[feat] [assistant] [I3T96T] add new Dataset operator CMUARCTICDataset" 4 years ago
  djc 4e6f7dc97d [feat] [assistant] [I3T96X] add new Dataset operator LibriSpeechDataset 4 years ago
  zhoufeng 03a56f2bb0 alltoall exception handle 4 years ago
  hwjiaorui e498c96a20 tbe json creator UT 4 years ago
  zhoufeng 2143241092 change neighbor exchange to all to all 4 years ago
  laiyongqiang 1533435015 replace memcpy_async with tensor move 4 years ago
  buxue d830a2b55a optimize exception mode when using undefined name in if, for and while 5 years ago
  dingpeifei 87e41aaeee IR operators of GPU and CPU are unified as batchnorm 5 years ago
  chenfei fc335daa30 get all j node and then expand them 5 years ago
  l00591931 bbdb050fc7 Change switch to Switch 5 years ago
  l00591931 680324f225 Change make tuple in core.ops 5 years ago
  yuchaojie 6d195f340c add SyncBatchNorm 5 years ago
  He Wei 7d9a783993 [auto-monad] Support side-effects by auto-monad 5 years ago
  mindspore-ci-bot aebe263dce !11895 unify mindir for different backend: the output num of optimizer ops, the backward of concat 5 years ago
  jinyaohui 8022f9a6ed modify pack to stack 5 years ago
  wangnan39@huawei.com cd9173fdfd unify the output num of optimizer ops 5 years ago
  mindspore-ci-bot b189f177bb Change tuple_getitem to TupleGetItem and some other ops, merge from r1.1 to master 5 years ago
  l00591931 9ec100d069 Change TensorAdd to Add, from r1.1 to master 5 years ago
  yuchaojie b51b3a6764 update Pool's attr kernel_size, pad_mode 5 years ago
  lilei 9a45c4419c modify batch_normal 5 years ago
  chenhaozhe b3add83bf0 support const input in graph_ir convertor, add value inference in Concat 5 years ago
  Hoai Linh Tran 46f07efc31 Fix AdjustAllReduceMulAdd pass 5 years ago
  yujianfeng 4b77f6b53c Add AdamApplyOneWithDecayAssign fusion pass 5 years ago
  huanghui b8d7f6d77f add UnsortedSegmentSum fission pass 5 years ago
  yujianfeng 8a77751988 Add AdamApplyOneAssign and AdamApplyOneWithDecayAssign fusion pass 5 years ago
  panyifeng 34e50e5d6e fix cell bprop 5 years ago
  huanghui 30000fdb52 add ReduceMin fission pass 5 years ago
  laiyongqiang 2458431750 not eliminate memcpy when next node is graph output 5 years ago
  huanghui f1563d2d37 insert memcpy async if hccl op cascade 5 years ago
  mindspore-ci-bot 6f8863b65d !3198 synchronize latest Ascend software suite 18 Jul 2020, and merging branches 5 years ago
  yanghaoran 859acc6d2a synchronize latest Ascend software suite 18 Jul 2020, and merging branches 5 years ago
  yujianfeng fa0684d12d Add pack and concat fission pass 5 years ago
  mindspore-ci-bot 4a19e6b8cb !3114 add coo_tensor 5 years ago
  panyifeng 5a10383cc3 add coo_tensor 5 years ago
  yujianfeng 188d74f15e Remove transdata and cast for internal outputs 5 years ago
  changzherui f4cb445ea8 sync code for 0715 5 years ago
  mindspore-ci-bot a661545d49 !265 Synchronize Ascend software suite 07 Jul 2020 5 years ago
  jiangjinsheng 7f9bbfd338 add Conv1d ops 5 years ago
  mindspore-ci-bot 69ce4bf2b2 !2970 Del mindspore/model_zoo 5 years ago