271 Commits (4e974fb2bc341e5ce53f33eb5508ffaa534e3956)

| Author | SHA1 | Message | Date |
|---|---|---|---|
| mindspore-ci-bot | 414184c184 | !5367 Check the parameter's split strategies if it has multiple users | 5 years ago |
| yao_yf | 07117e4dd4 | mv ParallelMode to context | 5 years ago |
| yangzhenzhang | fbda03bbcc | check parameter split | 5 years ago |
| mindspore-ci-bot | 66d6320b21 | !5224 Add test case about loss scale in parallel mode | 5 years ago |
| yangzhenzhang | 6ae5893681 | add test cases | 5 years ago |
| panyifeng | 1a54785fe2 | remove name arg from gradoperation | 5 years ago |
| mindspore-ci-bot | 7d4f481884 | !5017 remove internal interface in wide&deep | 5 years ago |
| mindspore-ci-bot | abe6b82138 | !5011 remove global grad ops | 5 years ago |
| yao_yf | a9a8e323b2 | remove internal interface in wide&deep | 5 years ago |
| mindspore-ci-bot | fc6eee3bda | !5019 raise RuntimeError when set different mode after Initializer created | 5 years ago |
| panyifeng | 637e812347 | remove global grad ops | 5 years ago |
| Yi Huaijie | 394be43492 | raise RuntimeError when set different mode after Initializer created | 5 years ago |
| Su Teng | e3ae23c939 | add parallel attention test | 5 years ago |
| mindspore-ci-bot | 3d06cbf987 | !4801 Must set or change parallel mode before any Initializer created | 5 years ago |
| Yi Huaijie | 89a4ebf8a1 | parallel mode must be set before create an initializer | 5 years ago |
| mindspore-ci-bot | 9ee144ea40 | !4744 [AutoParallel]Support bert | 5 years ago |
| lichenever | 221a801395 | auto parallel support bert | 5 years ago |
| yangzhenzhang | cda08f6a52 | concat 3 tensors in auto parallel mode | 5 years ago |
| mindspore-ci-bot | 2ae6365d77 | !4650 EmbeddingLookup support auto parallel | 5 years ago |
| yangzhenzhang | 6f6a8ae9f0 | embedding lookup auto parallel | 5 years ago |
| Yi Huaijie | 0f7ead5f14 | parameter slice init test all initializers | 5 years ago |
| yao_yf | cbb4363fa7 | remove to_full_tensor and load_inputs in exexute stage | 5 years ago |
| yangzhenzhang | 14c77c9f03 | update field split | 5 years ago |
| mindspore-ci-bot | 2db0290c49 | !4356 Add validation for field split | 5 years ago |
| yangzhenzhang | 4a0e6ff7fc | update field split | 5 years ago |
| yao_yf | e4de26d5bc | embeddinglookup wrap | 5 years ago |
| yangzhenzhang | f4bb43bbaf | add concat op | 5 years ago |
| lichenever | bfc96de1b9 | add dropout distributed op | 5 years ago |
| simson | 3617121ccf | revert modification of opt | 5 years ago |
| Xiaoda Zhang | d24a902afe | add a new graph operation in autoparallel | 5 years ago |
| mindspore-ci-bot | ab4c43007f | !3657 Add parallel operator for StridedSlice | 5 years ago |
| Ziyan | 98e2ee90de | fix optimizer parallel problems | 5 years ago |
| yangzhenzhang | 9aa84b3d14 | add strided slice op | 5 years ago |
| mindspore-ci-bot | 16079e6356 | !3472 [Auto parallel] Cost model for GPU | 5 years ago |
| mindspore-ci-bot | d4165671d9 | !3435 Add parallel operator for Tile | 5 years ago |
| Xiaoda Zhang | 9097b36950 | add resnet50 testcases for gpu | 5 years ago |
| lirongzhen1 | 51796aa624 | fix sparse feature bug for auto parallel | 5 years ago |
| yangzhenzhang | 6a6e2bd271 | add tile op | 5 years ago |
| panyifeng | 963bd67a60 | add sparse api docs | 5 years ago |
| panyifeng | 8a89f003eb | fix sparse related issues | 5 years ago |
| mindspore-ci-bot | 684ff4f46b | !3160 Rewrite tensor's `__bool__` for pynative mode | 5 years ago |
| simson | 5f77fbdd75 | Rewrite tensor's `__bool__` for pynative mode | 5 years ago |
| mindspore-ci-bot | 7f1ccc5f3b | !3311 add sparse feature test cases for auto parallel | 5 years ago |
| mindspore-ci-bot | bc20de741a | !3315 restore reshape ut | 5 years ago |
| yao_yf | 1d3a06a3b0 | recover reshape ut | 5 years ago |
| lirongzhen1 | 5d63c60135 | add sparse feature test cases for auto parallel | 5 years ago |
| wangnan39@huawei.com | 082433183d | uniform learning_rate behavior of optimizers | 5 years ago |
| anzhengqi | 008b91b2a1 | inject epoch ctrl op in the execution tree and send eos at the end of epoch | 5 years ago |
| liuxiao93 | 75881e5f2f | check input of BatchNorm is 4D. | 5 years ago |
| wangnan39@huawei.com | 86889c59cb | optimizer adapt IndexedSlices | 5 years ago |