309 Commits (b77c3f093dec015b7fc138ced677f1f7543bebcb)

Author SHA1 Message Date
  yangzhenzhang c2ca2232c5 add select op 4 years ago
  mindspore-ci-bot 1c9d3c0aa0 !15353 add parallel operator for scatter update 4 years ago
  mindspore-ci-bot 0fd1726e79 !15172 Clean GraphKernel's codes from frontend 4 years ago
  yangzhenzhang 9cdd70433f add scatterupdate op 4 years ago
  yangzhenzhang d070af122f add topk op 4 years ago
  dayschan 771e3f61f3 Clean GraphKernel's codes from frontend 4 years ago
  yangzhenzhang f9f5df368e add gathernd op 4 years ago
  yangzhenzhang bcd2ecc403 check layouts for shared parameter 4 years ago
  yao_yf a83fb3316b fix parallel timeout 4 years ago
  yao_yf 4d0635eabe set parallel communication init flag in parallel ut 4 years ago
  dingpeifei 87e41aaeee IR operators of GPU and CPU are unified as batchnorm 4 years ago
  mindspore-ci-bot 7454ac8ecd !13382 [PipelineSplit]change pipeline key word 4 years ago
  lichenever a2b2727ba8 change_pipeline_key_word 4 years ago
  LianLiguang 17b9758543 unify range ops 4 years ago
  mindspore-ci-bot 7ba21f8d8c !12900 Add communication parallel mode. 4 years ago
  liujunzhu 6541b96c40 Add communication parallel mode. 4 years ago
  Ziyan ec9793861f fix grad accu 4 years ago
  mindspore-ci-bot 7ff2b3b499 !12781 fix bug of amp bn cast 4 years ago
  caifubi a6959c2a13 fix bn cast bug 4 years ago
  yangzhenzhang a70d616841 mini step grad accumulation 4 years ago
  wangshuide2020 72e938eb06 change dimension of input for FusedBatchNormEx from 2D to 4D in test_two_matmul_batchnorm_ex. 4 years ago
  He Wei 7d9a783993 [auto-monad] Support side-effects by auto-monad 4 years ago
  jinyaohui 30a27b2adb modify Gelu, FastGelu to GeLU and FastGeLU 4 years ago
  mindspore-ci-bot 74652eb942 !12044 modify pack to stack 4 years ago
  jinyaohui 8022f9a6ed modify pack to stack 4 years ago
  yangzhenzhang 726ea32778 merge parameter slice compile graph only once 4 years ago
  l00591931 9ec100d069 Change TensorAdd to Add, from r1.1 to master 4 years ago
  mindspore-ci-bot 9fa0499fa0 Change GatherV2 to Gather r1.1 to master 4 years ago
  yangzhenzhang cbca482e59 delete useless parameter in pipeline parallel 4 years ago
  yangzhenzhang 7303c3d3b8 add group ckpt 4 years ago
  lilei 9a45c4419c modify batch_normal 4 years ago
  yangzhenzhang 9da3f9bec9 mini step grad accumulation 4 years ago
  mindspore-ci-bot 2e684df5b1 !10686 fix infer rank list typo and add testcase 4 years ago
  Ziyan 2c3b99ce91 fix infer rank list typo 4 years ago
  mindspore-ci-bot b67aaf6773 !9832 expose_allgather_fusion_to_users 4 years ago
  Ziyan bbf8ec82b9 expose allgather fusion interface to users 5 years ago
  ms_yan deb1e6e965 use from_numpy and add do_copy option 5 years ago
  Ziyan c5c905fdf5 add restriction for opt shard 5 years ago
  huangxinjing a8446af1ab Fix condition check 5 years ago
  jjfeing 1984cf8e20 unify mindir 5 years ago
  mindspore-ci-bot d5db8872fd !9834 Fix wrong input argument of Reshape for multi field embedding 5 years ago
  huangxinjing 996ee72c50 Fix embedding layer 5 years ago
  yao_yf 19fe28cb9b change strategies of last nodes in eval/predict at auto parallel mode 5 years ago
  Xiaoda Zhang e78228603b move parallel-related black-list to core/ir, and fix the cloneCNode bug 5 years ago
  mindspore-ci-bot ec3983b77d !9577 support distributed predict 5 years ago
  Ziyan e7e9dae54d support distributed predict 5 years ago
  Xiaoda Zhang 9a9e3a751e set cnode's fullname when cloning 5 years ago
  lichenever 818e920f02 fix_pipeline_split_param_shared_bug 5 years ago
  lichenever 78e131cf15 pipeline_split adapt parallel 5 years ago
  yangzhenzhang 7b33f3e2ac gatherv2 axis split repeated calculation 5 years ago