301 Commits (f451625ea9c4a09a2ffc8da413224d0265c91067)

Author SHA1 Message Date
  yao_yf a83fb3316b fix parallel timeout 4 years ago
  yao_yf 4d0635eabe set parallel communication init flag in parallel ut 4 years ago
  dingpeifei 87e41aaeee IR operators of GPU and CPU are unified as batchnorm 4 years ago
  mindspore-ci-bot 7454ac8ecd !13382 [PipelineSplit]change pipeline key word 4 years ago
  lichenever a2b2727ba8 change_pipeline_key_word 4 years ago
  LianLiguang 17b9758543 unify range ops 4 years ago
  mindspore-ci-bot 7ba21f8d8c !12900 Add communication parallel mode. 4 years ago
  liujunzhu 6541b96c40 Add communication parallel mode. 4 years ago
  Ziyan ec9793861f fix grad accu 4 years ago
  mindspore-ci-bot 7ff2b3b499 !12781 fix bug of amp bn cast 4 years ago
  caifubi a6959c2a13 fix bn cast bug 4 years ago
  yangzhenzhang a70d616841 mini step grad accumulation 5 years ago
  wangshuide2020 72e938eb06 change dimension of input for FusedBatchNormEx from 2D to 4D in test_two_matmul_batchnorm_ex. 4 years ago
  He Wei 7d9a783993 [auto-monad] Support side-effects by auto-monad 4 years ago
  jinyaohui 30a27b2adb modify Gelu, FastGelu to GeLU and FastGeLU 4 years ago
  mindspore-ci-bot 74652eb942 !12044 modify pack to stack 4 years ago
  jinyaohui 8022f9a6ed modify pack to stack 4 years ago
  yangzhenzhang 726ea32778 merge parameter slice compile graph only once 4 years ago
  l00591931 9ec100d069 Change TensorAdd to Add, from r1.1 to master 4 years ago
  mindspore-ci-bot 9fa0499fa0 Change GatherV2 to Gather, from r1.1 to master 4 years ago
  yangzhenzhang cbca482e59 delete useless parameter in pipeline parallel 4 years ago
  yangzhenzhang 7303c3d3b8 add group ckpt 4 years ago
  lilei 9a45c4419c modify batch_normal 4 years ago
  yangzhenzhang 9da3f9bec9 mini step grad accumulation 5 years ago
  mindspore-ci-bot 2e684df5b1 !10686 fix infer rank list typo and add testcase 5 years ago
  Ziyan 2c3b99ce91 fix infer rank list typo 5 years ago
  mindspore-ci-bot b67aaf6773 !9832 expose_allgather_fusion_to_users 5 years ago
  Ziyan bbf8ec82b9 expose allgather fusion interface to users 5 years ago
  ms_yan deb1e6e965 use from_numpy and add do_copy option 5 years ago
  Ziyan c5c905fdf5 add restriction for opt shard 5 years ago
  huangxinjing a8446af1ab Fix condition check 5 years ago
  jjfeing 1984cf8e20 unify mindir 5 years ago
  mindspore-ci-bot d5db8872fd !9834 Fix wrong input argument of Reshape for multi field embedding 5 years ago
  huangxinjing 996ee72c50 Fix embedding layer 5 years ago
  yao_yf 19fe28cb9b change strategies of last nodes in eval/predict at auto parallel mode 5 years ago
  Xiaoda Zhang e78228603b move parallel-related black-list to core/ir, and fix the cloneCNode bug 5 years ago
  mindspore-ci-bot ec3983b77d !9577 support distributed predict 5 years ago
  Ziyan e7e9dae54d support distributed predict 5 years ago
  Xiaoda Zhang 9a9e3a751e set cnode's fullname when cloning 5 years ago
  lichenever 818e920f02 fix_pipeline_split_param_shared_bug 5 years ago
  lichenever 78e131cf15 pipeline_split adapt parallel 5 years ago
  yangzhenzhang 7b33f3e2ac gatherv2 axis split repeated calculation 5 years ago
  yangzhenzhang 7278f5c109 update_gatherv2_op 5 years ago
  Xiaoda Zhang c79e988b0d set fullname for reshape after reshape-elimination 5 years ago
  yao_yf 9cda064716 auto parallel predict 5 years ago
  mindspore-ci-bot 3d6d820612 !8154 Add nn.MultiFieldEmbedding for the embedding lookup operations 5 years ago
  mindspore-ci-bot d915fd9b96 !8592 change repeat_elements to a composite op 5 years ago
  huangxinjing b0deb7a289 Add dense embedding 5 years ago
  tom__chen a52cd685fc change repeat_element op to a composite op 5 years ago
  huangxinjing 89e7778497 Add UnsortedSegmentMax Operation 5 years ago