316 Commits (cbca36d723a1d3cba0e58cc2362f0735d77312ae)

Author SHA1 Message Date
  Xiaoda Zhang aa52399200 Make the Tile operator support more parallel strategies 4 years ago
  yao_yf 093ef784de don't insert virtualoutput for scalar 4 years ago
  mindspore-ci-bot 3cfd58e8e0 !15643 insert virtual div only for first input of dropout do mask 4 years ago
  mindspore-ci-bot 49d6c029a6 !15542 split axis and batch for gather 4 years ago
  yangzhenzhang 5828973978 fix bug for dropout do mask 4 years ago
  yao_yf 21276408b8 parallel virtual_out_ops 4 years ago
  yangzhenzhang 213922574e split axis and batch for gatherv2 4 years ago
  yangzhenzhang c2ca2232c5 add select op 4 years ago
  mindspore-ci-bot 1c9d3c0aa0 !15353 add parallel operator for scatter update 4 years ago
  mindspore-ci-bot 0fd1726e79 !15172 Clean GraphKernel's codes from frontend 4 years ago
  yangzhenzhang 9cdd70433f add scatterupdate op 4 years ago
  yangzhenzhang d070af122f add topk op 4 years ago
  dayschan 771e3f61f3 Clean GraphKernel's codes from frontend 4 years ago
  yangzhenzhang f9f5df368e add gathernd op 4 years ago
  yangzhenzhang bcd2ecc403 check layouts for shared parameter 4 years ago
  yao_yf a83fb3316b fix parallel timeout 4 years ago
  yao_yf 4d0635eabe set parallel communication init flag in parallel ut 4 years ago
  dingpeifei 87e41aaeee IR operators of GPU and CPU are unified as batchnorm 4 years ago
  mindspore-ci-bot 7454ac8ecd !13382 [PipelineSplit]change pipeline key word 4 years ago
  lichenever a2b2727ba8 change_pipeline_key_word 4 years ago
  LianLiguang 17b9758543 unify range ops 4 years ago
  mindspore-ci-bot 7ba21f8d8c !12900 Add communication parallel mode. 4 years ago
  liujunzhu 6541b96c40 Add communication parallel mode. 4 years ago
  Ziyan ec9793861f fix grad accu 4 years ago
  mindspore-ci-bot 7ff2b3b499 !12781 fix bug of amp bn cast 4 years ago
  caifubi a6959c2a13 fix bn cast bug 4 years ago
  yangzhenzhang a70d616841 mini step grad accumulation 5 years ago
  wangshuide2020 72e938eb06 change dimension of input for FusedBatchNormEx from 2D to 4D in test_two_matmul_batchnorm_ex. 4 years ago
  He Wei 7d9a783993 [auto-monad] Support side-effects by auto-monad 4 years ago
  jinyaohui 30a27b2adb modify Gelu, FastGelu to GeLU and FastGeLU 4 years ago
  mindspore-ci-bot 74652eb942 !12044 modify pack to stack 4 years ago
  jinyaohui 8022f9a6ed modify pack to stack 4 years ago
  yangzhenzhang 726ea32778 merge parameter slice compile graph only once 4 years ago
  l00591931 9ec100d069 Change TensorAdd to Add, from r1.1 to master 4 years ago
  mindspore-ci-bot 9fa0499fa0 Change GatherV2 to Gather from r1.1 to master 4 years ago
  yangzhenzhang cbca482e59 delete useless parameter in pipeline parallel 4 years ago
  yangzhenzhang 7303c3d3b8 add group ckpt 4 years ago
  lilei 9a45c4419c modify batch_normal 4 years ago
  yangzhenzhang 9da3f9bec9 mini step grad accumulation 5 years ago
  mindspore-ci-bot 2e684df5b1 !10686 fix infer rank list typo and add testcase 5 years ago
  Ziyan 2c3b99ce91 fix infer rank list typo 5 years ago
  mindspore-ci-bot b67aaf6773 !9832 expose_allgather_fusion_to_users 5 years ago
  Ziyan bbf8ec82b9 expose allgather fusion interface to users 5 years ago
  ms_yan deb1e6e965 use from_numpy and add do_copy option 5 years ago
  Ziyan c5c905fdf5 add restriction for opt shard 5 years ago
  huangxinjing a8446af1ab Fix condition check 5 years ago
  jjfeing 1984cf8e20 unify mindir 5 years ago
  mindspore-ci-bot d5db8872fd !9834 Fix wrong input argument of Reshape for multi field embedding 5 years ago
  huangxinjing 996ee72c50 Fix embedding layer 5 years ago
  yao_yf 19fe28cb9b change strategies of last nodes in eval/predict at auto parallel mode 5 years ago