355 Commits (f27a16c8f801469cd4aaa68ae5ec2c19c7849d57)

| Author | SHA1 | Message | Date |
| --- | --- | --- | --- |
| huangbingjian | 53b31abf12 | remove useless depend | 4 years ago |
| yangzhenzhang | ef0361a449 | fix bugs for conv2d | 4 years ago |
| Xiaoda Zhang | 4b4b3cdaf4 | add reduceany operator and extend onehot to multi-dimensions | 4 years ago |
| huangxinjing | 615d1a179d | Add transformer layer | 4 years ago |
| i-robot | 9f296c58d6 | !20960 [AutoParallel]Add replace graph for conv2d | 4 years ago |
| lichenever | a7f8024c29 | add_replace_graph_for_conv2d | 4 years ago |
| yao_yf | dc7dc7d3fa | dataset strategy set | 4 years ago |
| yangzhenzhang | 80e5cc0e52 | add parallel op for gatherd | 4 years ago |
| Xiaoda Zhang | bb5d4212f7 | enable All2All in infering redistribution ops | 4 years ago |
| lichenever | 3c7cfb7c08 | auto_parallel_support_control_flow | 4 years ago |
| i-robot | a7d40fc220 | !20520 [AutoParallel]Add op AllToAllv | 4 years ago |
| lichenever | 8c1998fd6b | add_op_AllToAllv | 4 years ago |
| i-robot | c9d3c1d346 | !20411 enable optimizer parallel for inference | 4 years ago |
| yangzhenzhang | b31cd27a08 | update check strategy for conv2d | 4 years ago |
| Ziyan | 1c9166e0a6 | remove restriction for opt shard in inference | 4 years ago |
| Xiaoda Zhang | 04381273b3 | Add the sharding propagation function: | 4 years ago |
| chenhaozhe | 086a871975 | Change Loss to LossBase | 4 years ago |
| lichenever | db8850a4a3 | pipeline_support_predict_master | 4 years ago |
| Ziyan | be1f5a43d7 | opt shard fit micro batch | 4 years ago |
| yangzhenzhang | 69acf757d0 | add parallel op for conv2d backprop input | 4 years ago |
| yangzhenzhang | 24370b5613 | add parallel op for maxpool | 4 years ago |
| yangzhenzhang | af0d28de48 | add parallel op for batchnorm | 4 years ago |
| i-robot | 85d860e6a2 | !16457 [AutoParallel]pipeline_split_adapt_master | 4 years ago |
| lichenever | db5d508356 | pipeline_split_adapt_master | 4 years ago |
| yangzhenzhang | 7a40741048 | add parallel operator for conv2d | 4 years ago |
| Ziyan | 95ac0f6d58 | fix optimizer weight shard config | 4 years ago |
| chenhaozhe | 9da8534396 | change _Loss to Loss | 4 years ago |
| mindspore-ci-bot | 1c8fda25ef | !16478 handle load op in step parallel | 4 years ago |
| mindspore-ci-bot | b45b63fc58 | !17239 add parallel gathernd test case | 4 years ago |
| Wan Hanyang | c51dff2634 | add parallel gathernd test case | 4 years ago |
| Wan Hanyang | 3ce521d78f | add parallel layernorm test case | 4 years ago |
| Ziyan | 4b17493e52 | handle load in step parallel | 4 years ago |
| yangzhenzhang | d711d98f07 | clean duplicate code | 4 years ago |
| yao_yf | 732d13ccff | parallel dropout support repeated compute | 4 years ago |
| yangzhenzhang | 6aa3859131 | modify check strategy for scatter update | 4 years ago |
| Ziyan | 2a752f24bf | enable not fully use opt shard | 5 years ago |
| yao_yf | e967f1939b | parallel envs variable check | 4 years ago |
| mindspore-ci-bot | 78fcdbc7c9 | !15790 modify scatter update op | 4 years ago |
| yangzhenzhang | 075f680a42 | modify scatter update op | 4 years ago |
| Xiaoda Zhang | aa52399200 | Making the Tile operator to have more parallel strategies | 5 years ago |
| yao_yf | 093ef784de | dont insert virtualoutput for scalar | 4 years ago |
| mindspore-ci-bot | 3cfd58e8e0 | !15643 insert virtual div only for first input of dropout do mask | 4 years ago |
| mindspore-ci-bot | 49d6c029a6 | !15542 split axis and batch for gather | 4 years ago |
| yangzhenzhang | 5828973978 | fix bug for dropout do mask | 4 years ago |
| yao_yf | 21276408b8 | parallel virtual_out_ops | 5 years ago |
| yangzhenzhang | 213922574e | split axis and batch for gatherv2 | 4 years ago |
| yangzhenzhang | c2ca2232c5 | add select op | 5 years ago |
| mindspore-ci-bot | 1c9d3c0aa0 | !15353 add parallel operator for scatter update | 4 years ago |
| mindspore-ci-bot | 0fd1726e79 | !15172 Clean GraphKernel's codes from frontend | 4 years ago |
| yangzhenzhang | 9cdd70433f | add scatterupdate op | 5 years ago |