418 Commits (b6035cf1a135d130a3d2cc229ffe449bbef4294a)

Author SHA1 Message Date
  i-robot 415275ae17 !21805 support adafactor model parallel 4 years ago
  i-robot 0d839fa7c6 !21809 Improved Transformer Structure and Add Args Check 4 years ago
  i-robot a77a0b968d !21761 comm_recompute_interface. 4 years ago
  yangzhenzhang 7ca64d2235 auto parallel support adafactor opt 4 years ago
  yao_yf 5277b229be add cell comm recompute interface 4 years ago
  huangxinjing 18044aff0f Add docstring, eliminate attention mask, append the decoder return layer past to a tuple 4 years ago
  yao_yf a83bf73298 unify auto_parallel_context interface dataset_strategy 4 years ago
  yao_yf e233880e41 fix reshape depend reshape in auto parallel 4 years ago
  i-robot 63445ff6fd !21627 alltoall exception handle 4 years ago
  yangzhenzhang d18c813ee4 check strategy for conv2d 4 years ago
  zhoufeng 03a56f2bb0 alltoall exception handle 4 years ago
  i-robot 4aaa8126a0 !21528 Add Parallel Print Support 4 years ago
  huangxinjing 92bad162bd Add print 4 years ago
  huangbingjian 53b31abf12 remove useless depend 4 years ago
  yangzhenzhang ef0361a449 fix bugs for conv2d 4 years ago
  Xiaoda Zhang 4b4b3cdaf4 add reduceany operator and extend onehot to multi-dimensions 4 years ago
  huangxinjing 615d1a179d Add transformer layer 4 years ago
  i-robot 9f296c58d6 !20960 [AutoParallel]Add replace graph for conv2d 4 years ago
  lichenever a7f8024c29 add_replace_graph_for_conv2d 4 years ago
  yao_yf dc7dc7d3fa dataset strategy set 4 years ago
  yangzhenzhang 80e5cc0e52 add parallel op for gatherd 4 years ago
  Xiaoda Zhang bb5d4212f7 enable All2All in inferring redistribution ops 4 years ago
  lichenever 3c7cfb7c08 auto_parallel_support_control_flow 4 years ago
  i-robot a7d40fc220 !20520 [AutoParallel]Add op AllToAllv 4 years ago
  lichenever 8c1998fd6b add_op_AllToAllv 4 years ago
  i-robot c9d3c1d346 !20411 enable optimizer parallel for inference 4 years ago
  yangzhenzhang b31cd27a08 update check strategy for conv2d 4 years ago
  Ziyan 1c9166e0a6 remove restriction for opt shard in inference 4 years ago
  Xiaoda Zhang 04381273b3 Add the sharding propagation function 4 years ago
  chenhaozhe 086a871975 Change Loss to LossBase 4 years ago
  lichenever db8850a4a3 pipeline_support_predict_master 4 years ago
  Ziyan be1f5a43d7 opt shard fit micro batch 4 years ago
  yangzhenzhang 69acf757d0 add parallel op for conv2d backprop input 4 years ago
  yangzhenzhang 24370b5613 add parallel op for maxpool 4 years ago
  yangzhenzhang af0d28de48 add parallel op for batchnorm 4 years ago
  i-robot 85d860e6a2 !16457 [AutoParallel]pipeline_split_adapt_master 4 years ago
  lichenever db5d508356 pipeline_split_adapt_master 4 years ago
  yangzhenzhang 7a40741048 add parallel operator for conv2d 4 years ago
  Ziyan 95ac0f6d58 fix optimizer weight shard config 4 years ago
  chenhaozhe 9da8534396 change _Loss to Loss 4 years ago
  mindspore-ci-bot 1c8fda25ef !16478 handle load op in step parallel 4 years ago
  mindspore-ci-bot b45b63fc58 !17239 add parallel gathernd test case 4 years ago
  Wan Hanyang c51dff2634 add parallel gathernd test case 4 years ago
  Wan Hanyang 3ce521d78f add parallel layernorm test case 4 years ago
  Ziyan 4b17493e52 handle load in step parallel 4 years ago
  yangzhenzhang d711d98f07 clean duplicate code 4 years ago
  yao_yf 732d13ccff parallel dropout support repeated compute 4 years ago
  yangzhenzhang 6aa3859131 modify check strategy for scatter update 4 years ago
  Ziyan 2a752f24bf enable not fully using opt shard 5 years ago
  yao_yf e967f1939b parallel env variable check 4 years ago