384 Commits (062dcaaaa17664f4be6fb191d97f85c19eec8840)

Author SHA1 Message Date
  Zhang Qinghua a137fa1d0b Optimize the Executors routines. 4 years ago
  zhihenghu ce12c02343 Add Sparse Attention 4 years ago
  i-robot e6e1f37ae4 !22346 [Core] Fix the bug of scope setting when cloning nodes 4 years ago
  i-robot 8d00a8d803 !22360 Fix Transformer Mirror Error 4 years ago
  Xiaoda Zhang b2703879c6 fix the scope setting error when cloning nodes 4 years ago
  i-robot edcbb68d71 !22386 fix neighborexchange empty input case 4 years ago
  zhoufeng e5a1582e4b fix neighborexchange empty input case 4 years ago
  huangxinjing 62496d75f3 Reduce the exposed interface 4 years ago
  yangzhenzhang 0b9b2a9458 add test cases 4 years ago
  lichenever 5812076512 Refactor_part_of_pipeline 4 years ago
  yangzhenzhang f1afaeac5a modify check strategy for conv2d 4 years ago
  ms_yan 36a8886ca2 Revert "[feat] [assistant] [I3T96T] add new Dataset operator CMUARCTICDataset" 4 years ago
  djc b077aa1cab [feat] [assistant] [I3T96T] add new Dataset operator CMUARCTICDataset 4 years ago
  djc 4e6f7dc97d [feat] [assistant] [I3T96X] add new Dataset operator LibriSpeechDataset 4 years ago
  huangxinjing d777742904 1. Move the class to mindspore.parallel, support activation sharding 4 years ago
  i-robot dde05c36b8 !21551 auto_parallel_context interface dataset_strategy union 4 years ago
  i-robot 415275ae17 !21805 support adafactor model parallel 4 years ago
  i-robot 0d839fa7c6 !21809 Improved Transformer Structure and Add Args Check 4 years ago
  i-robot a77a0b968d !21761 comm_recompute_interface. 4 years ago
  yangzhenzhang 7ca64d2235 auto parallel support adafactor opt 4 years ago
  yao_yf 5277b229be add cell comm recompute interface 4 years ago
  huangxinjing 18044aff0f 1. Add docstring, eliminate attention mask, tuple append the decoder return layer past 4 years ago
  yao_yf a83bf73298 union auto_parallel_context interface dataset_strategy 4 years ago
  yao_yf e233880e41 fix reshape depend reshape in auto parallel 4 years ago
  i-robot 63445ff6fd !21627 alltoall exception handle 4 years ago
  yangzhenzhang d18c813ee4 check strategy for conv2d 4 years ago
  zhoufeng 03a56f2bb0 alltoall exception handle 4 years ago
  i-robot 4aaa8126a0 !21528 Add Parallel Print Support 4 years ago
  huangxinjing 92bad162bd Add print 4 years ago
  huangbingjian 53b31abf12 remove useless depend 4 years ago
  yangzhenzhang ef0361a449 fix bugs for conv2d 4 years ago
  Xiaoda Zhang 4b4b3cdaf4 add reduceany operator and extend onehot to multi-dimensions 4 years ago
  huangxinjing 615d1a179d Add transformer layer 4 years ago
  i-robot 9f296c58d6 !20960 [AutoParallel]Add replace graph for conv2d 4 years ago
  lichenever a7f8024c29 add_replace_graph_for_conv2d 4 years ago
  yao_yf dc7dc7d3fa dataset strategy set 4 years ago
  yangzhenzhang 80e5cc0e52 add parallel op for gatherd 4 years ago
  Xiaoda Zhang bb5d4212f7 enable All2All in infering redistribution ops 4 years ago
  lichenever 3c7cfb7c08 auto_parallel_support_control_flow 4 years ago
  i-robot a7d40fc220 !20520 [AutoParallel]Add op AllToAllv 4 years ago
  lichenever 8c1998fd6b add_op_AllToAllv 4 years ago
  i-robot c9d3c1d346 !20411 enable optimizer parallel for inference 4 years ago
  yangzhenzhang b31cd27a08 update check strategy for conv2d 4 years ago
  Ziyan 1c9166e0a6 remove restriction for opt shard in inference 4 years ago
  Xiaoda Zhang 04381273b3 Add the sharding propagation function: 4 years ago
  chenhaozhe 086a871975 Change Loss to LossBase 4 years ago
  lichenever db8850a4a3 pipeline_support_predict_master 4 years ago
  Ziyan be1f5a43d7 opt shard fit micro batch 4 years ago
  yangzhenzhang 69acf757d0 add parallel op for conv2d backprop input 4 years ago
  yangzhenzhang 24370b5613 add parallel op for maxpool 4 years ago