61 Commits (dfd894c19093e3fa878ff2b46e209fe1c1dbdc22)

Author | SHA1 | Message | Date
huangxinjing | f2d5f14e37 | Fix review bot | 5 years ago
yangzhenzhang | 278e82a849 | update pipeline parallel | 5 years ago
yangzhenzhang | d4d6c4beae | update get device list in parallel ops | 5 years ago
mindspore-ci-bot | 2bf165c0b4 | !8557 run cast before allgather in parallel optimzier | 5 years ago
yangzhenzhang | 0c2c76d037 | update get rank in parallel ops | 5 years ago
Ziyan | 0ddb754edb | run cast before parallel optimizer | 5 years ago
yangzhenzhang | 0a79ab82ae | add parallel ops | 5 years ago
Yi Huaijie | d7faa77b5e | support int64 shape | 5 years ago
lichenever | 7c7006f347 | fix bug if input not used | 5 years ago
Ziyan | c33f2cd796 | fix auto optimizer weight shard | 5 years ago
huangxinjing | bf5d21770a | Add UnsortedSegmentSum and UnosrtedSemenMin For Oparallel Implements | 5 years ago
mindspore-ci-bot | 78f795971b | !7818 add recursion limit for FindParallelCareNode | 5 years ago
mindspore-ci-bot | 08e6ac0b09 | !7805 support split ValueList | 5 years ago
Yi Huaijie | 3102c4ff8d | support split ValueList | 5 years ago
yangzhenzhang | 92d02b7aff | add recursion limit | 5 years ago
yao_yf | 65d8e63580 | set last node data parallel or repeat calculate in eval/predict | 5 years ago
Ziyan | 069318899a | refactor get cnode strategy | 5 years ago
Xiaoda Zhang | fba2bfeb54 | overwrite strategies for star graph structure | 5 years ago
Ziyan | adc92496e8 | disable comm fusion in parallel optimizer temp | 5 years ago
lichenever | cfffff2875 | add check for allreduce fusion | 5 years ago
Ziyan | ddc0113058 | enable parallel optimizer in auto parallel | 5 years ago
lichenever | 6dd2c75948 | fix_auto_parallel_find_loss_bug | 5 years ago
lichenever | 23a38aa1dc | semi_auto_support_gpt2 | 5 years ago
mindspore-ci-bot | 9bd34a1b29 | !6673 Add stage information for ops and strategy | 5 years ago
lichenever | e2c8a0bbc5 | support_gpt2_compile_graph | 5 years ago
huangxinjing | 4ef439e27b | Add stage information for ops and strategy | 5 years ago
Yi Huaijie | 6066b16838 | implement parallel Pack | 5 years ago
lichenever | d4bba3f1d2 | fix_auto_parallel_find_loss_bug | 5 years ago
mindspore-ci-bot | 754f2b774c | !6409 add batch parallel info black list | 5 years ago
yao_yf | b70204c080 | auto parallel context add notes and func mv | 5 years ago
Ziyan | 9e5248497b | add batch parallel info black list | 5 years ago
lichenever | 6b2a9de09f | fix auto parallel mutigrpah bug | 5 years ago
Yi Huaijie | 0d478130f6 | fix code check error | 5 years ago
yao_yf | d4cfe55c04 | rename mirror_mean to gradients_mean | 5 years ago
mindspore-ci-bot | c064c01b6b | !5729 [AutoParallel]Add FuseBatchNormEx op | 5 years ago
lichenever | d22f506431 | add BatchNormEx op | 5 years ago
yao_yf | 05c003ae6b | origin/semi_auto_parallel_reshape_parameter_has_another_user | 5 years ago
yao_yf | 8f7aa5bd5a | auto parallel context modify | 5 years ago
yangzhenzhang | fbda03bbcc | check parameter split | 5 years ago
mindspore-ci-bot | 3725062582 | !5229 [AutoParallel]Fix CodeDex | 5 years ago
lichenever | 49aa4b7686 | fix codedex | 5 years ago
mindspore-ci-bot | 8eff6c96b4 | !5045 merge slice parameter checkpoints in wide&deep | 5 years ago
Wei Luning | 24a10225cf | change base class of ref to tensor in cpp | 5 years ago
yao_yf | eeede168fa | wide_and_deep merge ckpt in eval | 5 years ago
lichenever | 221a801395 | auto parallel support bert | 5 years ago
zhoufeng | 663278112f | optimize code compile performance | 5 years ago
Wei Luning | c1c30a44f1 | rename param_value -> param_info | 5 years ago
yangzhenzhang | f4bb43bbaf | add concat op | 5 years ago
yao_yf | 60a9fb0001 | add_tensor_layout_in_stra_ckpt | 5 years ago
Yi Huaijie | 80bdcab982 | temporarily cast between int64 and int32 to wait ME support int64 | 5 years ago