29 Commits (6fcd6cab684bd2d2ae2da5613efa3aa4e2cd7d5a)

Author SHA1 Message Date
  Ziyan 2a752f24bf enable not fully use opt shard 5 years ago
  Xiaoda Zhang 5fecfe92a6 code style warnings fixing 4 years ago
  yao_yf 21276408b8 parallel virtual_out_ops 5 years ago
  yao_yf 17354e3c4e fix find nodes with param 4 years ago
  yangzhenzhang bcd2ecc403 check layouts for shared parameter 4 years ago
  yangzhenzhang a70d616841 mini step grad accumulation 5 years ago
  yangzhenzhang 7303c3d3b8 add group ckpt 5 years ago
  yangzhenzhang 9da3f9bec9 mini step grad accumulation 5 years ago
  yao_yf 19fe28cb9b change strategys of last nodes in eval/predict at auto parallel mode 5 years ago
  Xiaoda Zhang 14d4926cf0 simplifying step-auto-parallel 5 years ago
  lichenever 78e131cf15 pipeline_split adapt parallel 5 years ago
  Yi Huaijie d7faa77b5e support int64 shape 5 years ago
  lichenever 7c7006f347 fix bug if input not used 5 years ago
  Ziyan c33f2cd796 fix auto optimizer weight shard 5 years ago
  yangzhenzhang 92d02b7aff add recursion limit 5 years ago
  yao_yf 65d8e63580 set last node data parallel or repeat calculate in eval/predict 5 years ago
  Ziyan 069318899a refactor get cnode strategy 5 years ago
  Xiaoda Zhang fba2bfeb54 overwrite strategies for star graph structure 5 years ago
  Ziyan ddc0113058 enable parallel optimizer in auto parallel 5 years ago
  lichenever d4bba3f1d2 fix_auto_parallel_find_loss_bug 5 years ago
  lichenever 6b2a9de09f fix auto parallel mutigrpah bug 5 years ago
  yao_yf 05c003ae6b origin/semi_auto_parallel_reshape_parameter_has_another_user 5 years ago
  yangzhenzhang fbda03bbcc check parameter split 5 years ago
  yao_yf eeede168fa wide_and_deep merge ckpt in eval 5 years ago
  zhousiyi d0e58dd765 remove ccsrc/common.h 5 years ago
  yangzhenzhang f4bb43bbaf add concat op 5 years ago
  yao_yf 60a9fb0001 add_tensor_layout_in_stra_ckpt 5 years ago
  liubuyu 76dc80e7b7 Unified code style 5 years ago
  liubuyu 43c79eb853 mindspore path adjust 5 years ago