95 Commits (693b4cfdc8aaffd69014e94f5909115cd2b7bcff)

Author SHA1 Message Date
  yangzhenzhang 689e50a3d0 fix grad accu bug for unused parameter 5 years ago
  mindspore-ci-bot 29bf2909b2 !13105 insert mirror before load 5 years ago
  yangzhenzhang 6eadd241a0 insert mirror before load 5 years ago
  Ziyan 4109308e34 insert parallel optimizer once 5 years ago
  Ziyan ec9793861f fix grad accu 5 years ago
  mindspore-ci-bot 7fcce73c51 !12700 add grad accumulation combined with optimizer parallel 5 years ago
  chendongsheng db0a6f1e19 replace ps-lite 5 years ago
  yangzhenzhang a70d616841 mini step grad accumulation 5 years ago
  huangbingjian 0bbd95d7a0 modify CheckpointStrategy to adapt load operator 5 years ago
  mindspore-ci-bot 2c5d19260e !12434 fix operator_info is null 5 years ago
  mindspore-ci-bot da6e6728b1 !12515 fix shape_ptr is nullptr:Umonad should not get shape 5 years ago
  Margaret_wangrui 0aaa31764e Do not get shape for monad type 5 years ago
  Margaret_wangrui a5fa4918f5 fix operator_info is null 5 years ago
  yangzhenzhang 70aa0dc5e2 modify get output layout 5 years ago
  huangbingjian b56fc0c2af [auto-monad] Do not insert VirtualDiv after UpdateState 5 years ago
  He Wei 7d9a783993 [auto-monad] Support side-effects by auto-monad 5 years ago
  mindspore-ci-bot b189f177bb Change tuple_getitem to TupleGetItem and some other ops, merge from r1.1 to master 5 years ago
  mindspore-ci-bot 9fa0499fa0 Change GatherV2 to Gather r1.1 to master 5 years ago
  yangzhenzhang 7303c3d3b8 add group ckpt 5 years ago
  yangzhenzhang 9da3f9bec9 mini step grad accumulation 5 years ago
  Ziyan 98566ddc07 enable gradients mean in opt shard 5 years ago
  lichenever 39306d64fb opt_pipeline_split 5 years ago
  mindspore-ci-bot b67aaf6773 !9832 expose_allgather_fusion_to_users 5 years ago
  Ziyan bbf8ec82b9 expose allgather fusion interface to users 5 years ago
  lichenever 9595502278 optimizer_pipeline_split 5 years ago
  yao_yf 19fe28cb9b change strategies of last nodes in eval/predict at auto parallel mode 5 years ago
  Xiaoda Zhang e78228603b move parallel-related black-list to core/ir, and fix the cloneCNode bug 5 years ago
  lizhenyu e3f7ae61db add ps cache manager 5 years ago
  Xiaoda Zhang 14d4926cf0 simplifying step-auto-parallel 5 years ago
  mindspore-ci-bot 4bb7303463 !9424 [PipelineSplit]Fix Shared Parameter bug 5 years ago
  mindspore-ci-bot 6b9e402790 !9396 enable allgather fusion 5 years ago
  lichenever 818e920f02 fix_pipeline_split_param_shared_bug 5 years ago
  Ziyan e29f5c96cb enable_allgather_fusion 5 years ago
  lichenever 78e131cf15 pipeline_split adapt parallel 5 years ago
  huangxinjing f2d5f14e37 Fix review bot 5 years ago
  yangzhenzhang 278e82a849 update pipeline parallel 5 years ago
  yangzhenzhang d4d6c4beae update get device list in parallel ops 5 years ago
  mindspore-ci-bot 2bf165c0b4 !8557 run cast before allgather in parallel optimizer 5 years ago
  yangzhenzhang 0c2c76d037 update get rank in parallel ops 5 years ago
  Ziyan 0ddb754edb run cast before parallel optimizer 5 years ago
  yangzhenzhang 0a79ab82ae add parallel ops 5 years ago
  Yi Huaijie d7faa77b5e support int64 shape 5 years ago
  lichenever 7c7006f347 fix bug if input not used 5 years ago
  Ziyan c33f2cd796 fix auto optimizer weight shard 5 years ago
  huangxinjing bf5d21770a Add UnsortedSegmentSum and UnsortedSegmentMin for parallel implements 5 years ago
  mindspore-ci-bot 78f795971b !7818 add recursion limit for FindParallelCareNode 5 years ago
  mindspore-ci-bot 08e6ac0b09 !7805 support split ValueList 5 years ago
  Yi Huaijie 3102c4ff8d support split ValueList 5 years ago
  yangzhenzhang 92d02b7aff add recursion limit 5 years ago
  yao_yf 65d8e63580 set last node data parallel or repeat calculate in eval/predict 5 years ago