44 Commits (04bc2a938eef08dd1231a2a82f6d4e4e8dd258ea)

Author SHA1 Message Date
  Xiaoda Zhang 9f4b8a3cd1 changing the successive edges order in GetAliveSuccEdges() so that Triangle and Star Elimination can be merged into a particular node; adding some check information 5 years ago
  hongxing 2f7c5f7a59 fix edge bug 5 years ago
  hongxing 9c8e750c9e maximize strategy dynamically 5 years ago
  hongxing d84ccfe87f complete element-wise list 5 years ago
  hongxing 8f04adf1c3 feature : eliminate graph 5 years ago
  hongxing dc290d7959 support squeeze and reduce op 5 years ago
  Xiaoda Zhang a05aa21cc2 calculating PEAK memory cost in the inference phase 5 years ago
  mindspore-ci-bot 7dc31684b6 !1073 fix reshape tensor redistribution bug 5 years ago
  yao_yf b0921c15e9 reshape tensor_redistribution bug fix 5 years ago
  mindspore-ci-bot 95d4665db9 !1051 auto parallel reshape reconstruct 5 years ago
  mindspore-ci-bot 2dd97a0780 !1028 [AutoParallel] refined memory check & added odd num tensor shape support 5 years ago
  yao_yf 716def7c0a move InferStraByTensorInfo to tensor_info.h 5 years ago
  mindspore-ci-bot dd2062bf8d !1023 add_gatherv2_distributed_op 5 years ago
  lichenever 19a24b86ac add gatherv2 distributed op 5 years ago
  yao_yf f0bf438a55 reshape strategy search 5 years ago
  sheng 59bb014144 refine mem ctl & odd num ctl & fuse str writeback 5 years ago
  Xiaoda Zhang def8573275 implementing-searching-strategy-for-inference 5 years ago
  ch-l caac6bce5c adjustments w.r.t. distributed execution 5 years ago
  mindspore-ci-bot 69ab46e624 !727 [AutoParallel] complete cost for recursive programming 5 years ago
  ch-l 309060b1c2 complete cost models 5 years ago
  mindspore-ci-bot 5b3327d103 !746 reducescatter backward operator 5 years ago
  lirongzhen1 0b4648881b add reducescatter bprop 5 years ago
  Xiaoda Zhang e227415673 support-the-multiple-subgraphs-in-the-ANF 5 years ago
  ch-l f806b72447 use DeviceMemory for memory control 5 years ago
  zhoufeng c2b3360d69 update clang format rule 5 years ago
  mindspore-ci-bot 46acf23825 !405 [AutoParallel] Adapt rec-prog generator to new parser 5 years ago
  mindspore-ci-bot 5b6b1ad727 !394 [AutoParallel] Simplify rec-prog parser mechanism 5 years ago
  ch-l c71234f383 improve rec-prog str generator 5 years ago
  Ziyan 0d208e00bd Model ALLTOALL as a single operator in cost model; scale the ALLTOALL, 6 years ago
  Chong b1f5e44cd4 improve parser 5 years ago
  Xiaoda Zhang 79de8f4bdf Adjusting backward communication cost of some operators 5 years ago
  yangzhenzhang 6d522f0a4f add parallel op for layernorm 5 years ago
  Xiaoda Zhang ffb2cb03a4 Change 'NOT_FULLY_USE_DEVICES' to 'FULLY_USE_DEVICES' and make ALL-1 user-specified-strategy valid in auto-parallel 5 years ago
  Xiaoda Zhang 0ac50a19f5 Model the memory cost in auto-parallel. It is calculated by the output of operators, plus the parameters. Additionally, modify the graph-operations in auto_parallel to include memory_cost. 6 years ago
  c00425699 d62f560b50 add_bool_type_check_in_comm_op 6 years ago
  c00425699 c8cdb6b331 support distributed GatherV2 operator 6 years ago
  yangzhenzhang b34c0e7a17 add parallel op for dropoutdomask 6 years ago
  c00425699 b413638f23 refactor OperatorCostPtr in OperatorInfo 6 years ago
  Xiaoda Zhang a153fad874 This commit is to separate the computation cost and memory cost in auto_parallel. Some related memory correction is removed. 6 years ago
  Su Teng 60b68a1470 sort include file in parallel dir 6 years ago
  Xiaoda Zhang 3d35792877 change_star_elimination: make the non-identity triangle_elimination exact 5 years ago
  Xiaoda Zhang c080ec7874 change star elimination: remove some redundant and checking works 6 years ago
  lichenever f946aea10d fix graph mode loop sink bug in auto parallel 6 years ago
  zhunaipan 930a1fb0a8 initial version 6 years ago