67 Commits (ea96cbcef24b5eb3a827fbdb196ce8faebf747db)

Author SHA1 Message Date
  Yi Huaijie 7857d59c82 dropout do mask only replace first input of dropout_gen_mask of the subgraph instead of the whole sub graph 5 years ago
  suteng da586a6177 Revert 'Pull Request !2078 : replace first input of dropout_gen_mask of the subgraph instead of the whole sub graph' 5 years ago
  mindspore-ci-bot b1ff4c15c2 !2078 replace first input of dropout_gen_mask of the subgraph instead of the whole sub graph 5 years ago
  Yi Huaijie 6c85fc9f9f dropout do mask only replace first input of dropout_gen_mask 5 years ago
  jjfeing caab25e09b tbe select broadcast reduce dynamic 5 years ago
  lichenever e0e055a0b8 add sparse gatherv2 5 years ago
  Yi Huaijie e5c351690b support load full dataset on each device 5 years ago
  kingfo 38436f929f move hook function to PrimitivePy class 5 years ago
  Xiaoda Zhang 1cfb52bc0e add the reshape part of the embeddinglookup backward operator 5 years ago
  mindspore-ci-bot d5f55f0820 !1769 [AutoParallel]GatherV2_support_host_device 5 years ago
  BowenK 96379faa3a Remove ZerosLikeTensor and sub with ZerosLike 5 years ago
  lichenever 1437966c98 gatherv2_support_host_and_device 5 years ago
  yangzhenzhang 19bd830539 support forward reduce scatter for matmul 5 years ago
  mindspore-ci-bot f523a0f83c !1600 [AutoParallel]Fix GatherV2 bug 5 years ago
  lichenever c223fde566 fix auto parallel gatherv2 bug 5 years ago
  leopz 4508134ceb add tensor_minnie and separate py from ir 5 years ago
  yangzhenzhang 7c237620ba add sigmoid op 5 years ago
  mindspore-ci-bot e87e6b38b0 !1355 [AutoParallel]Fix GatherV2 distributed op 5 years ago
  lichenever 390a86effb fix gatherv2 6 years ago
  Xiaoda Zhang 9f4b8a3cd1 changing the successive edges order in GetAliveSuccEdges() so that Triangle and Star Elimination can be merged into a particular node; adding some check information 6 years ago
  mindspore-ci-bot d11dc8276d !1181 fix gatherv2 replace graph in auto parallel 6 years ago
  yao_yf 06d35d8d18 fix gatherv2 replace graph in auto parallel 6 years ago
  mindspore-ci-bot b124bf38a1 !1152 [AutoParallel] dynamic output shape handling for Reduce series & Squeeze 6 years ago
  hongxing dc290d7959 support squeeze and reduce op 6 years ago
  Yi Huaijie 75ca84d260 log INFO to user when set_strategy is not under [semi_]auto_parallel mode 6 years ago
  Xiaoda Zhang a05aa21cc2 calculating PEAK memory cost in the inference phase 6 years ago
  yao_yf b0921c15e9 reshape tensor_redistribution bug fix 6 years ago
  yao_yf 716def7c0a move InferStraByTensorInfo to tensor_info.h 6 years ago
  mindspore-ci-bot dd2062bf8d !1023 add_gatherv2_distributed_op 6 years ago
  lichenever 19a24b86ac add gatherv2 distributed op 6 years ago
  yao_yf f0bf438a55 reshape strategy search 6 years ago
  Xiaoda Zhang def8573275 implementing-searching-strategy-for-inference 6 years ago
  yao_yf 5a6540450e use param name as the key of strategy checkpoint 6 years ago
  yangzhenzhang 4750861054 fix layernorm bug 6 years ago
  yao_yf 425276d43d auto parallel support prelu 6 years ago
  ch-l f806b72447 use DeviceMemory for memory control 6 years ago
  zhoufeng c2b3360d69 update clang format rule 6 years ago
  ougongchang 0ed6d9178e add Histogram summary operator 6 years ago
  mindspore-ci-bot 6e183fcc0f !385 [Auto parallel] Adjusting backward phase communication cost of some operators 6 years ago
  Xiaoda Zhang 79de8f4bdf Adjusting backward communication cost of some operators 6 years ago
  yangzhenzhang 36ffb66782 add parallel op for square 6 years ago
  yangzhenzhang 57cd9f8188 add parallel op for sigmoidloss 6 years ago
  yangzhenzhang 6d522f0a4f add parallel op for layernorm 6 years ago
  mindspore-ci-bot 2961c6bc59 !349 fix coding style check warning for auto parallel 6 years ago
  c00425699 8765810528 fix_coding_style_check_warning 6 years ago
  Xiaoda Zhang ffb2cb03a4 Change 'NOT_FULLY_USE_DEVICES' to 'FULLY_USE_DEVICES' and make ALL-1 user-specified-strategy valid in auto-parallel 6 years ago
  Xiaoda Zhang 0ac50a19f5 Model the memory cost in auto-parallel. It is calculated by the output of operators, plus the parameters. Additionally, modify the graph-operations in auto_parallel to include memory_cost. 6 years ago
  mindspore-ci-bot 39a46b9342 !245 Add bool type check in communication operator 6 years ago
  c00425699 d62f560b50 add_bool_type_check_in_comm_op 6 years ago
  lichenever b81cc6ea4f add minimum distributed op 6 years ago