77 Commits (a8d81c8b7bf9d54ae66ce8cab15836610e624724)

| Author | SHA1 | Message | Date |
| --- | --- | --- | --- |
| mindspore-ci-bot | 5b3327d103 | !746 reducescatter backforward operator | 5 years ago |
| lirongzhen1 | 0b4648881b | add reducescatter bprop | 5 years ago |
| mindspore-ci-bot | 63712848e2 | !494 Split ccsrc cmake to individual sub-directories | 5 years ago |
| Xiaoda Zhang | e227415673 | support-the-multiple-subgraphs-in-the-ANF | 5 years ago |
| zhoufeng | b681cec8f2 | cmake refactor | 5 years ago |
| yangzhenzhang | 4750861054 | fix layernorm bug | 5 years ago |
| yangzhenzhang | 36a62576e8 | support forward graph | 5 years ago |
| mindspore-ci-bot | ce71c17933 | !645 auto parallel prelu operator support broadcast | 5 years ago |
| mindspore-ci-bot | 84d5e4f923 | !643 [AutoParallel]Support reshape parameter | 5 years ago |
| mindspore-ci-bot | 00859ae119 | !586 enable/disable allreduce_fusion | 5 years ago |
| lichenever | 2ab211ae04 | support reshape parameter | 5 years ago |
| yao_yf | 425276d43d | auto parallel prelu support prelu | 5 years ago |
| lirongzhen | 4ff418084c | enable/disable allreduce_fusion | 6 years ago |
| mindspore-ci-bot | 53d2da5fe4 | !264 static_analysis: remove useless cache in TrivialPrimEvaluator and add cache for PythonPrimEvaluator | 5 years ago |
| mindspore-ci-bot | ebc3f12b21 | !620 [Auto parallel] Fix the code-style warnings in parallel-mode | 5 years ago |
| mindspore-ci-bot | 54e0fa5c09 | !556 [Auto Parallel] use DeviceMemory instead of fixed-size memory check | 5 years ago |
| Xiaoda Zhang | ec043fcd56 | fix the codex and bot warnings | 5 years ago |
| ch-l | f806b72447 | use DeviceMemory for memory control | 6 years ago |
| zhousiyi | f6a4f3d155 | [static_analysis]: remove the TrivialPrimEvaluator cache. | 6 years ago |
| lichenever | c78630d737 | support multiple subgraphs | 6 years ago |
| zhoufeng | c2b3360d69 | update clang format rule | 6 years ago |
| mindspore-ci-bot | 46acf23825 | !405 [AutoParallel] Adapte rec-prog generator to new parser | 6 years ago |
| mindspore-ci-bot | 5b6b1ad727 | !394 [AutoParallel] Simplify rec-prog parser mechanism | 6 years ago |
| ch-l | c71234f383 | improve rec-prog str generator | 6 years ago |
| mindspore-ci-bot | 0cdb1171d5 | !87 Take AllToAll as a virtual operator in cost model | 6 years ago |
| Ziyan | 0d208e00bd | Model ALLTOALL as a single operator in cost model; scale the ALLTOALL, | 6 years ago |
| Chong | b1f5e44cd4 | improve parser | 6 years ago |
| ougongchang | 0ed6d9178e | add Histogram summary operator | 6 years ago |
| mindspore-ci-bot | 6e183fcc0f | !385 [Auto parallel] Adjusting backward phase communication cost of some operators | 6 years ago |
| Xiaoda Zhang | 79de8f4bdf | Adjusting backward communication cost of some operators | 6 years ago |
| yangzhenzhang | 36ffb66782 | add parallel op for square | 6 years ago |
| yangzhenzhang | 57cd9f8188 | add parallel op for sigmoidloss | 6 years ago |
| yangzhenzhang | 6d522f0a4f | add parallel op for layernorm | 6 years ago |
| mindspore-ci-bot | b2b3e24a8e | !329 [MS]support building on windows 10 | 6 years ago |
| chenjianping | 1286767d0e | support building on windows | 6 years ago |
| mindspore-ci-bot | 2961c6bc59 | !349 fix coding style check warning for auto parallel | 6 years ago |
| c00425699 | 8765810528 | fix_coding_style_check_warning | 6 years ago |
| Xiaoda Zhang | ffb2cb03a4 | Change 'NOT_FULLY_USE_DEVICES' to 'FULLY_USE_DEVICES' and make ALL-1 user-specified-strategy valid in auto-parallel | 6 years ago |
| Xiaoda Zhang | 0ac50a19f5 | Model the memory cost in auto-parallel. It is calculated by the output of operators, plus the parameters. Additionally, modify the graph-operations in auto_parallel to include memory_cost. | 6 years ago |
| mindspore-ci-bot | 39a46b9342 | !245 Add bool type check in communication operator | 6 years ago |
| mindspore-ci-bot | 77725e81a4 | !258 add_minimum_distributed_op | 6 years ago |
| Wei Luning | 2fecdede6b | support amp when model eval, fix example of UnsortSegmentsSum | 6 years ago |
| c00425699 | d62f560b50 | add_bool_type_check_in_comm_op | 6 years ago |
| lichenever | b81cc6ea4f | add minimum distributed op | 6 years ago |
| mindspore-ci-bot | 7bc2cee318 | !167 add_squeeze_distributed_op | 6 years ago |
| c00425699 | c8cdb6b331 | support distributed GatherV2 operator | 6 years ago |
| buxue | 5841fe010e | Support pow's second input could be tensor and fix bug in bprop of pow | 6 years ago |
| lichenever | 32cd280c1a | add squeeze distributed op | 6 years ago |
| mindspore-ci-bot | 5141054ecd | !220 Add parallel operator for DropoutDoMask | 6 years ago |
| chenzomi | e09f220f17 | fix complite bug in clang | 6 years ago |