53 Commits (5cccfbc61ba4e67de63eecacd564373b7ddb0e3a)

| Author | SHA1 | Message | Date |
|---|---|---|---|
| Xiaoda Zhang | 3ff6e336c6 | check cast from optimizer in auto-parallel | 5 years ago |
| gong chen | a6dfa281ea | Init GraphKernel. | 5 years ago |
| lichenever | e0e055a0b8 | add sparse gatherv2 | 5 years ago |
| mindspore-ci-bot | c1c683eea8 | !1938 [AutoParallel] limit GatherV2, BN and Softmax to data parallel | 5 years ago |
| hongxing | bee57fda66 | support GatherV2 + Depend | 5 years ago |
| mindspore-ci-bot | f6b5b2732f | !1892 [AutoParallel] limit partition dimension to adapt new HCCL's constrait | 5 years ago |
| hongxing | 158495d43a | hccl patch + update ConstructNodes + support Softmax | 5 years ago |
| mindspore-ci-bot | 4a29f2733b | !1743 [AutoParallel] take acount of tuple input to prevent sub-graph isolation | 5 years ago |
| hongxing | 2031710d95 | fix bug and optimize code | 5 years ago |
| leopz | 4508134ceb | add tensor_minnie and separate py from ir | 5 years ago |
| yangzhenzhang | 7c237620ba | add sigmoid op | 5 years ago |
| leopz | 40e15996b0 | move default_param out of parameter and remove pybind11 in anf define | 5 years ago |
| hongxing | 8f04adf1c3 | feature : eliminate graph | 5 years ago |
| Xiaoda Zhang | a05aa21cc2 | calculating PEAK memory cost in the inference phase | 5 years ago |
| mindspore-ci-bot | 95d4665db9 | !1051 auto parallel reshape reconstruct | 5 years ago |
| mindspore-ci-bot | 2dd97a0780 | !1028 [AutoParallel] refined memory check & added odd num tensor shape support | 5 years ago |
| yao_yf | 716def7c0a | move InferStraByTensorInfo to tensor_info.h | 5 years ago |
| yao_yf | f0bf438a55 | reshape strategy search | 5 years ago |
| sheng | 59bb014144 | refine mem ctl & odd num ctl & fuse str writeback | 5 years ago |
| yangzhenzhang | 8c9730b3c5 | add parallel mode for cell | 5 years ago |
| yao_yf | 5a6540450e | use param name as the key of strategy checkpoint | 5 years ago |
| yao_yf | 6cde5f6d91 | auto parallel strategy checkpoint | 5 years ago |
| Xiaoda Zhang | e227415673 | support-the-multiple-subgraphs-in-the-ANF | 5 years ago |
| ch-l | f806b72447 | use DeviceMemory for memory control | 5 years ago |
| mindspore-ci-bot | 46acf23825 | !405 [AutoParallel] Adapte rec-prog generator to new parser | 5 years ago |
| mindspore-ci-bot | 5b6b1ad727 | !394 [AutoParallel] Simplify rec-prog parser mechanism | 5 years ago |
| ch-l | c71234f383 | improve rec-prog str generator | 5 years ago |
| Chong | b1f5e44cd4 | improve parser | 6 years ago |
| yangzhenzhang | 36ffb66782 | add parallel op for square | 6 years ago |
| yangzhenzhang | 57cd9f8188 | add parallel op for sigmoidloss | 6 years ago |
| yangzhenzhang | 6d522f0a4f | add parallel op for layernorm | 6 years ago |
| mindspore-ci-bot | b2b3e24a8e | !329 [MS]support building on windows 10 | 6 years ago |
| chenjianping | 1286767d0e | support building on windows | 6 years ago |
| mindspore-ci-bot | 2961c6bc59 | !349 fix coding style check warning for auto parallel | 6 years ago |
| c00425699 | 8765810528 | fix_coding_style_check_warning | 6 years ago |
| Xiaoda Zhang | ffb2cb03a4 | Change 'NOT_FULLY_USE_DEVICES' to 'FULLY_USE_DEVICES' and make ALL-1 user-specified-strategy valid in auto-parallel | 6 years ago |
| Xiaoda Zhang | 0ac50a19f5 | Model the memory cost in auto-parallel. It is calculated by the output of operators, plus the parameters. Additionally, modify the graph-operations in auto_parallel to include memory_cost. | 6 years ago |
| mindspore-ci-bot | 39a46b9342 | !245 Add bool type check in communication operator | 6 years ago |
| mindspore-ci-bot | 77725e81a4 | !258 add_minimum_distributed_op | 6 years ago |
| Wei Luning | 2fecdede6b | support amp when model eval, fix example of UnsortSegmentsSum | 6 years ago |
| c00425699 | d62f560b50 | add_bool_type_check_in_comm_op | 6 years ago |
| lichenever | b81cc6ea4f | add minimum distributed op | 6 years ago |
| yangzhenzhang | b34c0e7a17 | add parallel op for dropoutdomask | 6 years ago |
| Xiaoda Zhang | a153fad874 | This commit is to separate the computation cost and memory cost in auto_parallel. Some related memory correction is removed. | 6 years ago |
| yangzhenzhang | dd0d4e6b84 | add parallel ops for expand dims | 6 years ago |
| lichenever | ff808021c7 | register not equal distributed op | 6 years ago |
| Su Teng | 60b68a1470 | sort include file in parallel dir | 6 years ago |
| mindspore-ci-bot | 22a9c00bcd | !57 Add parallel operators for Neg and BatchMatMul | 6 years ago |
| yangzhenzhang | 110640e2ad | add parallel ops for neg and batchmatmul | 6 years ago |
| lichenever | 2da38ad401 | fix two cast bug in auto parallel | 6 years ago |