20 Commits (a146f982bc50db00b1716ac4cd85da57d6a5ca25)

Author SHA1 Message Date
  yangzhenzhang 0c2c76d037 update get rank in parallel ops 5 years ago
  Yi Huaijie d7faa77b5e support int64 shape 5 years ago
  mindspore-ci-bot 8bfe141680 !7571 fix bug in reshape strategy search when reshape is the first operator 5 years ago
  yao_yf f7189adb91 fix bug in reshape strategy search when reshape is the first operator 5 years ago
  yangzhenzhang eb6f4e3ce8 update repeated calculation 5 years ago
  yangzhenzhang fc4ed975c4 handle repeated calculation 5 years ago
  Yi Huaijie 518cb80133 change type of Shape from int32 to int64 5 years ago
  suteng 19e45ccdb1 Revert 'Pull Request !3103 : change type of Shape from int32 to int64' 5 years ago
  Yi Huaijie 15d5cc396d change type of Shape from int32 to int64 5 years ago
  liubuyu 43c79eb853 mindspore path adjust 5 years ago
  yao_yf f0bf438a55 reshape strategy search 5 years ago
  Xiaoda Zhang 0ac50a19f5 Model the memory cost in auto-parallel: it is calculated from the outputs of operators plus the parameters. Additionally, modify the graph operations in auto_parallel to include memory_cost. 5 years ago
  buxue 5841fe010e Support pow's second input being a tensor and fix a bug in the bprop of pow 5 years ago
  yangzhenzhang b34c0e7a17 add parallel op for dropoutdomask 5 years ago
  c00425699 b413638f23 refactor OperatorCostPtr in OperatorInfo 5 years ago
  mindspore-ci-bot 2e6e94b2b6 !177 prelu operator supports parallelism on the channel 5 years ago
  yao_yf b5e3fa9593 fix auto parallel prelu 5 years ago
  Xiaoda Zhang a153fad874 Separate the computation cost and memory cost in auto_parallel; some related memory correction is removed. 5 years ago
  c00425699 3bb48ffee1 use std::vector instead of std::list to improve performance in the parallel module 5 years ago
  zhunaipan 930a1fb0a8 initial version 5 years ago