44 Commits (117b35eb49210f076ecb7bae6e7bbd4ba498af2d)

Author SHA1 Message Date
  l00591931 cf7c5840e3 Change return 4 years ago
  jinyaohui 30a27b2adb modify Gelu, FastGelu to GeLU and FastGeLU 4 years ago
  mindspore-ci-bot ad5b033cc5 Change L2Norm, r1.1 to master 4 years ago
  l00591931 9ec100d069 Change TensorAdd to Add, from r1.1 to master 4 years ago
  yangzhenzhang 38ea8784c6 update infer mirror ops 4 years ago
  Xiaoda Zhang 3c27c08b46 change memory cost calculation in auto-parallel 4 years ago
  yangzhenzhang 0c2c76d037 update get rank in parallel ops 5 years ago
  yangzhenzhang 41d925b68a set stage id 5 years ago
  Yi Huaijie d7faa77b5e support int64 shape 5 years ago
  mindspore-ci-bot 8bfe141680 !7571 fix bug in reshape strategy search when reshape as the first operator 5 years ago
  yao_yf f7189adb91 fix bug in reshape strategy search when reshape is first operator 5 years ago
  yangzhenzhang eb6f4e3ce8 update repeated calculation 5 years ago
  yangzhenzhang fc4ed975c4 handle repeated calculation 5 years ago
  huanghui b7519b7418 unify save_graphs_path 5 years ago
  Ziyan ddc0113058 enable parallel optimizer in auto parallel 5 years ago
  huangxinjing 4ef439e27b Add stage information for ops and strategy 5 years ago
  zhousiyi c25e37e7bf make backend/optimizer pybind free 5 years ago
  Yi Huaijie 80bdcab982 temporarily cast between int64 and int32 to wait ME support int64 5 years ago
  Yi Huaijie 518cb80133 change type of Shape from int32 to int64 5 years ago
  suteng 19e45ccdb1 Revert 'Pull Request !3103 : change type of Shape from int32 to int64' 5 years ago
  Yi Huaijie 15d5cc396d change type of Shape from int32 to int64 5 years ago
  He Wei 4eb81d7934 Rename AnfNode::user_data related functions to follow naming rule 5 years ago
  yangzhenzhang e6cef98e95 delete useless code for allreduce 5 years ago
  He Wei 32379f3e7a Decouple ir from frontend 5 years ago
  WilliamLian 50e2fda52d refactor primitive ComputeFunction function 5 years ago
  liubuyu 43c79eb853 mindspore path adjust 5 years ago
  Ziyan 0925e35252 enable optimizer parallel with broadcast 5 years ago
  He Wei 43e0967024 Decouple ir::Tensor class from python 5 years ago
  Xiaoda Zhang 9f4b8a3cd1 changing the successive edges order in GetAliveSuccEdges() so that Triangle and Star Elimination can be merged into a particular node; adding some check information 5 years ago
  leopz 40e15996b0 move default_param out of parameter and remove pybind11 in anf define 5 years ago
  yao_yf f0bf438a55 reshape strategy search 5 years ago
  Xiaoda Zhang def8573275 implementing-searching-strategy-for-inference 5 years ago
  ch-l f806b72447 use DeviceMemory for memory control 5 years ago
  yangzhenzhang 6d522f0a4f add parallel op for layernorm 5 years ago
  Xiaoda Zhang 0ac50a19f5 Model the memory cost in auto-parallel. It is calculated by the output of operators, plus the parameters. Additionally, modify the graph-operations in auto_parallel to include memory_cost. 5 years ago
  c00425699 d62f560b50 add_bool_type_check_in_comm_op 5 years ago
  buxue 5841fe010e Support pow's second input could be tensor and fix bug in bprop of pow 5 years ago
  yangzhenzhang b34c0e7a17 add parallel op for dropoutdomask 5 years ago
  c00425699 b413638f23 refactor OperatorCostPtr in OperatorInfo 5 years ago
  mindspore-ci-bot 2e6e94b2b6 !177 prelu operator support parallel on the channel 5 years ago
  yao_yf b5e3fa9593 fix auto parallel prelu 5 years ago
  Xiaoda Zhang a153fad874 This commit is to separate the computation cost and memory cost in auto_parallel. Some related memory correction is removed. 5 years ago
  c00425699 3bb48ffee1 use std::vector instead of std::list to promote performance for parallel module 5 years ago
  zhunaipan 930a1fb0a8 initial version 5 years ago