480 Commits (7d0bee5ba18dc6663e1f285652dbc19e0f2d4ffc)

Author SHA1 Message Date
  yangzhenzhang 4d0b1a47ee support config group for batchnorm 4 years ago
  yangzhenzhang 000c5b5394 support dilation for conv2d 4 years ago
  i-robot 7bb5819889 !29265 fix resize_bilinear infer 4 years ago
  jiangzhenguang abddc993ea fix resize_bilinear 4 years ago
  yangzhenzhang 5514189257 support group for conv2d 4 years ago
  yangzhenzhang 6dd7333c0b fix bug for conv2d transpose 4 years ago
  yangzhenzhang a9d2e255f5 support single direction exchange for conv2d 4 years ago
  i-robot 017cb5f3ad !27980 auto insert VirtualDataset node for master 4 years ago
  yangzhenzhang e5df74e9e4 compute top bottom overlap for conv2d 4 years ago
  Xiaoda Zhang 6d8320fa66 1) fix the exact division in moe; 4 years ago
  Xiaoda Zhang 1bdb610b34 changing default value of single-loop flag 4 years ago
  i-robot dd90a56d68 !28073 fix code warning && remove save_graphs use in st/ut 4 years ago
  i-robot 22c25ec10e !27862 [Auto parallel] [Sharding propagation] dealing with cast 4 years ago
  huanghui 74ca50e652 fix code warning && remove save_graphs use in st/ut 4 years ago
  Xiaoda Zhang 66c7474e5a remove CastInfo from CNODE 4 years ago
  zhuyuxiao dd7bbf92dd change API 4 years ago
  lilei 017aa359a6 insert VirtualDataset node for master 4 years ago
  i-robot 2fbec9a554 !27856 use neighbor-exchange-v2 for conv2d 4 years ago
  yangzhenzhang 8a68577756 use neighbor-exchange-v2 for conv2d 4 years ago
  wzw a9b78682d5 parallel ut refactor 3 4 years ago
  yangzhenzhang 5f6477b022 add output strategy for gather op 4 years ago
  i-robot d49f5e6caf !27525 support optimizer parallel for adafactor 4 years ago
  yao_yf 30576c6a75 fix reshape bool type in auto parallel 4 years ago
  yangzhenzhang 2a0b528084 support opt parallel for adafactor 4 years ago
  i-robot 938dc8abd0 !27439 [Auto parallel] Add new operatorInfo for Parallel: CumSum 4 years ago
  i-robot 0e358f4cb3 !27428 revert insert VirtualDataset node for master 4 years ago
  Xiaoda Zhang 8042c88223 add the new operatorInfo for parallel: CumSum 4 years ago
  lilei 2edf6ab33b revert insert VirtualDataset node for master 4 years ago
  i-robot faaec746f7 !27401 add more ut tests for allreduce fusion 4 years ago
  jiahongQian b03c8d18d3 add more ut tests 4 years ago
  i-robot ffca7b08a5 !27237 auto insert VirtualDataset node for master 4 years ago
  i-robot f40668ef73 !27251 test_micro_batch_Interleaved 4 years ago
  lilei 05189459ab auto insert VirtualDataset node for master 4 years ago
  lilei e933aa268b test_micro_batch_Interleaved 4 years ago
  i-robot 2d23b698a6 !27024 add allreduce fusion by size 4 years ago
  q00596439 de36fdc169 add allreduce fusion size and unify the interface 4 years ago
  huangxinjing 8c9b2b93a8 Add transformer 4 years ago
  yangzhenzhang 7454b8f8f2 check args for shard 4 years ago
  Xiaoda Zhang 364858cbc9 In sharding propagation, to keep strategy consistent of parameter being used by multiple operators, we check the edge with one node of TmpIdentityInfo 4 years ago
  Xiaoda Zhang 04db51a528 In a previous PR (https://gitee.com/mindspore/mindspore/pulls/26807/), we replaced 'auto_parallel_search_mode' by 'search_mode' directly. 4 years ago
  i-robot 9f8ec2c5ab !26807 [Auto parallel] [Sharding propagation] Interface change of sharding propagation 4 years ago
  i-robot 6ecbc97fd6 !26804 virtual_dataset_avoid_auto_parallel 4 years ago
  i-robot b282414de7 !26619 parallel_ut_refactoring 4 years ago
  Xiaoda Zhang ad5ac77ae8 1) 'auto_parallel_search_mode' changes to 'search_mode'; 4 years ago
  yao_yf f29ce1fb60 virtual dataset avoid auto parallel 4 years ago
  i-robot 519f14a909 !26006 slice recompute activation 4 years ago
  wzw 86c5ad20c8 parallel_ut_refactoring1 4 years ago
  i-robot 1b8c2ff0e9 !26414 fault_recover_by_mirror_group_fix_opt_shard 4 years ago
  yao_yf 188d39da83 slice_activation_in_recompute 4 years ago
  yao_yf 01dc4bbdf9 fix fault recover in optimizer shard 4 years ago