53 Commits (bc738d4ec3e73f4e25a3631f71b9833fe6830e47)

Author SHA1 Message Date
  mindspore-ci-bot 8bfe141680 !7571 fix bug in reshape strategy search when reshape as the first operator 5 years ago
  yao_yf f7189adb91 fix bug in reshape strategy search when reshape is first operator 5 years ago
  yangzhenzhang eb6f4e3ce8 update repeated calculation 5 years ago
  mindspore-ci-bot 36a8013b0a !7493 add element-wise operators 5 years ago
  Yi Huaijie fe1b06c659 add splitable operators 5 years ago
  yao_yf 4f2042acd1 move reshape_tensor_info_vector reset position 5 years ago
  yangzhenzhang fc4ed975c4 handle repeated calculation 5 years ago
  yao_yf 4c1d4924cb fix reshape strategy search bug in auto parallel 5 years ago
  yao_yf f60d81a15f support reshape redistribution in all scenes 5 years ago
  mindspore-ci-bot fd0c03c493 !7090 implement parallel BroadcastTo 5 years ago
  Yi Huaijie 45d373d40e implement parallel BroadcastTo 5 years ago
  Ziyan ddc0113058 enable parallel optimizer in auto parallel 5 years ago
  huangxinjing 4ef439e27b Add stage information for ops and strategy 5 years ago
  Yi Huaijie 6066b16838 implement parallel Pack 5 years ago
  Yi Huaijie 18ed2bec53 implement parallel Split 5 years ago
  Ziyan 9e5248497b add batch parallel info black list 5 years ago
  Yi Huaijie 0d478130f6 fix code check error 5 years ago
  lilei 71adabd944 modify_bug 5 years ago
  yao_yf d4cfe55c04 rename mirror_mean to gradients_mean 5 years ago
  lichenever d22f506431 add BatchNormEx op 5 years ago
  yangzhenzhang 048b88c41c update check strategy value 5 years ago
  mindspore-ci-bot 9bc470310e !5475 Delete is_auto_parallel in parallel operators 5 years ago
  yangzhenzhang afb0993902 delete is_auto_parallel 5 years ago
  Yi Huaijie 84948ca730 parallel supports more elementary-wise operators 5 years ago
  zhousiyi d0e58dd765 remove ccsrc/common.h 5 years ago
  mindspore-ci-bot 9ee144ea40 !4744 [AutoParallel]Support bert 5 years ago
  lichenever 221a801395 auto parallel support bert 5 years ago
  yangzhenzhang cda08f6a52 concat 3 tensors in auto parallel mode 5 years ago
  zhoufeng 663278112f optimize code compile performance 5 years ago
  yangzhenzhang 14c77c9f03 update field split 5 years ago
  yangzhenzhang 4a0e6ff7fc update field split 5 years ago
  yangzhenzhang f4bb43bbaf add concat op 5 years ago
  yao_yf 60a9fb0001 add_tensor_layout_in_stra_ckpt 5 years ago
  mindspore-ci-bot 617b98f104 !3966 [AutoParallel]Add dropout distributed op 5 years ago
  lichenever bfc96de1b9 add dropout distributed op 5 years ago
  liubuyu d81862a916 decoupling core and context 5 years ago
  Yi Huaijie 80bdcab982 temporarily cast between int64 and int32 to wait ME support int64 5 years ago
  Yi Huaijie 518cb80133 change type of Shape from int32 to int64 5 years ago
  suteng 19e45ccdb1 Revert 'Pull Request !3103 : change type of Shape from int32 to int64' 5 years ago
  Yi Huaijie 15d5cc396d change type of Shape from int32 to int64 5 years ago
  mindspore-ci-bot 8ff7c0b640 !3859 embeddinglookup support cpu in auto parallel 5 years ago
  yao_yf c853c4d231 embeddinglookup support host_device in auto parallel 5 years ago
  Xiaoda Zhang d24a902afe add a new graph operation in autoparallel 5 years ago
  yangzhenzhang 9aa84b3d14 add strided slice op 5 years ago
  yangzhenzhang 6a6e2bd271 add tile op 5 years ago
  lirongzhen1 5d63c60135 add sparse feature test cases for auto parallel 5 years ago
  jjfeing 0d5f1fba60 check tbe kernel property in kernel select process 5 years ago
  mindspore-ci-bot a2bf5a322e !3129 Decouple ir from frontend 5 years ago
  He Wei 32379f3e7a Decouple ir from frontend 5 years ago
  yao_yf 3dbe872596 modezoo wide&deep run clusters 5 years ago