191 Commits (bc738d4ec3e73f4e25a3631f71b9833fe6830e47)

Author SHA1 Message Date
  yao_yf b70204c080 auto parallel context add notes and func mv 5 years ago
  mindspore-ci-bot d8d2a70cb3 !6344 [AutoParallel]fix auto parallel multigraph bug 5 years ago
  Ziyan 9e5248497b add batch parallel info black list 5 years ago
  ZPaC 58d6406178 Add pointer check for PS 5 years ago
  lichenever 6b2a9de09f fix auto parallel multigraph bug 5 years ago
  ZPaC bb0a5a30cd Capsulate address ptr generation in PS. 5 years ago
  mindspore-ci-bot 546b5a23f9 !6126 [AutoParallel] Added dimension check for elementwise op with implicit broadcast 5 years ago
  mindspore-ci-bot 91b2591b08 !6079 1.Fix codex of PS module. 2.Use std::vector instead of std::shared_ptr. 3.Optimize some code. 5 years ago
  Sheng fcbd85d79d check dim for refer tensor when broadcasting 5 years ago
  ZPaC 4c1f9983d7 Optimize PS code. 5 years ago
  Yi Huaijie 0d478130f6 fix code check error 5 years ago
  lilei 71adabd944 modify_bug 5 years ago
  ZPaC 7002939540 1.Fix error when pserver finishes training. 5 years ago
  ZPaC 87bf2a7dcd Add PS context. 5 years ago
  mindspore-ci-bot 6d9501d5ed !5781 add exector group operation 5 years ago
  kswang ebff566a07 add group operation for executor 5 years ago
  yao_yf d4cfe55c04 rename mirror_mean to gradients_mean 5 years ago
  mindspore-ci-bot c064c01b6b !5729 [AutoParallel]Add FuseBatchNormEx op 5 years ago
  mindspore-ci-bot 7786adc3aa !5722 fix semi auto parallel parameter of reshape has another user 5 years ago
  lichenever d22f506431 add BatchNormEx op 5 years ago
  yao_yf 05c003ae6b origin/semi_auto_parallel_reshape_parameter_has_another_user 5 years ago
  ZPaC 997304e2c5 Unify float to int cast. 5 years ago
  ZPaC 69d527050f Clean codex: use std::vector instead of new[] 5 years ago
  mindspore-ci-bot ccc0ea60ee !5661 fix auto parallel reshape strategy set when it is first operator 5 years ago
  mindspore-ci-bot 847fadc8b9 !5516 auto parallel context modify 5 years ago
  yao_yf 755f381406 fix auto parallel reshape strategy set when it is first operator 5 years ago
  mindspore-ci-bot 79b117fe02 !5582 update CheckStrategyValue 5 years ago
  mindspore-ci-bot d92c220cc0 !5590 Fix bugfix for server shard range computation 5 years ago
  yao_yf 8f7aa5bd5a auto parallel context modify 5 years ago
  cristoval be63f8b8a2 bugfix for server shard range computation 5 years ago
  yangzhenzhang 048b88c41c update check strategy value 5 years ago
  ZPaC b0b3cd61bf Fix sparse-slicer leak. 5 years ago
  ZPaC 442b38dc20 Delete extra file 5 years ago
  fary86 fcbb3e0edc Refactor ms_context implementation 5 years ago
  mindspore-ci-bot d81b30e6a0 !5312 make backend/optimizer free of pybind 5 years ago
  mindspore-ci-bot 9bc470310e !5475 Delete is_auto_parallel in parallel operators 5 years ago
  mindspore-ci-bot be606ba8f5 !5432 Mindspore parallel supports all elementary-wise operators 5 years ago
  yangzhenzhang afb0993902 delete is_auto_parallel 5 years ago
  Yi Huaijie 84948ca730 parallel supports more elementary-wise operators 5 years ago
  zhousiyi c25e37e7bf make backend/optimizer pybind free 5 years ago
  mindspore-ci-bot 414184c184 !5367 Check the parameter's split strategies if it has multiple users 5 years ago
  cristoval 505108633c combine sparse embedding gradient 5 years ago
  yangzhenzhang fbda03bbcc check parameter split 5 years ago
  mindspore-ci-bot 3725062582 !5229 [AutoParallel]Fix CodeDex 5 years ago
  lichenever 49aa4b7686 fix codedex 5 years ago
  mindspore-ci-bot 8eff6c96b4 !5045 merge slice parameter checkpoints in wide&deep 5 years ago
  mindspore-ci-bot 5b1cf18cb9 !5055 prepare to support int64 5 years ago
  Wei Luning 24a10225cf change base class of ref to tensor in cpp 5 years ago
  yao_yf eeede168fa wide_and_deep merge ckpt in eval 5 years ago
  lirongzhen1 531ad4df70 prepare to support int64 5 years ago