54 Commits (bfc3065fc79cd206a134cb32f18fc3449352d441)

Author SHA1 Message Date
  mindspore-ci-bot bfc3065fc7 !3025 [AutoParallel]Add embedding look up op 5 years ago
  lichenever cde5cc2bd2 add_embedding_look_up 5 years ago
  panyifeng 44e74ad5aa Apply indexed_slices 5 years ago
  He Wei f337c6bc14 Decouple ParamValue from python 5 years ago
  yao_yf 37338813f0 skip strategy ckpt save for reshape 5 years ago
  lirongzhen1 40251d9578 configure auto parallel tensors shape 5 years ago
  gong chen a6dfa281ea Init GraphKernel. 5 years ago
  lichenever 563622874a update 5 years ago
  Yi Huaijie 7857d59c82 dropout do mask only replace first input of dropout_gen_mask of the subgraph instead of the whole sub graph 5 years ago
  suteng da586a6177 Revert 'Pull Request !2078 : replace first input of dropout_gen_mask of the subgraph instead of the whole sub graph' 5 years ago
  Yi Huaijie 6c85fc9f9f dropout do mask only replace first input of 5 years ago
  lichenever e0e055a0b8 add sparse gatherv2 5 years ago
  Yi Huaijie e5c351690b support load full dataset on each device 5 years ago
  lichenever 1437966c98 gatherv2_support_host_and_device 5 years ago
  yangzhenzhang 1413f520d7 support reshape optimized 5 years ago
  leopz 4508134ceb add tensor_minnie and separate py from ir 5 years ago
  leopz 40e15996b0 move default_param out of parameter and remove pybind11 in anf define 5 years ago
  mindspore-ci-bot a6546b80ba !1147 INFO user when set_strategy not under [semi_]auto_parallel mode 5 years ago
  Yi Huaijie 75ca84d260 INFO user when set_strategy not under [semi_]auto_parallel mode 5 years ago
  lichenever debfd38b75 fix gatherv2 and dataset bug 5 years ago
  lichenever 19a24b86ac add gatherv2 distributed op 5 years ago
  yangzhenzhang 8c9730b3c5 add parallel mode for cell 5 years ago
  yao_yf 5a6540450e use param name as the key of strategy checkpoint 5 years ago
  yao_yf 6cde5f6d91 auto parallel strategy checkpoint 5 years ago
  yangzhenzhang 36a62576e8 support forward graph 5 years ago
  mindspore-ci-bot 84d5e4f923 !643 [AutoParallel]Support reshape parameter 5 years ago
  lichenever 2ab211ae04 support reshape parameter 5 years ago
  mindspore-ci-bot 53d2da5fe4 !264 static_analysis: remove useless cache in TrivialPrimEvaluator and add cache for PythonPrimEvaluator 5 years ago
  mindspore-ci-bot ebc3f12b21 !620 [Auto parallel] Fix the code-style warnings in parallel-mode 5 years ago
  mindspore-ci-bot 54e0fa5c09 !556 [Auto Parallel] use DeviceMemory instead of fixed-size memory check 5 years ago
  Xiaoda Zhang ec043fcd56 fix the codex and bot warnings 5 years ago
  ch-l f806b72447 use DeviceMemory for memory control 5 years ago
  zhousiyi f6a4f3d155 [static_analysis]: remove the TrivialPrimEvaluator cache. 5 years ago
  lichenever c78630d737 support multiple subgraphs 5 years ago
  zhoufeng c2b3360d69 update clang format rule 5 years ago
  c00425699 8765810528 fix_coding_style_check_warning 5 years ago
  mindspore-ci-bot 7bc2cee318 !167 add_squeeze_distributed_op 5 years ago
  c00425699 c8cdb6b331 support distributed GatherV2 operator 5 years ago
  lichenever 32cd280c1a add squeeze distributed op 5 years ago
  yangzhenzhang b34c0e7a17 add parallel op for dropoutdomask 5 years ago
  lichenever ff808021c7 register not equal distributed op 5 years ago
  mindspore-ci-bot d8b460c780 !96 fix refkey bug for auto parallel 5 years ago
  lichenever 5240b1f603 fix refkey bug for auto parallel 5 years ago
  Su Teng 60b68a1470 sort include file in parallel dir 5 years ago
  mindspore-ci-bot 87040483ee !58 fix two cast bug in auto parallel 5 years ago
  mindspore-ci-bot da4c711dfb !50 fix parallel related valuenode merging error 5 years ago
  panyifeng feb1c36811 fix parallel related valuenode merging error 5 years ago
  lichenever 2da38ad401 fix two cast bug in auto parallel 5 years ago
  mindspore-ci-bot da447b8d4d !45 use std::vector instead of std::list to promote performance for parallel module 5 years ago
  lichenever f946aea10d fix graph mode loop sink bug in auto parallel 5 years ago