885 Commits (a146f982bc50db00b1716ac4cd85da57d6a5ca25)

Author SHA1 Message Date
  mindspore-ci-bot f98efafa16 !317 [IRFusion] add derelu_fusion pass 5 years ago
  mindspore-ci-bot cf026096a6 !183 Mindspore.dataset CPP sampler for GeneratorDataset 5 years ago
  wenchunjiang ee5f3fa240 add pass to insert memcpy_async for get_next outputs 5 years ago
  mindspore-ci-bot 58a70b5f82 !346 getnext parallel optimization part II: Eliminate Memcpy in specific scenario 5 years ago
  laiyongqiang 3e05f50f5f getnext_memcpy_elimination 5 years ago
  yangzhenzhang 6d522f0a4f add parallel op for layernorm 5 years ago
  huanghui b02e871c1a [IRFusion] add derelu_fusion pass 5 years ago
  Junhan Hu 9739d3b048 Add CPP sampler support for GeneratorDataset 5 years ago
  mindspore-ci-bot 30de261c3c !243 Support nested repeat 5 years ago
  hesham 0fc23eee0f Support nested repeat 5 years ago
  mindspore-ci-bot b571fabd77 !289 Add cnode mapping after graph match 5 years ago
  YuJianfeng e5c67b9088 Add cnode to equal map when opt matching 5 years ago
  mindspore-ci-bot 9bda080bb5 !260 refactor padding strategy 5 years ago
  mindspore-ci-bot 1ab430072e !232 [Auto parallel] Model the memory_cost in cost model 5 years ago
  mindspore-ci-bot 94589ce611 !226 extend conv stride and dilation to 2d 5 years ago
  wangnan39@huawei.com 2604acedcb extend conv stride and dilation to 2d 5 years ago
  lianliguang 5d225f934f change the padding strategy & refactor insert transdata 5 years ago
  Xiaoda Zhang 0ac50a19f5 Model the memory cost in auto-parallel: it is calculated from the outputs of operators plus the parameters. Additionally, modify the graph operations in auto_parallel to include memory_cost. 5 years ago
  mindspore-ci-bot f1fa2a9941 !273 [MD] update subset random sampler in minddataset 5 years ago
  mindspore-ci-bot 298a32f1e2 !132 Add confusion_mul_grad_fusion pass 5 years ago
  liyong 0ce83e39e1 fix TestShardSampleWrongNumber 5 years ago
  mindspore-ci-bot 57f953ca38 !216 Implement addn fusion pass 5 years ago
  huanghui 19ee376cd3 add confusion_mul_grad fusion pass 5 years ago
  c00425699 d62f560b50 add_bool_type_check_in_comm_op 5 years ago
  YuJianfeng 7307c81f31 implement AddN fission pass 5 years ago
  panfengfeng 6a79fc1735 skip mindrecord ut test case 5 years ago
  buxue 5841fe010e Support pow's second input being a tensor and fix a bug in the bprop of pow 5 years ago
  yangzhenzhang b34c0e7a17 add parallel op for dropoutdomask 5 years ago
  jonyguo a9443635b7 fix: enhance mindpage parameter check and fix failed search by filename 5 years ago
  panfengfeng 53a98210af skip ut test cases temporarily 5 years ago
  c00425699 406475160f refactor OperatorCostPtr in OperatorInfo 5 years ago
  biffex cc1416bfc2 constant duplicate mul for momentum 5 years ago
  kswang a6747c522f add ascend mem pool 5 years ago
  yao_yf 513f384c43 fix auto parallel prelu 5 years ago
  kswang d84cfb0108 add mem manager 5 years ago
  Alexey Shevlyakov 6d1ea7af8e remove make_unique.h 5 years ago
  jonyguo 6690a7fd7a fix: error info is not exact when column list is invalid 5 years ago
  jojobugfree 89f0b3b1bb profiling feature enhancement 5 years ago
  Xiaoda Zhang 7798c85e70 This commit separates the computation cost and memory cost in auto_parallel; some related memory correction is removed. 5 years ago
  Jonathan Yan f01098bc12 remove ENABLE_MINDRECORD flag 5 years ago
  Alexey Shevlyakov b9701db887 fix RandomCropDecodeResize test 5 years ago
  mindspore-ci-bot 268d358a1d !187 refactor OperatorCostPtr in OperatorInfo 5 years ago
  mindspore-ci-bot 1b3b3b1a1c !198 [opt] momentum duplicate mul constant 5 years ago
  biffex 62bbf560c6 constant duplicate mul for momentum 5 years ago
  c00425699 b413638f23 refactor OperatorCostPtr in OperatorInfo 5 years ago
  kswang bef62db128 add ascend mem pool 5 years ago
  mindspore-ci-bot 2e6e94b2b6 !177 prelu operator support parallel on the channel 5 years ago
  mindspore-ci-bot 31efc8b088 !172 add mem manager 5 years ago
  kswang fb343bd607 add mem manager 5 years ago
  buxue 1d3bb0b731 Develop op MaxPoolWithArgMax 5 years ago