5326 commits (be2cfa9ed65c7c28f39683f74d84a4835ec89adc)
Author SHA1 Comment Date
Jonathan Yan 295c00ac39 Replace std::cout with MS_LOG in dataset unit test 5 years ago
jonyguo ef1caad11f 1. add more log info for dataset & mindrecord, 2. add two new testcase for MindDataset 5 years ago
seatea 1012790a33 Fix dtype bug for loss_scale and weight_decay. 5 years ago
zhaozhenlong 04e4f6d927 adapt relu6grad and graphengine modified 5 years ago
yoonlee666 02ebf03c6e add bert script to master 5 years ago
seatea 56ab3a1df3 Check if the shape of the input of NMSWithMask is (N, 5). 5 years ago
mindspore-ci-bot b6d74f862e !178 fix compile error 5 years ago
jinyaohui ac62faa388 modify set_dataset_mode_config api param 5 years ago
guohongzilong e17e086186 unified tensor and mindspore.type 5 years ago
simson a8bc8bfecb fix compile error 5 years ago
liuxiao 47d903ff57 Add pack and unpack 5 years ago
mindspore-ci-bot dd9a5a385a !175 Rebuild graph before RunGraph if needed 5 years ago
mindspore-ci-bot 1a4d364bf3 !7 Add ascend st lenet script for pynative mode 5 years ago
mindspore-ci-bot 2eb71103f9 !82 profiling feature enhancement 5 years ago
simson ee5b406b37 rebuild graph before rungraph if needed 5 years ago
mindspore-ci-bot 22516d3e08 !156 [Auto parallel] Separate memory_cost and computation_cost in cost model 5 years ago
jonyguo 20d1b64443 fix: error info is not exactly when column list invalid 5 years ago
mindspore-ci-bot 606310d9c3 !166 Enable auto-mixed-precision in GeInitialize 5 years ago
lvliang 2da8570a01 pynative-add-lenet 5 years ago
jojobugfree effdb483d6 profiling feature enhancement 5 years ago
anzhengqi fb4e84c0ee modify part of comments 5 years ago
mindspore-ci-bot e6ea09082c !153 add api image_gradients 5 years ago
yoonlee666 c5bfbc3556 use TFRecordDataset in bert ci script and add absolute position embedding code in bert model 5 years ago
mindspore-ci-bot 9930b18508 !165 Use mindspore. instead of mstype. in example 5 years ago
Xiaoda Zhang a153fad874 This commit is to separate the computation cost and memory cost in auto_parallel. Some related memory correction is removed. 5 years ago
zhaozhenlong f9d180d413 add api image gradients 5 years ago
chenhaozhe d88dbbb138 pass auto mixed precision flag to ge init options 5 years ago
mindspore-ci-bot 0d838c7c9b !161 Edit loss_scale to fit GPU 5 years ago
mindspore-ci-bot a24297f547 !122 Support to config whether to save integeated checkpoint, in auto model parallel scene 5 years ago
zhoufeng 27e49d1415 Distinguish package name according to hardware platform 5 years ago
VectorSL d0c24fb706 update lossscale for gpu 5 years ago
mindspore-ci-bot 993e69c76e !139 remove ENABLE_MINDRECORD flag 5 years ago
Jonathan Yan 243120bfa3 Merge branch 'DLTJ' of https://gitee.com/jonwe/ms_incubator into DLTJ 5 years ago
Jonathan Yan 9d0fde29f4 remove ENABLE_MINDRECORD flag 5 years ago
Alexey Shevlyakov 84d780c1a4 remove make_unique.h 5 years ago
guohongzilong c4f9230f03 usr mindspore. instead of mstype. 5 years ago
mindspore-ci-bot 9aab1613e7 !163 dataset: re-add Parameter check for class Schema 5 years ago
mindspore-ci-bot 475e858474 !158 fix: resolve MindDataset hung when field not in index when using block_reader 5 years ago
ms_yan ff38eff9ae add parameter check for Class Schema 5 years ago
mindspore-ci-bot fe4c815d1c !151 updata mkl-dnn link and md5 5 years ago
mindspore-ci-bot a4b9f8a728 !120 Fix test for RandomCropDecodeResizeOp. 5 years ago
WeibiaoYu 22c6baeea2 Support to config whether to save integeated checkpoint, in auto model parallel scene 5 years ago
mindspore-ci-bot 9c8a0b7f7e !22 use string::find instead of equal to distinguish training graph 5 years ago
jonyguo c688265671 fix: when use MindDataset block_reade=True hung 5 years ago
chenhaozhe b61ad0a5a7 use find instead of equal to distinguish training graph 5 years ago
mindspore-ci-bot 9e17b996c7 !157 Revert 'Pull Request !133 : Edit loss_scale to fit GPU' 5 years ago
mindspore-ci-bot 06e6c6b880 !128 refactor callback for ge backend 5 years ago
chengang cc9a0e1310 Revert 'Pull Request !133 : Edit loss_scale to fit GPU' 5 years ago
dengwentao 2303719352 updata mkl-dnn link and md5 5 years ago
mindspore-ci-bot 315036b1a5 !133 Edit loss_scale to fit GPU 5 years ago