15400 Commits (3b9340f7261e11dbc44226b32b32a4811f2a51a7)
| Author | SHA1 | Message | Date |
|---|---|---|---|
| tronzhang | 17d6f1c2f9 | add option for graph kernel and mixed precision | 5 years ago |
| tanghuikang | 82450afa9e | Optimize memory using in pynative mode | 5 years ago |
| liuzhongkai | 0a5ef19063 | win add sse/avx logic | 5 years ago |
| mindspore-ci-bot | 4a7c87f442 | !9560 [lite]reconstruct onnx | 5 years ago |
| peixu_ren | 16bbdc4eca | Add some examples for random ops | 5 years ago |
| mindspore-ci-bot | c1f65f7460 | !9678 [GraphKernel] Fix precision problem | 5 years ago |
| mwang | 1e90c7997e | fix readme | 5 years ago |
| tronzhang | 056d7ffc56 | clean batch buffer in once | 5 years ago |
| zhouyuanshen | e9aca01620 | add support to reduceAny and reduceAll on gpu | 5 years ago |
| mindspore-ci-bot | 29db53c2ba | !9675 add ps cache | 5 years ago |
| lichenever | 07bc550c17 | fix_PipelineSplit_bug | 5 years ago |
| mindspore-ci-bot | 8a7793ecb5 | !9523 Add MaximumGrad | 5 years ago |
| mindspore-ci-bot | 7a92deee62 | !9563 tod examples | 5 years ago |
| mindspore-ci-bot | f9c9d0a1c4 | !9637 [MSLITE] layer norm fp32 optimize | 5 years ago |
| mindspore-ci-bot | 56ce0f4a27 | !9648 Fix the bug of toolbox | 5 years ago |
| mindspore-ci-bot | 37390519cb | !9680 Make constant numbers to tensors to avoid a bug | 5 years ago |
| mindspore-ci-bot | eed0b3ea86 | !9613 Refactor explainer for better usability | 5 years ago |
| mindspore-ci-bot | b724bac9fc | !9686 Fix a bug of naming the variable in LGamma | 5 years ago |
| looop5 | fa519433ef | expand ClipByNormNoDivSum | 5 years ago |
| mindspore-ci-bot | 0ea4d9bbb7 | !9421 mindspore lite support npu | 5 years ago |
| mindspore-ci-bot | 00c6f4822f | !9670 Removing redundant code | 5 years ago |
| wandongdong | 68c7ba09d9 | support weight quant for opencl | 5 years ago |
| mindspore-ci-bot | 317a97e6b9 | !9336 auto num_parallel_workers setup | 5 years ago |
| mindspore-ci-bot | a2c80435ce | !9685 Fix a core dump in TreeConsumer::Terminate() plus minor cache fixes | 5 years ago |
| caozhou | cf36a05e81 | add to api document | 5 years ago |
| Zirui Wu | d6df1b0832 | Implemented AutoNumWorker Pass which sets num_workers of selected parallel ops automatically if enabled | 5 years ago |
| alex-yuyue | 5250b327ae | Fix some non-minddata typos | 5 years ago |
| Lixia Chen | 32b82c2737 | Fix a core dump in TreeConsumer::Terminate() | 5 years ago |
| Harshvardhan Gupta | dd0084c52b | improve perf, keep consistent tensor state, fix recheck, check weights at step end | 5 years ago |
| xuanyue | e7151c194c | reconstruct onnx | 5 years ago |
| peixu_ren | c1f645931c | Fix a bug of naming the variable in LGamma | 5 years ago |
| zhoufeng | cd1ce73a25 | remove pybind calling in cxx library | 5 years ago |
| TFbunny | 1ab6f73d49 | Update tensor shape from int to size_t in scatterop | 5 years ago |
| dayschan | 297f075dca | Fix precision problem | 5 years ago |
| zhengqihao | bc6929cbf9 | Add MaximumGrad | 5 years ago |
| mindspore-ci-bot | 934005f390 | !9636 Remove batch_map multi-process test case | 5 years ago |
| lixiaohui | 2e4b686408 | refactor explain core code | 5 years ago |
| peixu_ren | 0a8a5a9a91 | Make const numbers to tensors to avoid a bug | 5 years ago |
| yoni | 1f66e59648 | tod enhanced examples | 5 years ago |
| mindspore-ci-bot | 48f83e9039 | !9657 [MD] fix bug Updated add rotate and orientation to lite | 5 years ago |
| limingqi107 | 660a087ffd | add ps cache | 5 years ago |
| mindspore-ci-bot | 201c5ff4ee | !9486 output warning info when memory > 95% in batchop timestamp | 5 years ago |
| mindspore-ci-bot | e1e8f1d429 | !9652 add ps cache | 5 years ago |
| mindspore-ci-bot | eb4d14faff | !9640 redundant code | 5 years ago |
| mindspore-ci-bot | 4d3244bf34 | !9622 Add st for data dump | 5 years ago |
| shenwei41 | d6a770aecb | Remove redundant codes | 5 years ago |
| chenjianping | ef72f405e0 | unstack remote num | 5 years ago |
| mindspore-ci-bot | 7f77cb53b8 | !9643 fix bug for hcom parallel | 5 years ago |
| mindspore-ci-bot | ec765bdfdb | !9653 [AutoParallel] handle the PACK operator | 5 years ago |
| mindspore-ci-bot | 7bfe1a5d34 | !9663 remove unused parameters in config | 5 years ago |