13694 Commits (bcc6e1ca282d104e2cbb9b8af55ec9c43f5c434f)
 

Author SHA1 Message Date
  mindspore-ci-bot bcc6e1ca28 !8655 add outputs to the examples of some operations. 5 years ago
  mindspore-ci-bot 418c8b5e3f !8550 add gpu step trace 5 years ago
  mindspore-ci-bot ee72de1db2 !8441 Add Parallel Implementation of UniformCandidateSampler 5 years ago
  mindspore-ci-bot cd6236c0a0 !8712 update pipeline parallel interface 5 years ago
  mindspore-ci-bot 071366a5f8 !7776 add broadcast 5 years ago
  mindspore-ci-bot fb0e866ad1 !8269 forward unique dynamic shape 5 years ago
  mindspore-ci-bot f657fcb155 !8698 added libevent pthread 5 years ago
  mindspore-ci-bot 5a203d08d0 !8708 fix SmoothL1lossGrad beta attr problem. 5 years ago
  mindspore-ci-bot e16661d2d9 !8001 Adapt nn.LSTM for Ascend. 5 years ago
  mindspore-ci-bot 84957cc4a7 !8650 remove useless config parameters 5 years ago
  mindspore-ci-bot 8da2a59764 !8725 add definition of scale for GPT 5 years ago
  mindspore-ci-bot 168c79e13d !8664 [MD] Fix a minddata issue where pyfunc multiprocessing can't exit normally 5 years ago
  yao_yf 31819bb4a7 support forward unique 5 years ago
  mindspore-ci-bot de60d1d98f !8684 [MS][LITE] remove internal 5 years ago
  mindspore-ci-bot 337805aa55 !8326 fix some unused java code 5 years ago
  anancds 18d34ed47f added libevent pthread 5 years ago
  mindspore-ci-bot 90eb272751 !8641 Updating notes on pynative examples of each class in nn_layer folder 5 years ago
  mindspore-ci-bot 232dff3598 !8685 [GraphKernel] For fp16 values, declare fp32 first and then cast to fp16 in expander 5 years ago
  mindspore-ci-bot 3b946d4eb2 !8678 expand logsoftmax and grad, delete cast in softmax and fix layernorm compute dsl 5 years ago
  mindspore-ci-bot 286f5b05f7 !8493 [GraphKernel] Fuse composite ops separated by GetItem nodes 5 years ago
  yangzhenzhang 278e82a849 update pipeline parallel 5 years ago
  mindspore-ci-bot a6679511ed !8581 add graph dependency 5 years ago
  gzhcv 6f6b56bfe1 add gpu step_trace 5 years ago
  alouhahaha 641888f603 add definition of scale for GPT 5 years ago
  mindspore-ci-bot 37a722fe77 !8559 [MSLITE][Develop] optimize_softmax 5 years ago
  mindspore-ci-bot 5a602e5288 !8634 fix quant mindir inference 5 years ago
  mindspore-ci-bot 103c42b683 !8662 [MD] Fix pad of lite-cv core dump 5 years ago
  mindspore-ci-bot 9969c83f75 !8689 [GraphKernel] Split shape ops for more fusion opportunity. 5 years ago
  xiefangqi 4447d08ca9 fix minddata multiprocess hang problem 5 years ago
  mindspore-ci-bot d549676d36 !8693 Add Comments for UnsortedSegmentOps 5 years ago
  sunsuodong 40a6fd8887 optimize_softmax 5 years ago
  tronzhang 80f071e9fa declare fp32 and then cast to fp16 in expander 5 years ago
  mindspore-ci-bot f40a4781e4 !8656 Adapt DynamicGRUV2Grad for Ascend new backend. 5 years ago
  jianghui58 5109cf7f05 remove internal 5 years ago
  mindspore-ci-bot 3939874b67 !8645 [MS][LITE][Develop] optimization for quantized mobilenet_v2 5 years ago
  mindspore-ci-bot 9d260cbf56 !8700 Optimize performance of PyNative 5 years ago
  huangxinjing 2730cef047 Uniform Sampler Base Update 5 years ago
  jinyaohui e6f9806cfb add broadcast 5 years ago
  mindspore-ci-bot f2f79029d6 !8690 Support MetaTensor in Equal's infer_value 5 years ago
  yankai 20a5f88bae fix mindir 5 years ago
  mindspore-ci-bot 7fd2db437b !8484 Add Digamma op 5 years ago
  mindspore-ci-bot 00b41244ac !8654 fix training failure of resnet_thor 5 years ago
  mindspore-ci-bot e3b1814401 !8713 Fix for GetDatasetSize issue in TextFile 5 years ago
  mindspore-ci-bot bf447ff51a !8672 add removal pass for dataset getters 5 years ago
  Mahdi 449e1526dc Fixed GetDatasetSize for TextFile 5 years ago
  peixu_ren 4aa836dae1 Add Digamma op 5 years ago
  Zirui Wu ff5999fc2f add removal pass for getters 5 years ago
  mindspore-ci-bot fedb225a96 !8667 fix cast issue 5 years ago
  mindspore-ci-bot 78fc3e722f !8704 BugFix for GPT 5 years ago
  mindspore-ci-bot d7e3b18b6f !8665 check kernel type and do SyncStream for hccl dynamic kernels 5 years ago