137 Commits (b56fc0c2af8a3dc8729237b5e1e6e4e4e5d45dfa)

Author SHA1 Message Date
  He Wei 7d9a783993 [auto-monad] Support side-effects by auto-monad 5 years ago
  mindspore-ci-bot 0ff27ef3b4 !11930 [GraphKernel] Replace Assign with InplaceAssign 5 years ago
  mindspore-ci-bot a24ff36d9c !11777 stitch fusion 5 years ago
  dayschan 08345c54ea [GraphKernel] Replace Assign with InplaceAssign 5 years ago
  dayschan 8a09279ec3 Moved ShapeOpsSplitter before GraphKernelSplitter, changed it to process sub func_graph only. 5 years ago
  r1chardf1d0 9d6392c5c5 stitch info 5 years ago
  mindspore-ci-bot 4364abc7ee !11798 Support RunOpsInGraph on CPU&GPU in pynative mode 5 years ago
  tanghuikang 6f2cd92aba Support RunOpsInGraph on CPU&GPU in pynative mode 5 years ago
  mindspore-ci-bot 6e97c0004e !11689 gpu support serving basic 5 years ago
  wilfChen a911b9ef9e mindspore serving support gpu backend 5 years ago
  tronzhang d078cbfa99 support parallel fusion 5 years ago
  dayschan 27b4e1653a Raise akg ReduceSum precision 5 years ago
  chujinjin 9104ffaafa fix inceptionv3 kernel build error in pynative 5 years ago
  chujinjin ade9a82c2b fix device memory leak 5 years ago
  mindspore-ci-bot 9591c325f7 !10865 Raise exception when sync stream failed 5 years ago
  dayschan 8af78cd5ce Added ExpandDims into GPU fusion list 5 years ago
  caifubi ea2aa7dec4 Raise exception when Sync stream failed 5 years ago
  dayschan 26ac9167f8 Enhance the fusion capacity for getitem nodes. 5 years ago
  wilfChen 09e10e18bb momentum weightdecay fusion 5 years ago
  mindspore-ci-bot a4b010cea8 !9746 add ps cache 5 years ago
  mindspore-ci-bot be4e91339f !9661 gpu relu optimize 5 years ago
  lizhenyu e3f7ae61db add ps cache manager 5 years ago
  mindspore-ci-bot 2799b6d35f !9683 [Debugger] Performance and state improvements 5 years ago
  wilfChen c1d3bd2160 relu optimize 5 years ago
  tronzhang 056d7ffc56 clean batch buffer at once 5 years ago
  Harshvardhan Gupta dd0084c52b improve perf, keep consistent tensor state, fix recheck, check weights at step end 5 years ago
  mindspore-ci-bot d38f8205dc !8987 support getnext in pynative mode 5 years ago
  mindspore-ci-bot 1a5dd4a711 !9390 Pynative support dynamic op run in gpu 5 years ago
  mindspore-ci-bot 95573571f0 !9511 Codedex change for tensor_loader 5 years ago
  lvliang 8984cc9c03 pynative-support-dynamic-op-run-in-gpu 5 years ago
  l00591931 1d1cab986d Codedex change for tensor_loader 5 years ago
  chujinjin af031410bb support getnext in pynative 5 years ago
  dayschan e5306b913d GraphKernel Fuser 5 years ago
  tronzhang 13126653ec process cast when activate graph kernel in amp 5 years ago
  tronzhang 2190da9946 support atomic clean and change package for akg. 5 years ago
  caifubi d44dd4f786 Move BuildOp into RunOp 5 years ago
  HulkTang c36b477568 Run ops one by one in pynative bp graph 5 years ago
  mindspore-ci-bot 3f75f13556 !8648 PyNative Performance Optimization 5 years ago
  caifubi c7d6997819 pynative host device parallel 5 years ago
  mindspore-ci-bot 270c156219 !8696 fix context null error 5 years ago
  kswang 62ae6802dc fix context null error 5 years ago
  John Tzanakakis d8f345fa57 some graph reading code is executed when debug is not enabled 5 years ago
  tronzhang 9d7494f4df split shape ops for more fusion opportunity. 5 years ago
  lichen_101010 1b6265fa43 Debugger multi-graph support implementation 5 years ago
  mindspore-ci-bot 7a3a4d3ad4 !8275 GPU add loopsink 5 years ago
  VectorSL 2cabcbcf7e gpu loopsink 5 years ago
  wilfChen e4e9362bd0 gpu support dynamic shape 5 years ago
  Yi Huaijie d7faa77b5e support int64 shape 5 years ago
  wangyue01 0c16b866fe Save graph before remove nop nodes 5 years ago
  mindspore-ci-bot 87dda46d17 !8026 Set context device id in gpu session 5 years ago