160 Commits (7df661fbafec45ff00a034f331ccca8035ced0e0)

| Author | SHA1 | Message | Date |
| --- | --- | --- | --- |
| mindspore-ci-bot | 5fd3d140b6 | !13344 add DeviceContext module | 5 years ago |
| lizhenyu | 95565aa7b8 | add hardware abstract layer | 5 years ago |
| luopengting | c8ba7694c5 | refactor RDR to support single name | 5 years ago |
| TFBunny | 4d35303265 | support string in GPU print | 5 years ago |
| mindspore-ci-bot | 6f6d14d944 | !13102 Add unique id for .dat and .dot file to avoid covering | 5 years ago |
| huanghui | a2ba47e18a | 1. Add unique id for .dat and .dot file to avoid covering | 5 years ago |
| Islam Amin | cbbffbedef | fix gpu dump naming | 5 years ago |
| mindspore-ci-bot | a21c8e13b5 | !13010 Add device id log | 5 years ago |
| tanghuikang | 6102202abd | Not save InitDatasetQueue and GetNext op in PyNative Mode | 5 years ago |
| ZPaC | f2edee750a | Add device id log | 5 years ago |
| mindspore-ci-bot | 7104e42304 | !12808 Add graph_ to execution order filename | 5 years ago |
| caifubi | 171b468bb3 | PyNative AllReduce Bucket | 5 years ago |
| Islam Amin | ed2f8876b9 | adding graph_ to exec order filename | 5 years ago |
| mindspore-ci-bot | 7296659f14 | !12764 [Ascend][GPU] Add execution order dumping of final execution graphs | 5 years ago |
| mindspore-ci-bot | 00f25c8409 | !12728 fix precision error after cache modification | 5 years ago |
| Islam Amin | 187222d461 | Adding dump of order of execution for final exec graphs on ascend and gpu | 5 years ago |
| dayschan | c165ab5bb1 | Combine the GraphKernelOptimization of Gpu and Ascend | 5 years ago |
| simson | c29d8f66d8 | fix precision error after cache modification | 5 years ago |
| mindspore-ci-bot | 5524280075 | !12550 [MS][RDR] recording func_graph in pipeline and task debug info | 5 years ago |
| mindspore-ci-bot | 4dedab3775 | !12593 Not AllocateMemory when CompileGraph in PyNative mode | 5 years ago |
| louei5 | 9a48405a41 | recording func_graph in pipeline and task debug information | 5 years ago |
| Islam Amin | 722eb2ec5a | ascend graph dump trigger at data dump | 5 years ago |
| tanghuikang | c346a96529 | Not AllocateMemory when CompileGraph in PyNative mode | 5 years ago |
| He Wei | 7d9a783993 | [auto-monad] Support side-effects by auto-monad | 5 years ago |
| mindspore-ci-bot | 0ff27ef3b4 | !11930 [GraphKernel] Replace Assign with InplaceAssign | 5 years ago |
| mindspore-ci-bot | a24ff36d9c | !11777 stitch fusion | 5 years ago |
| dayschan | 08345c54ea | [GraphKernel] Replace Assign with InplaceAssign | 5 years ago |
| dayschan | 8a09279ec3 | Moved ShapeOpsSplitter before GraphKernelSplitter, changed it to process sub func_graph only. | 5 years ago |
| r1chardf1d0 | 9d6392c5c5 | stitch info | 5 years ago |
| mindspore-ci-bot | 4364abc7ee | !11798 Support RunOpsInGraph on CPU&GPU in pynative mode | 5 years ago |
| tanghuikang | 6f2cd92aba | Support RunOpsInGraph on CPU&GPU in pynative mode | 5 years ago |
| mindspore-ci-bot | 6e97c0004e | !11689 gpu support serving basic | 5 years ago |
| wilfChen | a911b9ef9e | mindspore serving support gpu backend | 5 years ago |
| tronzhang | d078cbfa99 | support parallel fusion | 5 years ago |
| dayschan | 27b4e1653a | Raise akg ReduceSum precision | 5 years ago |
| chujinjin | 9104ffaafa | fix inceptionv3 kernel build error in pynative | 5 years ago |
| chujinjin | ade9a82c2b | fix device memory leak | 5 years ago |
| mindspore-ci-bot | 9591c325f7 | !10865 Raise exception when sync stream failed | 5 years ago |
| dayschan | 8af78cd5ce | Added ExpandDims into GPU fusion list | 5 years ago |
| caifubi | ea2aa7dec4 | Raise exception when Sync stream failed | 5 years ago |
| dayschan | 26ac9167f8 | Enhance the fusion capacity for getitem nodes. | 5 years ago |
| wilfChen | 09e10e18bb | momentum weightdecay fusion | 5 years ago |
| mindspore-ci-bot | a4b010cea8 | !9746 add ps cache | 5 years ago |
| mindspore-ci-bot | be4e91339f | !9661 gpu relu optimize | 5 years ago |
| lizhenyu | e3f7ae61db | add ps cache manager | 5 years ago |
| mindspore-ci-bot | 2799b6d35f | !9683 [Debugger] Performance and state improvements | 5 years ago |
| wilfChen | c1d3bd2160 | relu optimize | 5 years ago |
| tronzhang | 056d7ffc56 | clean batch buffer in once | 5 years ago |
| Harshvardhan Gupta | dd0084c52b | improve perf, keep consistent tensor state, fix recheck, check weights at step end | 5 years ago |
| mindspore-ci-bot | d38f8205dc | !8987 support getnext in pynative mode | 5 years ago |