45 Commits (052d8e2c9940aa8a7b147b881958642bfcc33fe7)

Author | SHA1 | Message | Date
sabrinasun | 220245f592 | add security isolation to online and offline debugger | 4 years ago
liangzelang | 1832d7c152 | Use rtMemcpy trans data in Ascend instead of device -> host -> device | 4 years ago
gaoyong10 | e7f6b034cf | Fix double output for single device address | 4 years ago
caifubi | 537fce0ee1 | PyNative Kernel Parallel Build | 4 years ago
zuochuanyong | 8fa68ebd98 | fix Conv3D precision under fp16 | 4 years ago
ms_yan | 36a8886ca2 | Revert "[feat] [assistant] [I3T96T] add new Dataset operator CMUARCTICDataset" | 4 years ago
djc | 4e6f7dc97d | [feat] [assistant] [I3T96X] add new Dataset operator LibriSpeechDataset | 4 years ago
zjun | 35aab6144d | Fix pynative memory leak | 4 years ago
kswang | 3247c00555 | optimize heter memcpy | 4 years ago
i-robot | 4861711676 | !18107 dump and offline debug fixes | 4 years ago
i-robot | eaac4f47b3 | !18058 Update graph input shape | 4 years ago
John Tzanakakis | ac1847ffac | fix iter 0 and iter 1 being dumped in dir 0, make op_debug_mode optional for sync mode, read input files for offline debugger | 4 years ago
wilfChen | 27ed501716 | graph input dynamic | 4 years ago
wilfChen | 2e6afc07ac | graph input dynamic | 4 years ago
chujinjin | 90feb6a6d2 | fix bcewithlogitsloss op error in pynative | 4 years ago
zhoufeng | 0787efad03 | move conv transpose pass from common to unify mindir, for pynative | 4 years ago
lvchangquan | 0b09fdf94c | fix an allreduce bug with two streams sync problem | 5 years ago
limingqi107 | 22972a89a7 | support the output address of graph reapply | 5 years ago
kswang | 2a48b2ecb8 | reconstruct session code | 5 years ago
tanghuikang | c8a14ba016 | Clean code | 5 years ago
kswang | 97a97e02db | extract load input | 5 years ago
caifubi | 171b468bb3 | PyNative AllReduce Bucket | 5 years ago
wilfChen | a911b9ef9e | mindspore serving support gpu backend | 5 years ago
jjfeing | 1984cf8e20 | unify mindir | 5 years ago
Harshvardhan Gupta | dd0084c52b | improve perf, keep consistent tensor state, fix recheck, check weights at step end | 5 years ago
mindspore-ci-bot | d38f8205dc | !8987 support getnext in pynative mode | 5 years ago
lvliang | 8984cc9c03 | pynative-support-dynamic-op-run-in-gpu | 5 years ago
chujinjin | af031410bb | support getnext in pynative | 5 years ago
caifubi | d44dd4f786 | Move BuildOp into RunOp | 5 years ago
HulkTang | c36b477568 | Run ops one by one in pynative bp graph | 5 years ago
Yi Huaijie | d7faa77b5e | support int64 shape | 5 years ago
mindspore-ci-bot | c6246d7a7e | !7908 add reduce precision in pynative mode | 5 years ago
chujinjin | 9197d9f2ee | add reduce precision in pynative mode | 5 years ago
ZPaC | 5059d8c3f9 | Set gpu device id for multiple threads | 5 years ago
kswang | 11989b5e30 | enable async run | 5 years ago
John Tzanakakis | 0e0d7eda19 | code refactor | 5 years ago
dayschan | 37a48f6aac | GraphKernel supports GPU | 5 years ago
lizhenyu | c3d6918649 | add kernel select after optimize pass | 5 years ago
limingqi107 | 341200ab97 | gpu kernel_info_setter code review | 5 years ago
lizhenyu | 6fdd52080d | add mode black list checker | 5 years ago
kswang | 756bb6d53f | async run graph | 5 years ago
kpy | 570da089a8 | set output value for dynamic graph | 5 years ago
John Tzanakakis | b3c0eb61d5 | GPU debugger - milestone 1 and GPU dump | 5 years ago
liubuyu | 76dc80e7b7 | Unified code style | 5 years ago
liubuyu | 43c79eb853 | mindspore path adjust | 5 years ago