91 Commits (8d00a8d803dba83322c19fc7aa60d81452f7b699)

Author SHA1 Message Date
  zuochuanyong 8fa68ebd98 fix Conv3D precision under fp16 4 years ago
  djc b077aa1cab [feat] [assistant] [I3T96T] add new Dataset operator CMUARCTICDataset 4 years ago
  djc 4e6f7dc97d [feat] [assistant] [I3T96X] add new Dataset operator LibriSpeechDataset 4 years ago
  i-robot 09dba31600 !21444 Fix Reshape Attrs 4 years ago
  Yang Jiao 2b4a784b86 fix reshape 4 years ago
  hwjiaorui e97df3a58f clean code 4 years ago
  cristoval bf317aa220 code review fix 4 years ago
  i-robot ec7ce3494a !20466 GPU codex fix 4 years ago
  wilfChen da3123cffb remove useless header 4 years ago
  VectorSL 089d8b37e5 fix code check 4 years ago
  wilfChen 21da0ae64f check dependency 4 years ago
  i-robot 8fe3da0ddc !17819 Add all gather fusion and concat pass for gpu 4 years ago
  ZPaC 35b639868d Add all gather fusion and concat pass for gpu 4 years ago
  Margaret_wangrui f5137e379a clean codedex 4 years ago
  mindspore-ci-bot 4e741f8aa6 !16701 gpu matmul and biasadd fusion 4 years ago
  mindspore-ci-bot 329546b83d !17241 clean code check warnings 4 years ago
  lizhenyu 01c64e8b14 fix code check warnings 4 years ago
  VectorSL 03210aee81 clean code2 4 years ago
  mindspore-ci-bot 2173d08ba1 !16978 fix codecheck and pclint 4 years ago
  limingqi107 c22185d586 fix codecheck and pclint 4 years ago
  wilfChen 020300f611 codecheck 4 years ago
  wilfChen b2242d13c4 gpu matmul biasadd fusion 4 years ago
  VectorSL c00a9ced3f clean code 4 years ago
  buxue 82f59644cf develop hswish and hswish grad gpu operators 4 years ago
  mindspore-ci-bot 0de8504d4b !16210 update apply_momentum_weight_scale_fusion pass 4 years ago
  mindspore-ci-bot b596894d07 !16199 Support multiple types in GPU Print 4 years ago
  huangbingjian 9424aedec3 update apply_momentum_weight_scale_fusion pass 4 years ago
  TFBunny 33255dbf60 support multiple types in print 4 years ago
  TFBunny 9eae68efaa add gpu BCEWithLogitsLoss kernel 4 years ago
  mindspore-ci-bot 3cffa4752e !15117 fix codedex 5 years ago
  limingqi107 b3a5ccebc3 fix codedex 5 years ago
  huangbingjian b803887480 update adam_fusion and adam_weight_decay_fusion 5 years ago
  wilfChen 8a7b568203 add relu fusion check 5 years ago
  wilfChen 943b992458 trt converter 5 years ago
  dingpeifei 3c9d8cb073 Add a pass for the inputs and outputs of the batchnorm reverse operator on the Ascend platform in PyNative mode 5 years ago
  mindspore-ci-bot 8e8f3043f9 !12115 IR operators of GPU and CPU are unified as batchnorm 5 years ago
  TFBunny 32e86f4166 hot fix for print 5 years ago
  dingpeifei 87e41aaeee IR operators of GPU and CPU are unified as batchnorm 5 years ago
  TFBunny 4d35303265 support string in GPU print 5 years ago
  He Wei 7d9a783993 [auto-monad] Support side-effects by auto-monad 5 years ago
  l00591931 9ec100d069 Change TensorAdd to Add, from r1.1 to master 5 years ago
  yuchaojie 1932d87a26 update some op's attr name 5 years ago
  wilfChen 09e10e18bb momentum weightdecay fusion 5 years ago
  VectorSL 54a496edbc fix momentum-cast fusion 5 years ago
  wilfChen c1d3bd2160 relu optimize 5 years ago
  huanghui e17dd84c0b add trace manager around backend opt 5 years ago
  mindspore-ci-bot 3f75f13556 !8648 PyNative Performance Optimization 5 years ago
  caifubi c7d6997819 pynative host device parallel 5 years ago
  lizhenyu 094f0b2a07 bugfix: fused batch norm op's input channel count should be a multiple of 4 5 years ago
  wilfChen 2291b7f2e6 dynamic shape check 5 years ago