16 Commits (master)

| Author | SHA1 | Message | Date |
| --- | --- | --- | --- |
| hanhuifeng2020 | 0fdfe435f0 | For the transdata, auto_inline is not enabled by default | 4 years ago |
| r1chardf1d0 | f049354d88 | stitch fusion code refactor, fix some bug | 4 years ago |
| mindspore-ci-bot | 611e8c34e2 | !72 unify the calling of some global configs | 5 years ago |
| mindspore-ci-bot | b1f12ec2b2 | !53 enable online tuning for composite ops on ascend | 5 years ago |
| dabaiji | 4934596bad | add online tuner api | 5 years ago |
| ckey_Dou | 05c7f60df6 | 1. append pid after cuda_meta | 5 years ago |
| lingyunli63 | 5f5125d442 | enable autoinline for matmul | 5 years ago |
| r1chardf1d0 | 6a01831f58 | optimize stitch fusion | 5 years ago |
| wYann | c9b5b776b2 | eliminate the switch 'scalar_rearrange' | 5 years ago |
| mindspore-ci-bot | 23fccb13fd | !27 refactor the code related to build of composite | 5 years ago |
| hanhuifeng2020 | ccd71b1d93 | Fix the bug of the TransData operator when shape is not divisible by cube_size(16) | 5 years ago |
| hanhuifeng2020 | b061664070 | refactor the code related to build of composite | 5 years ago |
| hanhuifeng2020 | 1021d3cdb7 | Some modifications about composite: | 5 years ago |
| Gaoxiong | 66bebb6f5c | support composite topi by irbuilder | 5 years ago |
| lingyunli63 | fdb48f9308 | set bypass attr from tuned repo for matmul | 5 years ago |
| xsmq | 46f4c28fcf | init | 5 years ago |