33 Commits (bc84e9b4e7db505bf4c8caf508ad393f98fdfd5a)

Author SHA1 Message Date
  mindspore-ci-bot 037a121e05 !9691 expand ClipByNormNoDivSum in graph kernel 5 years ago
  dayschan 85b69bf91f Add a float16 restriction in the solution of reduction op's precision problem in graph splitter. 5 years ago
  looop5 fa519433ef expand ClipByNormNoDivSum 5 years ago
  dayschan 297f075dca Fix precision problem 5 years ago
  zhupuxu 4f569677b7 remove redundant codes 5 years ago
  mindspore-ci-bot 7b311f7d2a !9570 Modifications for GraphKernel 5 years ago
  dayschan 6be3cc6f0d consider atomic_add strategy in graph splitter; fixbugs; fuse and inline single op 5 years ago
  looop5 848be9b07c add tile to expand list 5 years ago
  dayschan e5306b913d GraphKernel Fuser 5 years ago
  tronzhang 2190da9946 support atomic clean and change package for akg. 5 years ago
  mindspore-ci-bot ebef1df00b !8994 split dropout op and expand dropout 5 years ago
  zengzitao 3ef0e9f053 substitute dropout by cudnnuniformreal and dropout 5 years ago
  Gaoxiong e4c3d3e0e9 update graph kernel split model 5 years ago
  mindspore-ci-bot 232dff3598 !8685 [GraphKernel] For fp16 value, declare fp32 first and then cast to fp16 in expander 5 years ago
  mindspore-ci-bot 3b946d4eb2 !8678 expand logsoftmax and grad, delete cast in softmax and fix layernorm compute dsl 5 years ago
  tronzhang 80f071e9fa declare fp32 and then cast to fp16 in expander 5 years ago
  tronzhang 9d7494f4df split shape ops for more fusion opportunity. 5 years ago
  zengzitao 266bfa50bf expand logsoftmax and logsoftmax_grad, delete softmax's cast and fix layernorm op 5 years ago
  zengzitao 326540cbbd expand layernorm_grad op 5 years ago
  zengzitao 28f1db74dd expand maximum_grad minimum_grad dropout_grad op 5 years ago
  dayschan 195b1fe8d5 Add Transpose into fusible list. 5 years ago
  zengzitao db27783d54 expand tanh_grad and reduce_mean, fix bug and add test_case in ci 5 years ago
  zengzitao 53043ae18f support expand fused_adam and fused_adam_weight_decay op 5 years ago
  zengzitao 5cfa172720 expand gelu and gelugrad op 5 years ago
  mindspore-ci-bot 5c4940cdcc !7892 Convert non-scalar tensor to parameter 5 years ago
  zengzitao febdb1850c expand bias_add and bias_add_grad op 5 years ago
  dayschan b6c2812a29 Convert non-scalar tensor to parameter 5 years ago
  chenzomi 44bf4c3e37 [ME] format code 5 years ago
  dayschan 7599686a72 GraphKernel supports multi-output kernels 5 years ago
  lingyunli63 dd48f10c3d add assign ops in composite_topi 5 years ago
  root 4e85071055 redundant codes clean 5 years ago
  Gaoxiong 1cb8b803f9 update usage info 5 years ago
  dayschan 37a48f6aac GraphKernel supports GPU 5 years ago