47 Commits (6fcd6cab684bd2d2ae2da5613efa3aa4e2cd7d5a)

Author SHA1 Message Date
  mindspore-ci-bot ed539597c2 !15415 [GraphKernel] adapt for logsoftmax in ascend 4 years ago
  wenfangpei 4174a7b38f add expanders for some fusion ops 4 years ago
  wenfangpei db8256e61f adapt for logsoftmax in ascend 4 years ago
  wenfangpei c41875b318 adapt expanders of some ops from gpu to ascend 4 years ago
  mindspore-ci-bot 52e7f51970 !15741 [GraphKernel] batchnorm expander supports the case where the first input is float16 4 years ago
  looop5 24f441ba33 batchnorm expander supports the case where the first input is float16 4 years ago
  chenlei_autodiff fd227bb448 [graph kernel] clean code for expanders. 4 years ago
  mindspore-ci-bot 168c64b60d !15648 [GraphKernel] negative axis in Squeeze expander. 4 years ago
  zengzitao 8dcff8d83c refactor tile op and enable its expander on gpu 4 years ago
  chenlei_autodiff b419f60b0d [GraphKernel] negative axis in Squeeze expander. 4 years ago
  mindspore-ci-bot 5b4685c5ea !14604 [GraphKernel] add some expander ops 4 years ago
  chenlei_autodiff 13fbfca6b9 [graph kernel] add expander ops. 5 years ago
  wenfangpei b9715db358 bugfix in expanders of layernorm 5 years ago
  wenfangpei 66d28af79e adapt for layernorm in ascend 5 years ago
  wenfangpei a4ad6066b1 expander lamb_apply_weight_assign 5 years ago
  mindspore-ci-bot ddf75da542 !14085 [GraphKernel] add some expander ops 5 years ago
  chenlei_autodiff f4289d40f3 add graph kernel expander ops. 5 years ago
  tronzhang 87bf1ec80f delete mark_interface_fusion and tensor reuse frontend pass for graph kernel 5 years ago
  mindspore-ci-bot 5b95409022 !13512 add some expander ops 5 years ago
  mindspore-ci-bot 2fadad0875 !13121 expander lamb_apply_optimizer_assign 5 years ago
  wenfangpei 043a558ae2 expander lamb_apply_optimizer_assign 5 years ago
  zengzitao d0a656f3cd add some expander ops 5 years ago
  dayschan a2967330ea Normalize the Reduce nodes' axis in GraphKernel 5 years ago
  dayschan 7beca18f3c Refactor GraphKernelExpander (3rd submission) 5 years ago
  dayschan 9d572f3963 Refactor GraphKernelExpander (2nd submission) 5 years ago
  dayschan e0e6c39eae Refactor GraphKernelExpander (1st submission) 5 years ago
  jinyaohui 30a27b2adb modify Gelu and FastGelu to GeLU and FastGeLU 5 years ago
  l00591931 9ec100d069 Change TensorAdd to Add, from r1.1 to master 5 years ago
  looop5 8bbe723603 add Tile infer shape function 5 years ago
  looop5 56fa56b173 add graph kernel testcases 5 years ago
  mindspore-ci-bot 037a121e05 !9691 expand ClipByNormNoDivSum in graph kernel 5 years ago
  looop5 fa519433ef expand ClipByNormNoDivSum 5 years ago
  zhupuxu 4f569677b7 remove redundant code 5 years ago
  looop5 848be9b07c add tile to expand list 5 years ago
  dayschan e5306b913d GraphKernel Fuser 5 years ago
  tronzhang 2190da9946 support atomic clean and change package for akg. 5 years ago
  zengzitao 3ef0e9f053 substitute dropout with cudnnuniformreal and dropout 5 years ago
  mindspore-ci-bot 232dff3598 !8685 [GraphKernel] For fp16 values, declare fp32 first and then cast to fp16 in expander 5 years ago
  tronzhang 80f071e9fa declare fp32 and then cast to fp16 in expander 5 years ago
  zengzitao 266bfa50bf expand logsoftmax and logsoftmax_grad, delete softmax's cast and fix layernorm op 5 years ago
  zengzitao 326540cbbd expand layernorm_grad op 5 years ago
  zengzitao 28f1db74dd expand maximum_grad, minimum_grad and dropout_grad ops 5 years ago
  zengzitao db27783d54 expand tanh_grad and reduce_mean, fix bugs and add test cases in CI 5 years ago
  zengzitao 53043ae18f support expanding fused_adam and fused_adam_weight_decay ops 5 years ago
  zengzitao 5cfa172720 expand gelu and gelugrad ops 5 years ago
  zengzitao febdb1850c expand bias_add and bias_add_grad ops 5 years ago
  dayschan 37a48f6aac GraphKernel supports GPU 5 years ago