59 Commits (6fcd6cab684bd2d2ae2da5613efa3aa4e2cd7d5a)

Author SHA1 Message Date
  hanhuifeng2020 bc46d644fe [GraphKernel]Support reshape/elewise/broadcast+transdata fusion 4 years ago
  mindspore-ci-bot 0887d35b1c !15951 [GraphKernel]add the attribute reduce_output_fuse to enable fuse for the reduce_output on Ascend 4 years ago
  mindspore-ci-bot 1827697642 !15961 Eliminate recursion call in split model 4 years ago
  wenfangpei c41875b318 adapt expanders of some ops from gpu to ascend 4 years ago
  Gaoxiong 4bc67f38de eliminate recursion call 4 years ago
  hanhuifeng2020 425d401e85 [GraphKernel]add the attr reduce_output_fuse to enable fuse for reduce_output on Ascend 4 years ago
  mindspore-ci-bot d6f58cb765 !15658 Reduce recursion overhead of split model 4 years ago
  Gaoxiong 71002ed19d reduce recursion overhead of split model 4 years ago
  zengzitao 8dcff8d83c refactor tile op and enable it in expander on gpu 4 years ago
  r1chardf1d0 5c5d125b1d optimize stitch fusion strategy 5 years ago
  hanhuifeng2020 25505642ce enable GraphKernel for TransData 5 years ago
  lingyunli63 c48c2430f0 fuse matmul and elementwise in graphkernel 5 years ago
  mindspore-ci-bot cd002cb7f7 !14893 enable stitch fusion on bert 4 years ago
  r1chardf1d0 3b32995936 enable stitch fusion on bert 5 years ago
  chenlei_autodiff 13fbfca6b9 [graph kernel] add expander ops. 5 years ago
  wenfangpei 83399c1b8d adapt for layernorm C++ code 5 years ago
  mindspore-ci-bot ddf75da542 !14085 [GraphKernel] add some expander ops 5 years ago
  chenlei_autodiff f4289d40f3 add graph kernel expander ops. 5 years ago
  tronzhang 87bf1ec80f delete mark_interface_fusion and tensor reuse frontend pass for graph kernel 5 years ago
  lingyunli63 4b966ed40d support matmul on D 5 years ago
  huangbingjian 72ae1799f3 remove control_depend from py file 5 years ago
  tronzhang 7252ffb66b pipeline optimization for parallel fusion 5 years ago
  mindspore-ci-bot d285692217 !12852 Change maketuple in coreops 5 years ago
  l00591931 680324f225 Change make tuple in core.ops 5 years ago
  dayschan 454500309c add OpInfer for op Select 5 years ago
  dayschan 7beca18f3c Refactor GraphKernelExpander (3rd submission) 5 years ago
  dayschan 9d572f3963 Refactor GraphKernelExpander (2nd submission) 5 years ago
  dayschan e0e6c39eae Refactor GraphKernelExpander (1st submission) 5 years ago
  mindspore-ci-bot a24ff36d9c !11777 stitch fusion 5 years ago
  l00591931 9ec100d069 Change TensorAdd to Add, from r1.1 to master 5 years ago
  r1chardf1d0 9d6392c5c5 stitch info 5 years ago
  tronzhang d078cbfa99 support parallel fusion 5 years ago
  tronzhang ad59129c07 if input is none, make it a null list 5 years ago
  Gaoxiong cc89c3f896 set default mode for output reshape op 5 years ago
  Gaoxiong 32e19e83da update graph kernel split model for Ascend 5 years ago
  mindspore-ci-bot 639e0c5fbd !10257 [GraphKernel] Enhance the fusion capacity for getitem nodes 5 years ago
  dayschan 26ac9167f8 Enhance the fusion capacity for getitem nodes. 5 years ago
  looop5 8bbe723603 add Tile infer shape function 5 years ago
  dayschan 85b69bf91f Add a float16 restriction in the solution of reduction op's precision problem in graph splitter. 5 years ago
  dayschan 297f075dca Fix precision problem 5 years ago
  mindspore-ci-bot 7b311f7d2a !9570 Modifications for GraphKernel 5 years ago
  dayschan 6be3cc6f0d consider atomic_add strategy in graph splitter; fixbugs; fuse and inline single op 5 years ago
  looop5 848be9b07c add tile to expand list 5 years ago
  tronzhang 2190da9946 support atomic clean and change package for akg. 5 years ago
  Gaoxiong e4c3d3e0e9 update graph kernel split model 5 years ago
  mindspore-ci-bot 232dff3598 !8685 [GraphKernel] For fp16 value, declare fp32 first and then cast to fp16 in expander 5 years ago
  tronzhang 80f071e9fa declare fp32 and then cast to fp16 in expander 5 years ago
  tronzhang 9d7494f4df split shape ops for more fusion opportunity. 5 years ago
  dayschan 195b1fe8d5 Add Transpose into fusible list. 5 years ago
  zengzitao 53043ae18f support expand fused_adam and fused_adam_weight_decay op 5 years ago