47 Commits (9f62227f99367f491b182d0bed6be760b6e205e3)

Author SHA1 Message Date
  looop5 e88cdc84ec enhance reorder_ops pass to support reordering cast and type-insensitive operators 4 years ago
  lingyunli63 c48c2430f0 fuse matmul and elementwise in graphkernel 4 years ago
  chenlei_autodiff 13fbfca6b9 [graph kernel] add expander ops. 4 years ago
  wenfangpei 66d28af79e adapt for layernorm in ascend 4 years ago
  mindspore-ci-bot b5bc938deb !12914 [GraphKernel] expander lamb_apply_weight_assign 4 years ago
  mindspore-ci-bot f324a9a760 !14553 [GraphKernel] refine cast matmul fusion 4 years ago
  lingyunli63 56390330ac cast_Matmul_fusion: handle cast that cannot be fused forward 4 years ago
  wenfangpei a4ad6066b1 expander lamb_apply_weight_assign 4 years ago
  mindspore-ci-bot 8634675e2d !14499 [GraphKernel] split UMonad in inputs of op 4 years ago
  wenfangpei 0085a273e7 split UMonad in inputs of op 4 years ago
  lingyunli63 8b3823b22c optimizeMatmul 4 years ago
  mindspore-ci-bot 69526df01e !14314 [GraphKernel] unify graph kernel pass add_atomic_clean on Ascend and GPU back-end 4 years ago
  mindspore-ci-bot ddf75da542 !14085 [GraphKernel] add some expander ops 4 years ago
  looop5 76d322464d unify graph kernel pass add_atomic_clean on Ascend and GPU back-end 4 years ago
  chenlei_autodiff f4289d40f3 add graph kernel expander ops. 4 years ago
  mindspore-ci-bot 7149e8c2c9 !14045 [Graph Kernel] add compare test case 4 years ago
  zengzitao 72c6dad4ba add compare_test case in gpu ci and update akg submodule 4 years ago
  lingyunli63 4b966ed40d support matmul on D 4 years ago
  mindspore-ci-bot 5b95409022 !13512 add some expander ops 4 years ago
  wenfangpei 043a558ae2 expander lamb_apply_optimizer_assign 4 years ago
  zengzitao d0a656f3cd add some expander ops 4 years ago
  zengzitao ef3507e973 fix exec order bug about monad 4 years ago
  He Wei 7d9a783993 [auto-monad] Support side-effects by auto-monad 4 years ago
  jinyaohui 30a27b2adb modify Gelu and FastGelu to GeLU and FastGeLU 4 years ago
  mindspore-ci-bot e897eb4c41 !11915 Change TensorAdd to Add, merge from r1.1 to master 4 years ago
  l00591931 9ec100d069 Change TensorAdd to Add, from r1.1 to master 4 years ago
  looop5 0161209e40 update submodule akg, close graph kernel Ascend CI testcases 4 years ago
  looop5 8bbe723603 add Tile infer shape function 5 years ago
  mindspore-ci-bot 5f2c84f3cb !9867 Add graph kernel testcases 5 years ago
  looop5 4d8205cd93 Delete unused interface in graph_kernels.py 5 years ago
  looop5 56fa56b173 add graph kernel testcases 5 years ago
  tronzhang 2190da9946 support atomic clean and change package for akg. 5 years ago
  zengzitao 3ef0e9f053 substitute dropout by cudnnuniformreal and dropout 5 years ago
  zengzitao 266bfa50bf expand logsoftmax and logsoftmax_grad, delete softmax's cast and fix layernorm op 5 years ago
  looop5 f5f66abd06 Add testcases in Ascend back-end for graph kernel 5 years ago
  zengzitao 326540cbbd expand layernorm_grad op 5 years ago
  zengzitao 28f1db74dd expand maximum_grad minimum_grad dropout_grad op 5 years ago
  zengzitao db27783d54 expand tanh_grad and reduce_mean, fix bug and add test_case in ci 5 years ago
  zengzitao 53043ae18f support expand fused_adam and fused_adam_weight_decay op 5 years ago
  dayschan 0f8f1cdda7 Eliminate redundant parameters while expanding basic ops. 5 years ago
  mindspore-ci-bot 8d39a8a4b2 !7529 complex arithmetic_simplify 5 years ago
  zhu_xiaochen c739f14038 simplify transpose matmul reduce 5 years ago
  lingyunli63 a500a57c72 add GraphkernelCSE 5 years ago
  Geng_Fei 1455372cf1 add new pass in graph kernel: arithmetic_simplify 5 years ago
  dayschan 37a48f6aac GraphKernel supports GPU 5 years ago
  duxiutao 793737ab62 add primitive operator to test_lamb 5 years ago
  duxiutao 1e43c609e0 Add test case and fix two bugs 5 years ago