1043 Commits (12f95b51f445ef50192ae19f084db2d49e2767e5)

Author SHA1 Message Date
  mindspore-ci-bot 74ca49529a !14732 Add L2NormalizeGrad for CPU 4 years ago
  mindspore-ci-bot 789c28d4d8 !14728 add l2normalize ops for cpu 4 years ago
  shibeiji 5cfab3b1a8 fix code review alarms 4 years ago
  yuanwei66 33870dc46d add l2normalize ops for cpu 4 years ago
  hebotao a7b445c50c Add L2NormalizeGradCPUKernel 4 years ago
  mindspore-ci-bot 7ae11f91e6 !14791 add ascend lstm test case 4 years ago
  mindspore-ci-bot 5b4685c5ea !14604 [GraphKernel] add some expander ops 4 years ago
  mindspore-ci-bot c907c95da5 !14849 fix codedex and bot 4 years ago
  chenlei_autodiff 13fbfca6b9 [graph kernel] add expander ops. 4 years ago
  mindspore-ci-bot 0920239699 !13475 [GraphKernel]adapt for layernorm in ascend 4 years ago
  fangzehua 742aa799b0 fix codedex and bot 4 years ago
  wenfangpei 66d28af79e adapt for layernorm in ascend 4 years ago
  woshixiaoli 3f633348e2 add lstm test case 4 years ago
  majianwei 16932e468e Completion of test cases 4 years ago
  mindspore-ci-bot b5bc938deb !12914 [GraphKernel]expander lamb_apply_weight_assign 4 years ago
  mindspore-ci-bot f324a9a760 !14553 [GraphKernel] refine cast matmul fusion 4 years ago
  lingyunli63 56390330ac cast_Matmul_fusion, when cast cannot fuse forward 4 years ago
  wenfangpei a4ad6066b1 expander lamb_apply_weight_assign 4 years ago
  mindspore-ci-bot 8634675e2d !14499 [GraphKernel]split UMonad in inputs of op 4 years ago
  wenfangpei 0085a273e7 split UMonad in inputs of op 4 years ago
  mindspore-ci-bot 18d79d35b6 !14498 [GraphKernel]remove redundant cast bias for matmul 4 years ago
  lingyunli63 8b3823b22c optimizeMatmul 4 years ago
  mindspore-ci-bot b8e35c663f !14508 add float64 support to SigmoidCrossEntropyWithLogits gpu 4 years ago
  mindspore-ci-bot efb53fb9c0 !14183 Support SparseTensorDenseMatmul for CPU 4 years ago
  TFBunny 4de6b25d23 add float64 support to SigmoidCrossEntropyWithLogits and Grad 4 years ago
  xuguoyang 7df6bfe7dd support sparse tensor dense matmul for CPU 4 years ago
  mindspore-ci-bot 69526df01e !14314 [GraphKernel] unify graph kernel pass add_atomic_clean on Ascend and GPU back-end 4 years ago
  mindspore-ci-bot ddf75da542 !14085 [GraphKernel] add some expander ops 4 years ago
  looop5 76d322464d unify graph kernel pass add_atomic_clean on Ascend and GPU back-end 4 years ago
  chenlei_autodiff f4289d40f3 add graph kernel expander ops. 4 years ago
  mindspore-ci-bot 7149e8c2c9 !14045 [Graph Kernel] add compare test case 4 years ago
  zengzitao 72c6dad4ba add compare_test case in gpu ci and update akg submodule 4 years ago
  mindspore-ci-bot ad140a8bf4 !14084 [GraphKernel] support matmul on D 4 years ago
  lingyunli63 4b966ed40d support matmul on D 4 years ago
  mindspore-ci-bot f1fb0d9f3a !13833 Add SparseToDense 4 years ago
  mindspore-ci-bot d2ecf71ace !13693 Slice op only supports input_x int32 and float32 at CPU backend 4 years ago
  zhengqihao 27f508760b Add SparseToDense op 4 years ago
  w00535372 761a2e2127 Bug fix for ISSUE #I3CN9Q 4 years ago
  wangyanling fb64e14265 fix slice op bug 4 years ago
  zhuyuxiao a11287c332 adagrad: support output on gpu 4 years ago
  mindspore-ci-bot 38b48fb0e8 !13608 Add CPU LogSoftMax 4 years ago
  mindspore-ci-bot 5504116718 !13647 BroadcastTo add general -1 dim behavior 4 years ago
  zhaoting 8754aaeb74 add CPU LogSoftMax 4 years ago
  mindspore-ci-bot 7bb35d8ce4 !13635 add float64 support to Addn gpu 4 years ago
  Peilin Wang 6cead43bdf add general -1 dim behavior for BroadcastTo op 4 years ago
  wangyanling 6268f660fb add cpu broadcast_to op 4 years ago
  TFBunny b780e5737c add float64 to Addn gpu 4 years ago
  mindspore-ci-bot 5b95409022 !13512 add some expander ops 4 years ago
  mindspore-ci-bot 2fadad0875 !13121 expander lamb_apply_optimizer_assign 4 years ago
  mindspore-ci-bot 8e8f3043f9 !12115 IR operators of GPU and CPU are unified as batchnorm 4 years ago