83 Commits (bc738d4ec3e73f4e25a3631f71b9833fe6830e47)

| Author | SHA1 | Message | Date |
|---|---|---|---|
| mindspore-ci-bot | 17764803ef | !7648 [MSLITE] deconv winograd fp16 neon | 5 years ago |
| ling | d8b928b7f8 | [MSLITE] deconv winograd fp16 neon | 5 years ago |
| zhangxuetong | 61178eaaf9 | fix multipe definition of function | 5 years ago |
| ling | 51fced3767 | [MSLITE] deconv winograd fp16 neon | 5 years ago |
| mindspore-ci-bot | 7747f4c471 | !7457 [MSLITE] Support Fp32 Matrix-Vector Multiplication for FC/MATMUL Ops | 5 years ago |
| zhanyuan | cccaab4fdc | Support FP32 Matrix-Vector Multiplication | 5 years ago |
| ling | 7d97c1b903 | [MSLITE][Develop]deconv winograd input pack and output bias | 5 years ago |
| ling | f6aa35b12b | fp16 deconv winograd | 5 years ago |
| mindspore-ci-bot | 57ebdb4545 | !7393 [MSLITE] Add matrix-vector multiplication for fp16 fullconnection | 5 years ago |
| zhanyuan | 2635dc0f97 | Optimize fullconnection kernel for vector input | 5 years ago |
| zhaozhenlong | 2219a9c80e | fix issue quant dtype cast not support input fp16 | 5 years ago |
| mindspore-ci-bot | 55bcc0d7cf | !7305 [MS][LITE][CPU] fp16 conv1x1 asm optimize | 5 years ago |
| liuzhongkai | 73f3bf1176 | fp16 conv1x1 init asm optimize | 5 years ago |
| yangruoqi713 | 43d0020564 | [MSLITE][Develop] optimize arm cpu arithmetic: remove redundant code | 5 years ago |
| mindspore-ci-bot | beb8bf5d65 | !7059 [MS][LITE][Develop]fix fp16 matmul kernel write bug | 5 years ago |
| fuzhiye | b3eea3af34 | add fp16 selection func | 5 years ago |
| lixian | d573a1180d | fix fp16 matmul bug | 5 years ago |
| liuzhongkai | f6f9d3915c | fp16 winograd init optimize | 5 years ago |
| fuzhiye | 1f9a122f17 | replace gemm with matmul for fp16 conv winograd | 5 years ago |
| mindspore-ci-bot | dc5adf0137 | !7025 [MS][LITE][Develop]syc op_base act type with schema | 5 years ago |
| mindspore-ci-bot | 6c3cfe3036 | !7018 scale and stack fp16 | 5 years ago |
| mengyuanli | d56eb90044 | syc op_base act type with schema | 5 years ago |
| zhaozhenlong | 39c0ef10cb | scale stack fp16 | 5 years ago |
| mindspore-ci-bot | d86df26db8 | !7008 [MS][LITE][Develop]optimization for fp16 matmul kernel | 5 years ago |
| mindspore-ci-bot | 668c0ac61e | !6987 Add crop fp16 ops | 5 years ago |
| lixian | 869bffe976 | optimization for fp16 matmul kernel | 5 years ago |
| liuwenhao4 | 900dfe5cba | Add crop fp16 ops | 5 years ago |
| mindspore-ci-bot | caef8d2536 | !6932 [MSLITE][Develop] Refactor slice and add fp16 kernel | 5 years ago |
| sunsuodong | ef330cdffe | Refactor slice and add fp16 kernel | 5 years ago |
| sunsuodong | e23edeefca | arithmetic_self_fp16 | 5 years ago |
| yangruoqi713 | 26d1485819 | [MSLITE][Develop] fix bug of arm fp16 cpu op: reduce, arithmetic | 5 years ago |
| sunsuodong | a98b5a172a | fix concat fp16 | 5 years ago |
| fuzhiye | 016f636a3c | optimize fp16 common conv | 5 years ago |
| mindspore-ci-bot | 41763fc8c2 | !6864 [MSLITE][Develop] fp16 conv1x1 parallel by hw | 5 years ago |
| fuzhiye | f57b55014a | optimize fp16 winograd | 5 years ago |
| ling | d50b324456 | [MSLITE][Develop] fp16 conv1x1 parallel by hw | 5 years ago |
| zhanyuan | 23d9ff9113 | Use local max input value instead of the global one for fp32 & fp16 softmax | 5 years ago |
| fuzhiye | 772adb84d7 | add activation fusion for fp16 pooling | 5 years ago |
| fuzhiye | 06142a330b | optimize fp16 common conv preprocess | 5 years ago |
| mindspore-ci-bot | b96a0bb80b | !6567 [MS][LITE][CPU] code clean | 5 years ago |
| liuzhongkai | 37567147bd | code clean | 5 years ago |
| zhanyuan | 79afc972bb | Fix the bug of reading FC's tensor shape | 5 years ago |
| mindspore-ci-bot | f909f7d02c | !6426 [MSLITE][Develop] deconv arm32 fp32 bug | 5 years ago |
| ling | d257fec69c | [MSLITE][Develop] fix arm32 deconv bug | 5 years ago |
| fuzhiye | a2c8ce4910 | solve average pooling avgMode problem | 5 years ago |
| sunsuodong | 5c97d0fb3d | fix reduce relu6 | 5 years ago |
| mindspore-ci-bot | 88dfcda3e6 | !6262 optimize cpu op div | 5 years ago |
| sunsuodong | f3157c9ead | Fix relu fp16 multiple threads problem | 5 years ago |
| tao_yunhao | 28ce5c64f9 | optimize cpu op div | 5 years ago |
| lixian | 83efc49b12 | add matrix transpose for fp16 | 5 years ago |