50 Commits (8d00a8d803dba83322c19fc7aa60d81452f7b699)

Author SHA1 Message Date
  zhunaipan 3f4b1dddb5 fix some code spelling errors in ops/operations 4 years ago
  wangshuide2020 185ddbbe66 remove the redundant code, add docstring of operator init and add default value for args. 4 years ago
  wangshuide2020 9877a491dc fix the format, extract function and del redundant code. 4 years ago
  wangshuide2020 3551ee5a27 delete duplication_code, and fix the format and line too long problem. 4 years ago
  Erpim 19c18eafba fix the problem that failed to obtain quant op info 4 years ago
  Erpim 90d5d5dab3 add lsq quantization method 5 years ago
  dinglinhe 865cf68243 All the descriptions of batch normal under the mindspore folder have been uniformly updated to 'Batch Normalization' 4 years ago
  lilei 43c0092d7f modify ops note 5 years ago
  hedongodng a5016a0ead fix: I24U3E, I24U50, I24U7V and I24TZT. Correct operator descriptions 5 years ago
  mindspore-ci-bot 5b84ac6847 !8412 modify ActULQClampMinGrad/ActULQClampMaxGrad return type 5 years ago
  zhangz0911gm 6fb2aa6023 Updating Notes for py files 5 years ago
  y00369862 8da4db3380 change validator.check_tensor_type_same to validator.check_tensor_dtype_valid 5 years ago
  y00369862 e4e5925dae modify act ulq min/max grad type 5 years ago
  liuxiao93 0a1155f938 Fix some bugs about API. 5 years ago
  mindspore-ci-bot a34a15cffc !8207 Add quant ops. 5 years ago
  liangchenghui 37ee246c70 Add quant ops. 5 years ago
  buxue 346bcfa3fd Rectify and optimize the type checking function 5 years ago
  yuchaojie 033e73ef12 add float16 check for gpu fakequant op 5 years ago
  jzg 374e9e199d add moment and nonzero. 5 years ago
  yuzhenhua 9734b93b00 export script for gcn and gat 5 years ago
  mindspore-ci-bot 7b060b2562 !7209 Add some fake-quant operators 5 years ago
  chenzomi acadb694aa [ME] delete redundant function in check_parameter 5 years ago
  jzg 2c6a9c8486 add fake-quant operators. 5 years ago
  mindspore-ci-bot c967bf6846 !7339 fix for se-resnet50 accuracy 5 years ago
  chenzomi cabb387545 [ME] change `check_lega_float_value` to `check_is_float` and add `check_is_int` 5 years ago
  chenzomi 79131e8da7 [ME] add float check function 5 years ago
  panfengfeng 2d7b93e958 fix nn & operations api comments 5 years ago
  simson 7cc48a9af8 Third round of enhancement of API comment & README_CN 5 years ago
  chenzomi ed2e84d4ed add LeakReLUQuant OP for bug fix. 5 years ago
  chenzomi 1089c908a9 cherry-pick r0.5 to master for quantization aware training 5 years ago
  wangdongxu 02584fe2c7 fix perchannel num_channels not set bug and adjust quant.py params order 5 years ago
  chenzomi a834a6308e change some comment name in the whole project 5 years ago
  chenzomi bbce6faff9 remove _quant_ops.py from __init__.py 5 years ago
  mindspore-ci-bot 74c3e15675 !2194 fix FakeQuantPerLayer/FakeQuantPerLayerGrad symmetric=True calculation error bug 5 years ago
  wandongdong 86ba93629b split correction_mul op 5 years ago
  王东旭 4e09ae83eb fix FakeQuantPerLayer/FakeQuantPerLayerGrad symmetric bug 5 years ago
  zhaozhenlong dbfb4f057c move AscendQuant AscendDequant to inner_ops.py 5 years ago
  zhaozhenlong ccda0f7b36 add op AscendQuant AscendDequant 5 years ago
  chenzomi 4da1e21f45 bug fix in fake quant 5 years ago
  chenzomi 97a548789a bug fix in fake quant ops 5 years ago
  chenzomi b7db3e9a4b add fake quant per channel and bug fix 5 years ago
  mindspore-ci-bot 90eedfb351 !1784 fix bug in fake quant grad 5 years ago
  chenzomi e9a67efc6b fix bug in fake quant grad 5 years ago
  wangdongxu6 9eee157c58 fix some quant op bug 5 years ago
  王东旭 02e7237a91 fix some quant op bug 5 years ago
  wandongdong d35c41e737 add custom tbe ops for quant aware training 5 years ago
  zhouneng 3cc750fdce add Examples for the mindspore.ops.operations and mindspore.nn packages 5 years ago
  chenzomi 2a695cfe24 fix some bug in quant debug 5 years ago
  fary86 6770c66ed9 Add prim name to error message for other operators left 6 years ago
  chenzomi d64f662c76 define quantization aware training frontend operators. 6 years ago