# Copyright 2021 Huawei Technologies Co., Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ============================================================================
"""Quantization aware training."""

from functools import partial
from collections import namedtuple
import numpy as np
import mindspore.common.dtype as mstype
from mindspore.ops.primitive import Primitive
from mindspore.ops import operations as P
from mindspore.common.parameter import Parameter
from mindspore.common.initializer import initializer
from mindspore.common.tensor import Tensor
from mindspore._checkparam import Validator, twice
from mindspore.compression.common import QuantDtype
import mindspore.context as context
from .normalization import BatchNorm2d
from .activation import get_activation
from ..cell import Cell
from ... import nn
from ...ops.operations import _quant_ops as Q

__all__ = [
    'FakeQuantWithMinMaxObserver',
    'Conv2dBnFoldQuantOneConv',
    'Conv2dBnFoldQuant',
    'Conv2dBnWithoutFoldQuant',
    'Conv2dQuant',
    'DenseQuant',
    'ActQuant',
    'TensorAddQuant',
    'MulQuant',
]


class BatchNormFoldCell(Cell):
    """
    Batch Normalization folded.

    Args:
        momentum (float): Momentum value must be in [0, 1]. Default: 0.9.
        epsilon (float): A small float number to avoid dividing by 0. 1e-5 if dtype in
            float32 else 1e-3. Default: 1e-5.
        freeze_bn (int): Delay in steps at which computation switches from regular batch
            norm to frozen mean and std. Default: 0.

    Inputs:
        - **x** (Tensor) - Tensor of shape :math:`(N, C, H, W)`.
        - **mean** (Tensor) - Tensor of shape :math:`(C,)`.
        - **variance** (Tensor) - Tensor of shape :math:`(C,)`.
        - **global_step** (Tensor) - Tensor to record current global step.

    Outputs:
        Tuple of 4 Tensor, the normalized input and the updated parameters.

        - **batch_mean** (Tensor) - Tensor of shape :math:`(C,)`.
        - **batch_std** (Tensor) - Tensor of shape :math:`(C,)`.
        - **running_mean** (Tensor) - Tensor of shape :math:`(C,)`.
        - **running_std** (Tensor) - Tensor of shape :math:`(C,)`.
    """

    def __init__(self, momentum=0.9, epsilon=1e-5, freeze_bn=0):
        """Initialize batch norm fold layer"""
        super(BatchNormFoldCell, self).__init__()
        self.epsilon = epsilon
        self.is_gpu = context.get_context('device_target') == "GPU"
        if self.is_gpu:
            self.bn_train = Q.BatchNormFold(momentum, epsilon, is_training=True, freeze_bn=freeze_bn)
            self.bn_infer = Q.BatchNormFold(momentum, epsilon, is_training=False, freeze_bn=freeze_bn)
        else:
            self.bn_reduce = P.BNTrainingReduce()
            self.bn_update = Q.BatchNormFoldD(momentum, epsilon, is_training=True, freeze_bn=freeze_bn)

    def construct(self, x, mean, variance, global_step):
        if self.is_gpu:
            if self.training:
                batch_mean, batch_std, running_mean, running_std = self.bn_train(x, mean, variance, global_step)
            else:
                batch_mean, batch_std, running_mean, running_std = self.bn_infer(x, mean, variance, global_step)
        else:
            if self.training:
                x_sum, x_square_sum = self.bn_reduce(x)
                _, batch_mean, batch_std, running_mean, running_std, mean_updated, variance_updated = \
                    self.bn_update(x, x_sum, x_square_sum, mean, variance)
                P.Assign()(mean, mean_updated)
                P.Assign()(variance, variance_updated)
            else:
                batch_mean = P.ZerosLike()(variance)
                batch_std = P.OnesLike()(variance)
                running_mean = P.Add()(mean, 0.)
                running_std = P.Sqrt()(P.Add()(variance, self.epsilon))
        return batch_mean, batch_std, running_mean, running_std
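

# A minimal usage sketch for BatchNormFoldCell (illustrative only; it assumes a GPU or
# Ascend device context, since the underlying BatchNormFold / BNTrainingReduce
# primitives are device kernels). Shapes follow the docstring above.
#
#     bn_fold = BatchNormFoldCell(momentum=0.9, epsilon=1e-5, freeze_bn=0)
#     # x: (N, C, H, W), mean/variance: (C,), global_step: int32 Tensor
#     # batch_mean, batch_std, running_mean, running_std = bn_fold(x, mean, variance, global_step)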


def _partial_init(cls_or_self, **kwargs):
    """
    Wrapper that allows creation of class factories.

    This can be useful when there is a need to create classes with the same
    constructor arguments, but different instances.

    Examples:
        >>> class Foo:
        ...     def __init__(self, a, b, answer):
        ...         pass
        >>> Foo.partial_init = classmethod(_partial_init)
        >>> foo_builder = Foo.partial_init(a=3, b=4).partial_init(answer=42)
        >>> foo_instance1 = foo_builder()
        >>> foo_instance2 = foo_builder()
        >>> result = (id(foo_instance1) == id(foo_instance2))
        >>> print(result)
        False
    """

    class _PartialWrapper:
        r"""
        class of wrapper that allows creation of class factories.
        """

        def __init__(self, p):
            self.p = p

        def __call__(self, *args, **keywords):
            return self.p(*args, **keywords)

        def __repr__(self):
            return self.p.__repr__()

        partial_init = _partial_init

    r = _PartialWrapper(partial(cls_or_self, **kwargs))
    return r


class _Observer(Cell):
    """
    Base class of Observer. Observer is used to calculate the statistics of specific layer.

    Notes:
        This class is an abstract class.

    Args:
        quant_dtype (QuantDtype): The type of FakeQuant data.
    """

    def __init__(self, quant_dtype):
        """Initialize _Observer."""
        super(_Observer, self).__init__()
        self.quant_dtype = quant_dtype

    def extend_repr(self):
        s = f"quant_dtype={self.quant_dtype}"
        return s

    def construct(self):
        pass

    partial_init = classmethod(_partial_init)


class UniformQuantObserver(_Observer):
    """
    The base class of Uniform Quantization Observer.

    Args:
        quant_dtype (QuantDtype): The type of FakeQuant data. Default: QuantDtype.INT8.
        per_channel (bool): Quantization granularity based on layer or on channel. Default: False.
        symmetric (bool): Whether the quantization algorithm is symmetric or not. Default: False.
        narrow_range (bool): Whether the quantization algorithm uses narrow range or not. Default: False.
        num_channels (int): Declares the min and max channel size. Default: 1.

    Returns:
        Tensor.
    """

    min_max_map = {
        QuantDtype.INT2: (-2, 1),
        QuantDtype.INT3: (-4, 3),
        QuantDtype.INT4: (-8, 7),
        QuantDtype.INT5: (-16, 15),
        QuantDtype.INT6: (-32, 31),
        QuantDtype.INT7: (-64, 63),
        QuantDtype.INT8: (-128, 127),

        QuantDtype.UINT2: (0, 3),
        QuantDtype.UINT3: (0, 7),
        QuantDtype.UINT4: (0, 15),
        QuantDtype.UINT5: (0, 31),
        QuantDtype.UINT6: (0, 63),
        QuantDtype.UINT7: (0, 127),
        QuantDtype.UINT8: (0, 255)
    }

    def __init__(self, quant_dtype=QuantDtype.INT8, per_channel=False, symmetric=False, narrow_range=False,
                 num_channels=1):
        """Initialize UniformQuantObserver."""
        super(UniformQuantObserver, self).__init__(quant_dtype)
        self.per_channel = per_channel
        self.symmetric = symmetric
        self.narrow_range = narrow_range
        self.num_channels = num_channels


class FakeQuantWithMinMaxObserver(UniformQuantObserver):
    r"""
    Quantization aware operation which provides the fake quantization observer function on data with min and max.

    The detail of the quantization mode `DEFAULT` is described as below:

    The running min/max :math:`x_{min}` and :math:`x_{max}` are computed as:

    .. math::

        \begin{array}{ll} \\
            x_{min} =
            \begin{cases}
                \min(\min(X), 0)
                  & \text{ if } ema = \text{False} \\
                \min((1 - c) \min(X) + \text{c } x_{min}, 0)
                  & \text{ if } \text{otherwise}
            \end{cases}\\
            x_{max} =
            \begin{cases}
                \max(\max(X), 0)
                  & \text{ if } ema = \text{False} \\
                \max((1 - c) \max(X) + \text{c } x_{max}, 0)
                  & \text{ if } \text{otherwise}
            \end{cases}
        \end{array}

    where X is the input tensor, and :math:`c` is the `ema_decay`.

    The scale and zero point zp is computed as:

    .. math::

        \begin{array}{ll} \\
            scale =
            \begin{cases}
                \frac{x_{max} - x_{min}}{Q_{max} - Q_{min}}
                  & \text{ if } symmetric = \text{False} \\
                \frac{2\max(x_{max}, \left | x_{min} \right |) }{Q_{max} - Q_{min}}
                  & \text{ if } \text{otherwise}
            \end{cases}\\
            zp\_min = Q_{min} - \frac{x_{min}}{scale} \\
            zp = \left \lfloor \min(Q_{max}, \max(Q_{min}, zp\_min)) + 0.5 \right \rfloor
        \end{array}

    where :math:`Q_{max}` and :math:`Q_{min}` is decided by quant_dtype, for example, if quant_dtype=INT8,
    then :math:`Q_{max} = 127` and :math:`Q_{min} = -128`.

    The fake quant output is computed as:

    .. math::

        \begin{array}{ll} \\
            u_{min} = (Q_{min} - zp) * scale \\
            u_{max} = (Q_{max} - zp) * scale \\
            u_X = \left \lfloor \frac{\min(u_{max}, \max(u_{min}, X)) - u_{min}}{scale}
            + 0.5 \right \rfloor \\
            output = u_X * scale + u_{min}
        \end{array}

    The detail of the quantization mode `LEARNED_SCALE` is described as below:

    The fake quant output is computed as:

    .. math::

        \bar{X}=\left\{\begin{matrix}
        clip\left ( \frac{X}{maxq},0,1\right ) \qquad \quad if\quad neg\_trunc\\
        clip\left ( \frac{X}{maxq},-1,1\right )\qquad \ if\quad otherwise
        \end{matrix}\right. \\

        output=\frac{floor\left ( \bar{X}\ast Q_{max}+0.5 \right ) \ast scale }{Q_{max}}

    where X is the input tensor, and :math:`Q_{max}` (quant_max) is decided by quant_dtype and neg_trunc;
    for example, if quant_dtype=INT8 and neg_trunc works, :math:`Q_{max} = 256`, otherwise :math:`Q_{max} = 127`.

    The maxq is updated by training, and its gradient is calculated as follows:

    .. math::

        \frac{\partial \ output}{\partial \ maxq} = \left\{\begin{matrix}
        -\frac{X}{maxq}+\left \lfloor \frac{X}{maxq} \right \rceil \qquad if\quad bound_{lower}< \frac{X}{maxq}< 1\\
        -1 \qquad \quad \qquad \quad if\quad \frac{X}{maxq}\le bound_{lower}\\
        1 \qquad \quad \qquad \quad if\quad \frac{X}{maxq}\ge 1 \qquad \quad
        \end{matrix}\right. \\

        bound_{lower}=
        \left\{\begin{matrix}
        0\qquad \quad if\quad neg\_trunc\\
        -1\qquad if\quad otherwise
        \end{matrix}\right.

    Then minq is computed as:

    .. math::

        minq=\left\{\begin{matrix}
        0 \qquad \qquad \quad if\quad neg\_trunc\\
        -maxq\qquad if\quad otherwise
        \end{matrix}\right.

    When exporting, the scale and zero point zp is computed as:

    .. math::

        scale=\frac{maxq}{quant\_max} ,\quad zp=0

    zp is equal to 0 consistently, due to the LEARNED_SCALE's symmetric nature.

    Args:
        min_init (int, float, list): The initialized min value. Default: -6.
        max_init (int, float, list): The initialized max value. Default: 6.
        ema (bool): The exponential Moving Average algorithm updates min and max. Default: False.
        ema_decay (float): Exponential Moving Average algorithm parameter. Default: 0.999.
        per_channel (bool): Quantization granularity based on layer or on channel. Default: False.
        channel_axis (int): Quantization by channel axis. Default: 1.
        num_channels (int): Declares the min and max channel size. Default: 1.
        quant_dtype (QuantDtype): The datatype of quantization, supporting 4 and 8bits. Default: QuantDtype.INT8.
        symmetric (bool): Whether the quantization algorithm is symmetric or not. Default: False.
        narrow_range (bool): Whether the quantization algorithm uses narrow range or not. Default: False.
        quant_delay (int): Quantization delay parameters according to the global step. Default: 0.
        neg_trunc (bool): Whether the quantization algorithm uses negative truncation or not. Default: False.
        mode (str): Optional quantization mode, currently only `DEFAULT`(QAT) and `LEARNED_SCALE` are supported.
            Default: ("DEFAULT")

    Inputs:
        - **x** (Tensor) - The input of FakeQuantWithMinMaxObserver. The input dimension is preferably 2D or 4D.

    Outputs:
        Tensor, with the same type and shape as the `x`.

    Raises:
        TypeError: If `min_init` or `max_init` is not int, float or list.
        TypeError: If `quant_delay` is not an int.
        ValueError: If `quant_delay` is less than 0.
        ValueError: If `min_init` is not less than `max_init`.
        ValueError: If `mode` is neither `DEFAULT` nor `LEARNED_SCALE`.
        ValueError: If `mode` is `LEARNED_SCALE` and `symmetric` is not `True`.
        ValueError: If `mode` is `LEARNED_SCALE`, and `narrow_range` is not `True` unless when `neg_trunc` is `True`.

    Supported Platforms:
        ``Ascend`` ``GPU``

    Examples:
        >>> import mindspore
        >>> from mindspore import Tensor
        >>> fake_quant = nn.FakeQuantWithMinMaxObserver()
        >>> x = Tensor(np.array([[1, 2, 1], [-2, 0, -1]]), mindspore.float32)
        >>> result = fake_quant(x)
        >>> print(result)
        [[ 0.9882355  1.9764705  0.9882355]
         [-1.9764705  0.        -0.9882355]]
    """

    def __init__(self,
                 min_init=-6,
                 max_init=6,
                 ema=False,
                 ema_decay=0.999,
                 per_channel=False,
                 channel_axis=1,
                 num_channels=1,
                 quant_dtype=QuantDtype.INT8,
                 symmetric=False,
                 narrow_range=False,
                 quant_delay=0,
                 neg_trunc=False,
                 mode="DEFAULT"):
        """Initialize FakeQuantWithMinMaxObserver"""
        super(FakeQuantWithMinMaxObserver, self).__init__(quant_dtype=quant_dtype, per_channel=per_channel,
                                                          symmetric=symmetric, narrow_range=narrow_range,
                                                          num_channels=num_channels)
        Validator.check_value_type("min_init", min_init, [int, float, list], type(self).__name__)
        Validator.check_value_type("max_init", max_init, [int, float, list], type(self).__name__)
        Validator.check_non_negative_int(quant_delay, 'quant_delay', self.cls_name)
        self.min_init = min_init
        self.max_init = max_init
        self.quant_dtype = quant_dtype
        self.num_bits = quant_dtype.num_bits
        self.ema = ema
        self.ema_decay = ema_decay
        self.per_channel = per_channel
        self.num_channels = num_channels
        self.channel_axis = channel_axis
        self.quant_delay = quant_delay
        self.symmetric = symmetric
        self.narrow_range = narrow_range
        self.neg_trunc = neg_trunc
        self.mode = mode
        self.is_ascend = context.get_context('device_target') == "Ascend"
        self.Neg = P.Neg()

        min_array = self._get_init_array(self.min_init)
        max_array = self._get_init_array(self.max_init)
        if not np.greater(max_array, min_array).all():
            raise ValueError(f"For '{self.cls_name}', the 'max_init' should be greater than 'min_init', "
                             f"but got 'max_init': {max_init}, 'min_init': {min_init}.")
        if self.mode == "DEFAULT":
            self._default_init(min_array, max_array)
        elif self.mode == "LEARNED_SCALE":
            self._learned_scale_init(min_array, max_array)
        else:
            raise ValueError(f"For '{self.cls_name}', only `DEFAULT` and `LEARNED_SCALE` mode are valid, but got "
                             f"'mode': {self.mode}.")

    def reset(self, quant_dtype=QuantDtype.INT8, min_init=-6, max_init=6):
        r"""
        Reset the quant max parameter (eg. 256) and the initial value of the minq parameter and maxq parameter,
        this function is currently only valid for `LEARNED_SCALE` mode.

        Args:
            quant_dtype (QuantDtype): The datatype of quantization, supporting 4 and 8bits. Default: QuantDtype.INT8.
            min_init (int, float, list): The initialized min value. Default: -6.
            max_init (int, float, list): The initialized max value. Default: 6.
        """
        if self.mode == "LEARNED_SCALE":
            self.quant_dtype = quant_dtype
            self.num_bits = quant_dtype.num_bits
            self._calculate_quant_max()
            if self.neg_trunc:
                min_init = 0
            self.min_init = min_init
            self.max_init = max_init
            min_array = self._get_init_array(self.min_init)
            max_array = self._get_init_array(self.max_init)
            if not np.greater(max_array, min_array).all():
                raise ValueError(f"For '{self.cls_name}', the 'max_init' should be greater than 'min_init', "
                                 f"but got 'max_init': {max_init}, 'min_init': {min_init}.")

            self.minq.set_data(Tensor(min_array))
            self.maxq.set_data(Tensor(max_array))
            self.quant_max.set_data(Tensor(np.array([self._quant_max]).astype(np.float32)))
        else:
            raise ValueError(f"For '{self.cls_name}', only `LEARNED_SCALE` mode is valid, but got 'mode': "
                             f"{self.mode}.")

    def _default_init(self, min_array, max_array):
        """
        Initialization of `DEFAULT`(QAT) mode.
        """
        # init tensor min and max for fake quantized operation
        self.minq = Parameter(Tensor(min_array), name='quant_min', requires_grad=False)
        self.maxq = Parameter(Tensor(max_array), name='quant_max', requires_grad=False)

        # init fake quant relative op
        if self.per_channel:
            quant_fun = partial(Q.FakeQuantPerChannel, channel_axis=self.channel_axis)
            ema_fun = partial(Q.MinMaxUpdatePerChannel, channel_axis=self.channel_axis)
        else:
            quant_fun = Q.FakeQuantPerLayer
            ema_fun = Q.MinMaxUpdatePerLayer

        self.ema_update = ema_fun(ema=self.ema, ema_decay=self.ema_decay)
        if self.is_ascend:
            self.fake_quant_train = quant_fun(num_bits=self.quant_dtype.num_bits,
                                              symmetric=self.symmetric,
                                              narrow_range=self.narrow_range,
                                              quant_delay=self.quant_delay)
            self.fake_quant_infer = self.fake_quant_train
        else:
            quant_fun = partial(quant_fun,
                                ema=self.ema,
                                ema_decay=self.ema_decay,
                                num_bits=self.quant_dtype.num_bits,
                                symmetric=self.symmetric,
                                narrow_range=self.narrow_range,
                                quant_delay=self.quant_delay)
            self.fake_quant_train = quant_fun(training=True)
            self.fake_quant_infer = quant_fun(training=False)

    def _learned_scale_init(self, min_array, max_array):
        """
        Initialization of `LEARNED_SCALE` mode.
        """
        if not self.symmetric:
            raise ValueError(f"For '{self.cls_name}', the 'LEARNED_SCALE' mode only support 'symmetric' quant, "
                             f"but got 'symmetric': {self.symmetric}. Please set 'symmetric' to True.")
        if self.neg_trunc:
            min_array = self._get_init_array(0)
            if self.narrow_range:
                raise ValueError(f"For '{self.cls_name}', the 'LEARNED_SCALE' mode only support the combination of "
                                 f"'neg_trunc=True and narrow_range=False' config scenario, but got 'narrow_range': "
                                 f"{self.narrow_range}.")
        elif not self.narrow_range:
            raise ValueError(f"For '{self.cls_name}', the 'LEARNED_SCALE' mode only support 'narrow_range=True' "
                             f"config, except for 'neg_trunc=True' scenario. But got 'narrow_range': "
                             f"{self.narrow_range}.")

        self._calculate_quant_max()

        self.minq = Parameter(Tensor(min_array), name='minq')
        self.maxq = Parameter(Tensor(max_array), name='maxq')
        self.quant_max = Parameter(Tensor(np.array([self._quant_max]).astype(np.float32)),
                                   name="quant_max", requires_grad=False)

        # init fake quant relative op
        if self.per_channel:
            quant_fun = partial(Q.FakeLearnedScaleQuantPerChannel, channel_axis=self.channel_axis)
        else:
            quant_fun = Q.FakeLearnedScaleQuantPerLayer

        quant_fun = partial(quant_fun,
                            quant_delay=self.quant_delay,
                            neg_trunc=self.neg_trunc)
        self.fake_quant_train = quant_fun(training=True)
        self.fake_quant_infer = quant_fun(training=False)

    def _get_init_array(self, init_date):
        """
        Convert the initial value to array.
        """
        if isinstance(init_date, list) and self.per_channel and len(init_date) != self.num_channels:
            raise ValueError(f"For '{self.cls_name}', the length of 'min_init/max_init' list should be equal to "
                             f"'num_channels' for perchannel quant scenario, but got 'min_init/max_init': {init_date} "
                             f"and num_channels: {self.num_channels}.")
        if isinstance(init_date, list) and not self.per_channel and len(init_date) != 1:
            raise ValueError(f"For '{self.cls_name}', the length of the 'min_init/max_init' list should be 1 for "
                             f"perlayer quant scenario, but got {len(init_date)}.")

        if isinstance(init_date, list):
            min_max_array = np.array(init_date).astype(np.float32)
        elif self.per_channel and not isinstance(init_date, list):
            min_max_array = np.array([init_date] * self.num_channels).astype(np.float32)
        else:
            min_max_array = np.array([init_date]).astype(np.float32)
        return min_max_array

    def _calculate_quant_max(self):
        """
        The quantization range is calculated according to num_bits.
        """
        if not self.neg_trunc:
            self._quant_max = (1 << (self.num_bits - 1)) - 1
        else:
            self._quant_max = (1 << self.num_bits) - 1

    def extend_repr(self):
        """Display instance object as string."""
        s = 'quant_dtype={}, symmetric={}, narrow_range={}, ema={}({}), per_channel={}({}, {}), ' \
            'quant_delay={}, min_init={}, max_init={}'.format(self.quant_dtype, self.symmetric, self.narrow_range,
                                                              self.ema, self.ema_decay, self.per_channel,
                                                              self.channel_axis, self.num_channels, self.quant_delay,
                                                              self.min_init, self.max_init)
        return s

    def construct(self, x):
        if self.mode == "LEARNED_SCALE":
            if self.training:
                out = self.fake_quant_train(x, self.maxq, self.quant_max)
                if not self.neg_trunc:
                    self.minq = self.Neg(self.maxq)
            else:
                out = self.fake_quant_infer(x, self.maxq, self.quant_max)
        else:
            if self.training:
                min_up, max_up = self.ema_update(x, self.minq, self.maxq)
                self.minq = min_up
                self.maxq = max_up
                out = self.fake_quant_train(x, self.minq, self.maxq)
            else:
                out = self.fake_quant_infer(x, self.minq, self.maxq)
        return out
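

# A small worked example (not from the library) of the `DEFAULT` mode scale/zero-point
# formula in the FakeQuantWithMinMaxObserver docstring, using the default
# min_init=-6, max_init=6 with INT8 (Q_min=-128, Q_max=127) and symmetric=False:
#
#     scale  = (x_max - x_min) / (Q_max - Q_min) = 12 / 255 ≈ 0.047059
#     zp_min = Q_min - x_min / scale = -128 + 127.5 = -0.5
#     zp     = floor(min(Q_max, max(Q_min, zp_min)) + 0.5) = floor(0.0) = 0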


QuantConfig = namedtuple("QuantConfig", ['weight', 'activation'])

quant_config_default = QuantConfig(weight=FakeQuantWithMinMaxObserver.partial_init(),
                                   activation=FakeQuantWithMinMaxObserver.partial_init())
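

# A minimal sketch (illustrative only) of assembling a non-default QuantConfig through
# `partial_init`: here per-channel, symmetric weight observers are assumed, while the
# activation observer tracks min/max with EMA. The keyword values below are examples,
# not library-mandated settings.
#
#     custom_qconfig = QuantConfig(
#         weight=FakeQuantWithMinMaxObserver.partial_init(per_channel=True, symmetric=True, narrow_range=True),
#         activation=FakeQuantWithMinMaxObserver.partial_init(ema=True))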


class Conv2dBnFoldQuantOneConv(Cell):
    r"""
    2D convolution which use the convolution layer statistics once to calculate Batch Normalization
    operation folded construct.

    This part is a more detailed overview of Conv2d operation. For more details about Quantization,
    please refer to the implementation of class of `FakeQuantWithMinMaxObserver`,
    :class:`FakeQuantWithMinMaxObserver`.

    .. math::
        w_{q}=quant(\frac{w}{\sqrt{var_{G}+\epsilon}}*\gamma )

        b=\frac{-\mu _{G} }{\sqrt{var_{G}+\epsilon }}*\gamma +\beta

        y=w_{q}\times x+b

    where :math:`quant` is the continuous execution of quant and dequant, you can refer to the implementation of
    subclass of `FakeQuantWithMinMaxObserver`, :class:`mindspore.nn.FakeQuantWithMinMaxObserver`.
    `mu_{G}` and `var_{G}` represent the global mean and variance respectively.

    Args:
        in_channels (int): The number of input channel :math:`C_{in}`.
        out_channels (int): The number of output channel :math:`C_{out}`.
        kernel_size (Union[int, tuple[int]]): Specifies the height and width of the 2D convolution window.
        stride (Union[int, tuple[int]]): Specifies stride for all spatial dimensions with the same value. Default: 1.
        pad_mode (str): Specifies padding mode. The optional values are "same", "valid", "pad". Default: "same".
        padding (Union[int, tuple[int]]): Implicit paddings on both sides of the `x`. Default: 0.
        dilation (Union[int, tuple[int]]): Specifies the dilation rate to use for dilated convolution. Default: 1.
        group (int): Splits filter into groups, `in_channels` and `out_channels` must be
            divisible by the number of groups. Default: 1.
        eps (float): Parameters for Batch Normalization. Default: 1e-5.
        momentum (float): Parameters for Batch Normalization op. Default: 0.997.
        has_bias (bool): Specifies whether the layer uses a bias vector, which is temporarily invalid. Default: False.
        weight_init (Union[Tensor, str, Initializer, numbers.Number]): Initializer for the
            convolution kernel. Default: 'normal'.
        bias_init (Union[Tensor, str, Initializer, numbers.Number]): Initializer for the
            bias vector. Default: 'zeros'.
        beta_init (Union[Tensor, str, Initializer, numbers.Number]): Initializer for the
            beta vector. Default: 'zeros'.
        gamma_init (Union[Tensor, str, Initializer, numbers.Number]): Initializer for the
            gamma vector. Default: 'ones'.
        mean_init (Union[Tensor, str, Initializer, numbers.Number]): Initializer for the
            mean vector. Default: 'zeros'.
        var_init (Union[Tensor, str, Initializer, numbers.Number]): Initializer for the
            variance vector. Default: 'ones'.
        fake (bool): Whether Conv2dBnFoldQuant Cell adds FakeQuantWithMinMaxObserver. Default: True.
        quant_config (QuantConfig): Configures the types of quant observer and quant settings of weight and
            activation. Note that, QuantConfig is a special namedtuple, which is designed for quantization
            and can be generated by :func:`mindspore.compression.quant.create_quant_config` method.
            Default: QuantConfig with both items set to default :class:`FakeQuantWithMinMaxObserver`.
        quant_dtype (QuantDtype): Specifies the FakeQuant datatype. Default: QuantDtype.INT8.

    Inputs:
        - **x** (Tensor) - Tensor of shape :math:`(N, C_{in}, H_{in}, W_{in})`.

    Outputs:
        Tensor of shape :math:`(N, C_{out}, H_{out}, W_{out})`.

    Raises:
        TypeError: If `in_channels`, `out_channels` or `group` is not an int.
        TypeError: If `kernel_size`, `stride`, `padding` or `dilation` is neither an int nor a tuple.
        TypeError: If `has_bias` or `fake` is not a bool.
        TypeError: If `data_format` is not a string.
        ValueError: If `in_channels`, `out_channels`, `kernel_size`, `stride` or `dilation` is less than 1.
        ValueError: If `padding` is less than 0.
        ValueError: If `pad_mode` is not one of 'same', 'valid', 'pad'.

    Supported Platforms:
        ``Ascend`` ``GPU``

    Examples:
        >>> import mindspore
        >>> from mindspore.compression import quant
        >>> from mindspore import Tensor
        >>> qconfig = quant.create_quant_config()
        >>> conv2d_bnfold = nn.Conv2dBnFoldQuantOneConv(1, 1, kernel_size=(2, 2), stride=(1, 1), pad_mode="valid",
        ...                                             weight_init="ones", quant_config=qconfig)
        >>> x = Tensor(np.array([[[[1, 0, 3], [1, 4, 7], [2, 5, 2]]]]), mindspore.float32)
        >>> result = conv2d_bnfold(x)
        >>> print(result)
        [[[[ 5.9296875 13.8359375]
           [11.859375  17.78125  ]]]]
    """

    def __init__(self,
                 in_channels,
                 out_channels,
                 kernel_size,
                 stride=1,
                 pad_mode='same',
                 padding=0,
                 dilation=1,
                 group=1,
                 eps=1e-5,
                 momentum=0.997,
                 has_bias=False,
                 weight_init='normal',
                 bias_init='zeros',
                 beta_init='zeros',
                 gamma_init='ones',
                 mean_init='zeros',
                 var_init='ones',
                 fake=True,
                 quant_config=quant_config_default,
                 quant_dtype=QuantDtype.INT8):
        """Initialize Conv2dBnFoldQuant layer"""
        super(Conv2dBnFoldQuantOneConv, self).__init__()
        self.in_channels = Validator.check_positive_int(in_channels, "in_channels", self.cls_name)
        self.out_channels = Validator.check_positive_int(out_channels, "out_channels", self.cls_name)
        self.kernel_size = twice(kernel_size)
        self.stride = twice(stride)
        self.dilation = twice(dilation)
        for kernel_size_elem in self.kernel_size:
            Validator.check_positive_int(kernel_size_elem, 'kernel_size item', self.cls_name)
        for stride_elem in self.stride:
            Validator.check_positive_int(stride_elem, 'stride item', self.cls_name)
        for dilation_elem in self.dilation:
            Validator.check_positive_int(dilation_elem, 'dilation item', self.cls_name)
        if pad_mode not in ('valid', 'same', 'pad'):
            raise ValueError(f"For '{self.cls_name}', the 'pad_mode' should be one of values "
                             f"in ('valid', 'same', 'pad'), but got {pad_mode}.")
        self.pad_mode = pad_mode
        if isinstance(padding, int):
            Validator.check_non_negative_int(padding, 'padding', self.cls_name)
            self.padding = padding
        elif isinstance(padding, tuple):
            for pad in padding:
                Validator.check_non_negative_int(pad, 'padding item', self.cls_name)
            self.padding = padding
        else:
            raise TypeError(f"For '{self.cls_name}', the type of 'padding' must be int/tuple(int), but got "
                            f"{type(padding).__name__}!")
        self.group = Validator.check_positive_int(group, "group", self.cls_name)
        self.eps = eps
        self.momentum = 1 - momentum
        self.has_bias = has_bias
        self.fake = Validator.check_bool(fake, "fake", self.cls_name)
        self.quant_config = quant_config
        self.quant_dtype = quant_dtype
        data_format = 'NCHW'
        self.format = Validator.check_string(data_format, ['NCHW', 'NHWC'], 'format', self.cls_name)
        self._target = context.get_context("device_target")
        self.is_graph_mode = context.get_context("mode") == context.GRAPH_MODE
        self.is_ge_backend = False
        if context.get_context("enable_ge"):
            self.is_ge_backend = True
        self.enable_default_train = self.is_graph_mode and \
                                    (self.is_ge_backend or self._target == "Ascend")

        # initialize convolution op and Parameter
        self.conv = P.Conv2D(out_channel=out_channels,
                             kernel_size=self.kernel_size,
                             pad_mode=pad_mode,
                             pad=padding,
                             stride=self.stride,
                             dilation=self.dilation,
                             group=group)
        weight_shape = [out_channels, in_channels // group, *self.kernel_size]
        channel_axis = 0
        self.channel_axis = channel_axis
        self.weight = Parameter(initializer(weight_init, weight_shape), name='weight')
        self.bias_add = P.BiasAdd()
        self.bias = None
        if Validator.check_bool(has_bias, "has_bias", self.cls_name):
            self.bias = Parameter(initializer(bias_init, [out_channels]), name='bias')

        # initialize BatchNorm Parameter
        self.gamma = Parameter(initializer(gamma_init, [out_channels]), name='gamma')
        self.beta = Parameter(initializer(beta_init, [out_channels]), name='beta')
        self.moving_mean = Parameter(initializer(mean_init, [out_channels]), name='moving_mean', requires_grad=False)
        self.moving_variance = Parameter(initializer(var_init, [out_channels]), name='moving_variance',
                                         requires_grad=False)

        # initialize fake ops
        self.fake_quant_weight = quant_config.weight(ema=False,
                                                     channel_axis=channel_axis,
                                                     num_channels=out_channels,
                                                     quant_dtype=quant_dtype)
        self.freeze_bn = False
        if self.fake_quant_weight.mode == "LEARNED_SCALE":
            self.freeze_bn = True
        self.bn_train = P.BatchNorm(is_training=True, epsilon=self.eps,
                                    momentum=self.momentum, data_format=self.format)
        self.bn_infer = P.BatchNorm(is_training=False, epsilon=self.eps, data_format=self.format)
        self.sub_mean = P.Sub()
        self.sub_var = P.Sub()
        self.mul_mean = P.Mul()
        self.mul_var = P.Mul()
        self.assign_sub_mean = P.AssignSub()
        self.assign_sub_var = P.AssignSub()
        self.reshape = P.Reshape()

    def extend_repr(self):
        """Display instance object as string."""
        s = 'in_channels={}, out_channels={}, kernel_size={}, stride={}, ' \
            'pad_mode={}, padding={}, dilation={}, group={}, ' \
            'fake={}, momentum={}, quant_delay={}'.format(self.in_channels, self.out_channels,
                                                          self.kernel_size, self.stride,
                                                          self.pad_mode, self.padding, self.dilation,
                                                          self.group,
                                                          self.fake, self.momentum,
                                                          self.fake_quant_weight.quant_delay)
        return s

    def construct(self, x):
        running_std = P.Sqrt()(P.Add()(self.moving_variance, self.eps))
        scale_factor = self.gamma / running_std
        if self.channel_axis:
            scale_factor = self.reshape(scale_factor, (1, -1, 1, 1))
        else:
            scale_factor = self.reshape(scale_factor, (-1, 1, 1, 1))
        weight = self.weight * scale_factor
        if self.fake:
            weight = self.fake_quant_weight(weight)
        conv = self.conv(x, weight)

        if self.freeze_bn:
            return conv + self.reshape((self.beta - self.gamma * self.moving_mean / running_std), (1, -1, 1, 1))
        scale_factor = self.reshape(scale_factor, (1, -1, 1, 1))
        if self.enable_default_train:
            scale_factor = P.Reciprocal()(scale_factor)
            conv_orig = conv * scale_factor
        else:
            conv_orig = conv / scale_factor
        if self.training:
            return self.bn_train(conv_orig,
                                 self.gamma,
                                 self.beta,
                                 self.moving_mean,
                                 self.moving_variance)[0]

        return self.bn_infer(conv_orig,
                             self.gamma,
                             self.beta,
                             self.moving_mean,
                             self.moving_variance)[0]
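

# A small numpy sketch (illustrative only) of the weight folding done at the start of
# Conv2dBnFoldQuantOneConv.construct: the per-output-channel factor
# gamma / sqrt(moving_variance + eps) is multiplied into the convolution weight, and it
# is this folded weight that gets fake-quantized before the single convolution.
#
#     running_std = np.sqrt(moving_variance + eps)                 # moving_variance: (C_out,)
#     scale_factor = (gamma / running_std).reshape(-1, 1, 1, 1)    # channel_axis == 0
#     folded_weight = weight * scale_factor                        # weight: (C_out, C_in/group, kH, kW)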
  708. class Conv2dBnFoldQuant(Cell):
  709. r"""
  710. 2D convolution with Batch Normalization operation folded construct.
  711. This part is a more detailed overview of Conv2d operation. For more details about Quantization,
  712. please refer to the implementation of class of `FakeQuantWithMinMaxObserver`,
  713. :class:`FakeQuantWithMinMaxObserver`.
  714. .. math::
  715. y = x\times w+ b
  716. w_{q}=quant(\frac{w}{\sqrt{Var[y]+\epsilon}}*\gamma )
  717. y_{out}= w_{q}\times x+\frac{b-E[y]}{\sqrt{Var[y]+\epsilon}}*\gamma +\beta
  718. where :math:`quant` is the continuous execution of quant and dequant. Two convolution
  719. and Batch Normalization operation are used here, the purpose of the first convolution and Batch Normalization
  720. is to count the mean `E[y]` and variance `Var[y]` of current batch output for quantization.
  721. Args:
  722. in_channels (int): The number of input channel :math:`C_{in}`.
  723. out_channels (int): The number of output channel :math:`C_{out}`.
  724. kernel_size (Union[int, tuple[int]]): Specifies the height and width of the 2D convolution window.
  725. stride (Union[int, tuple[int]]): Specifies stride for all spatial dimensions with the same value. Default: 1.
  726. pad_mode (str): Specifies padding mode. The optional values are "same", "valid", "pad". Default: "same".
  727. padding (Union[int, tuple[int]]): Implicit paddings on both sides of the `x`. Default: 0.
  728. dilation (Union[int, tuple[int]]): Specifies the dilation rate to use for dilated convolution. Default: 1.
  729. group (int): Splits filter into groups, `in_ channels` and `out_channels` must be
  730. divisible by the number of groups. Default: 1.
  731. eps (float): Parameters for Batch Normalization. Default: 1e-5.
  732. momentum (float): Parameters for Batch Normalization op. Default: 0.997.
  733. has_bias (bool): Specifies whether the layer uses a bias vector. Default: False.
  734. weight_init (Union[Tensor, str, Initializer, numbers.Number]): Initializer for the
  735. convolution kernel. Default: 'normal'.
  736. bias_init (Union[Tensor, str, Initializer, numbers.Number]): Initializer for the
  737. bias vector. Default: 'zeros'.
  738. beta_init (Union[Tensor, str, Initializer, numbers.Number]): Initializer for the
  739. beta vector. Default: 'zeros'.
  740. gamma_init (Union[Tensor, str, Initializer, numbers.Number]): Initializer for the
  741. gamma vector. Default: 'ones'.
  742. mean_init (Union[Tensor, str, Initializer, numbers.Number]): Initializer for the
  743. mean vector. Default: 'zeros'.
  744. var_init (Union[Tensor, str, Initializer, numbers.Number]): Initializer for the
  745. variance vector. Default: 'ones'.
  746. fake (bool): Whether Conv2dBnFoldQuant Cell adds FakeQuantWithMinMaxObserver. Default: True.
  747. quant_config (QuantConfig): Configures the types of quant observer and quant settings of weight and
  748. activation. Note that, QuantConfig is a special namedtuple, which is designed for quantization
  749. and can be generated by :func:`mindspore.compression.quant.create_quant_config` method.
  750. Default: QuantConfig with both items set to default :class:`FakeQuantWithMinMaxObserver`.
  751. quant_dtype (QuantDtype): Specifies the FakeQuant datatype. Default: QuantDtype.INT8.
  752. freeze_bn (int): The quantization freeze Batch Normalization op is according to the global step.
  753. Default: 100000.
  754. Inputs:
  755. - **x** (Tensor) - Tensor of shape :math:`(N, C_{in}, H_{in}, W_{in})`.
  756. Outputs:
  757. Tensor of shape :math:`(N, C_{out}, H_{out}, W_{out})`.
  758. Raises:
  759. TypeError: If `in_channels`, `out_channels` or `group` is not an int.
  760. TypeError: If `kernel_size`, `stride`, `padding` or `dilation` is neither an int nor a tuple.
  761. TypeError: If `has_bias` or `fake` is not a bool.
  762. ValueError: If `in_channels`, `out_channels`, `kernel_size`, `stride` or `dilation` is less than 1.
  763. ValueError: If `padding` is less than 0.
  764. ValueError: If `pad_mode` is not one of 'same', 'valid', 'pad'.
  765. ValueError: If `device_target` in context is neither `Ascend` nor `GPU`.
  766. Supported Platforms:
  767. ``Ascend`` ``GPU``
  768. Examples:
  769. >>> import mindspore
  770. >>> from mindspore.compression import quant
  771. >>> from mindspore import Tensor
  772. >>> qconfig = quant.create_quant_config()
  773. >>> conv2d_bnfold = nn.Conv2dBnFoldQuant(1, 1, kernel_size=(2, 2), stride=(1, 1), pad_mode="valid",
  774. ... weight_init="ones", quant_config=qconfig)
  775. >>> x = Tensor(np.array([[[[1, 0, 3], [1, 4, 7], [2, 5, 2]]]]), mindspore.float32)
  776. >>> result = conv2d_bnfold(x)
  777. >>> print(result)
  778. [[[[5.9296875 13.8359375]
  779. [11.859375 17.78125]]]]
  780. """
  781. def __init__(self,
  782. in_channels,
  783. out_channels,
  784. kernel_size,
  785. stride=1,
  786. pad_mode='same',
  787. padding=0,
  788. dilation=1,
  789. group=1,
  790. eps=1e-5,
  791. momentum=0.997,
  792. has_bias=False,
  793. weight_init='normal',
  794. bias_init='zeros',
  795. beta_init='zeros',
  796. gamma_init='ones',
  797. mean_init='zeros',
  798. var_init='ones',
  799. fake=True,
  800. quant_config=quant_config_default,
  801. quant_dtype=QuantDtype.INT8,
  802. freeze_bn=100000):
  803. """Initialize Conv2dBnFoldQuant layer"""
  804. super(Conv2dBnFoldQuant, self).__init__()
  805. self.in_channels = Validator.check_positive_int(in_channels, "in_channels", self.cls_name)
  806. self.out_channels = Validator.check_positive_int(out_channels, "out_channels", self.cls_name)
  807. self.kernel_size = twice(kernel_size)
  808. self.stride = twice(stride)
  809. self.dilation = twice(dilation)
  810. for kernel_size_elem in self.kernel_size:
  811. Validator.check_positive_int(kernel_size_elem, 'kernel_size item', self.cls_name)
  812. for stride_elem in self.stride:
  813. Validator.check_positive_int(stride_elem, 'stride item', self.cls_name)
  814. for dilation_elem in self.dilation:
  815. Validator.check_positive_int(dilation_elem, 'dilation item', self.cls_name)
  816. if pad_mode not in ('valid', 'same', 'pad'):
  817. raise ValueError(f"For '{self.cls_name}', the 'pad_mode' should be one of values in "
  818. f"('valid', 'same', 'pad'), but got {pad_mode}.")
  819. self.pad_mode = pad_mode
  820. if isinstance(padding, int):
  821. Validator.check_non_negative_int(padding, 'padding', self.cls_name)
  822. self.padding = padding
  823. elif isinstance(padding, tuple):
  824. for pad in padding:
  825. Validator.check_non_negative_int(pad, 'padding item', self.cls_name)
  826. self.padding = padding
  827. else:
  828. raise TypeError(f"For '{self.cls_name}', the type of 'padding' must be int/tuple(int), "
  829. f"but got {type(padding).__name__}!")
  830. self.group = Validator.check_positive_int(group, "group", self.cls_name)
  831. self.eps = eps
        self.momentum = momentum
        self.has_bias = has_bias
        self.freeze_bn = freeze_bn
        self.fake = Validator.check_bool(fake, "fake", self.cls_name)
        self.quant_config = quant_config
        self.quant_dtype = quant_dtype
        self.is_gpu = context.get_context('device_target') == "GPU"

        # initialize convolution op and Parameter
        self.conv = P.Conv2D(out_channel=out_channels,
                             kernel_size=self.kernel_size,
                             pad_mode=pad_mode,
                             pad=padding,
                             stride=self.stride,
                             dilation=self.dilation,
                             group=group)
        weight_shape = [out_channels, in_channels // group, *self.kernel_size]
        channel_axis = 0
        self.weight = Parameter(initializer(weight_init, weight_shape), name='weight')
        self.bias_add = P.BiasAdd()
        self.bias = None
        if Validator.check_bool(has_bias, "has_bias", self.cls_name):
            self.bias = Parameter(initializer(bias_init, [out_channels]), name='bias')

        # initialize BatchNorm Parameter
        self.gamma = Parameter(initializer(gamma_init, [out_channels]), name='gamma')
        self.beta = Parameter(initializer(beta_init, [out_channels]), name='beta')
        self.moving_mean = Parameter(initializer(mean_init, [out_channels]), name='moving_mean', requires_grad=False)
        self.moving_variance = Parameter(initializer(var_init, [out_channels]), name='moving_variance',
                                         requires_grad=False)

        # initialize fake ops
        self.fake_quant_weight = quant_config.weight(ema=False,
                                                     channel_axis=channel_axis,
                                                     num_channels=out_channels,
                                                     quant_dtype=quant_dtype)
        self.batchnorm_fold = BatchNormFoldCell(epsilon=eps, momentum=momentum, freeze_bn=freeze_bn)
        self.correct_mul = Q.CorrectionMul(channel_axis)
        if context.get_context('device_target') == "Ascend":
            self.batchnorm_fold2_train = Q.BatchNormFold2D(freeze_bn=freeze_bn)
            self.batchnorm_fold2_infer = Q.BatchNormFold2D(freeze_bn=0)
        elif context.get_context('device_target') == "GPU":
            self.batchnorm_fold2_train = Q.BatchNormFold2(freeze_bn=freeze_bn)
            self.batchnorm_fold2_infer = Q.BatchNormFold2(freeze_bn=0)
        else:
            raise ValueError(f"For '{self.cls_name}', only the 'Ascend' and 'GPU' platforms"
                             f" are supported, but got {context.get_context('device_target')}.")
        self.step = Parameter(initializer('normal', [1], dtype=mstype.int32), name='step', requires_grad=False)
        self.one = Tensor(1, mstype.int32)
        self.assignadd = P.AssignAdd()

    def extend_repr(self):
        """Display instance object as string."""
        s = 'in_channels={}, out_channels={}, kernel_size={}, stride={}, ' \
            'pad_mode={}, padding={}, dilation={}, group={}, ' \
            'fake={}, freeze_bn={}, momentum={}, quant_delay={}'.format(self.in_channels, self.out_channels,
                                                                        self.kernel_size, self.stride,
                                                                        self.pad_mode, self.padding, self.dilation,
                                                                        self.group,
                                                                        self.fake, self.freeze_bn, self.momentum,
                                                                        self.fake_quant_weight.quant_delay)
        return s

    def construct(self, x):
        out_conv = self.conv(x, self.weight)
        if self.has_bias:
            out_conv = self.bias_add(out_conv, self.bias)
        # BN fold1
        batch_mean, batch_std, running_mean, running_std = self.batchnorm_fold(out_conv,
                                                                               self.moving_mean,
                                                                               self.moving_variance,
                                                                               self.step)
        # fake weight
        weight = self.correct_mul(self.weight, self.gamma, running_std)
        if self.fake:
            weight = self.fake_quant_weight(weight)
        out = self.conv(x, weight)
        if self.has_bias:
            out = self.bias_add(out, self.bias)
        # BN fold2
        if self.is_gpu:
            if self.training:
                out = self.batchnorm_fold2_train(out, self.beta, self.gamma,
                                                 batch_std, batch_mean, running_std, running_mean, self.step)
                self.assignadd(self.step, self.one)
            else:
                out = self.batchnorm_fold2_infer(out, self.beta, self.gamma,
                                                 batch_std, batch_mean, running_std, running_mean, self.step)
        else:
            if self.training:
                out = self.batchnorm_fold2_train(out, self.beta, self.gamma, batch_std, batch_mean, running_std)
                self.assignadd(self.step, self.one)
            else:
                out = self.batchnorm_fold2_infer(out, self.beta, self.gamma, running_std, running_mean, running_std)
        return out
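
# Note: the construct above folds batch-norm statistics into the convolution so that training
# simulates the quantized, BN-folded graph used at inference time. A minimal NumPy sketch of the
# folding idea (illustrative only and not part of this module; the real BatchNormFold/CorrectionMul
# kernels also track running statistics, the `step` counter and the `freeze_bn` schedule):
#
#     import numpy as np
#
#     def fold_bn_into_conv(weight, gamma, beta, mean, std):
#         """Fold per-channel BN scale/shift into an OIHW conv weight (conv bias assumed zero)."""
#         w_fold = weight * (gamma / std).reshape(-1, 1, 1, 1)
#         b_fold = beta - gamma * mean / std
#         return w_fold, b_fold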


class Conv2dBnWithoutFoldQuant(Cell):
    r"""
    2D convolution with batch normalization (without folding) and fake quantized weights.

    This part is a more detailed overview of the Conv2d operation. For more details about quantization,
    please refer to the implementation of the class `FakeQuantWithMinMaxObserver`,
    :class:`mindspore.nn.FakeQuantWithMinMaxObserver`.

    .. math::
        y = x \times quant(w) + b

        y_{bn} = \frac{y - E[y]}{\sqrt{Var[y] + \epsilon}} \times \gamma + \beta

    where :math:`quant` is the continuous execution of quant and dequant. You can refer to the implementation of
    the class `FakeQuantWithMinMaxObserver`, :class:`mindspore.nn.FakeQuantWithMinMaxObserver`.

    Args:
        in_channels (int): The number of input channels :math:`C_{in}`.
        out_channels (int): The number of output channels :math:`C_{out}`.
        kernel_size (Union[int, tuple[int]]): Specifies the height and width of the 2D convolution window.
        stride (Union[int, tuple[int]]): Specifies stride for all spatial dimensions with the same value. Default: 1.
        pad_mode (str): Specifies padding mode. The optional values are "same", "valid", "pad". Default: "same".
        padding (Union[int, tuple[int]]): Implicit paddings on both sides of the `x`. Default: 0.
        dilation (Union[int, tuple[int]]): Specifies the dilation rate to use for dilated convolution. Default: 1.
        group (int): Splits filter into groups, `in_channels` and `out_channels` must be
            divisible by the number of groups. Default: 1.
        has_bias (bool): Specifies whether the layer uses a bias vector. Default: False.
        eps (float): Parameter for Batch Normalization. Default: 1e-5.
        momentum (float): Parameter for the Batch Normalization op. Default: 0.997.
        weight_init (Union[Tensor, str, Initializer, numbers.Number]): Initializer for the convolution kernel.
            Default: 'normal'.
        bias_init (Union[Tensor, str, Initializer, numbers.Number]): Initializer for the bias vector. Default: 'zeros'.
        quant_config (QuantConfig): Configures the types of quant observer and quant settings of weight and
            activation. Note that, QuantConfig is a special namedtuple, which is designed for quantization
            and can be generated by :func:`mindspore.compression.quant.create_quant_config` method.
            Default: QuantConfig with both items set to default :class:`FakeQuantWithMinMaxObserver`.
        quant_dtype (QuantDtype): Specifies the FakeQuant datatype. Default: QuantDtype.INT8.

    Inputs:
        - **x** (Tensor) - Tensor of shape :math:`(N, C_{in}, H_{in}, W_{in})`.

    Outputs:
        Tensor of shape :math:`(N, C_{out}, H_{out}, W_{out})`.

    Supported Platforms:
        ``Ascend`` ``GPU``

    Raises:
        TypeError: If `in_channels`, `out_channels` or `group` is not an int.
        TypeError: If `kernel_size`, `stride`, `padding` or `dilation` is neither an int nor a tuple.
        TypeError: If `has_bias` is not a bool.
        ValueError: If `in_channels`, `out_channels`, `kernel_size`, `stride` or `dilation` is less than 1.
        ValueError: If `padding` is less than 0.
        ValueError: If `pad_mode` is not one of 'same', 'valid', 'pad'.

    Examples:
        >>> import numpy as np
        >>> import mindspore
        >>> from mindspore import nn, Tensor
        >>> from mindspore.compression import quant
        >>> qconfig = quant.create_quant_config()
        >>> conv2d_no_bnfold = nn.Conv2dBnWithoutFoldQuant(1, 1, kernel_size=(2, 2), stride=(1, 1), pad_mode="valid",
        ...                                                weight_init='ones', quant_config=qconfig)
        >>> x = Tensor(np.array([[[[1, 0, 3], [1, 4, 7], [2, 5, 2]]]]), mindspore.float32)
        >>> result = conv2d_no_bnfold(x)
        >>> print(result)
        [[[[5.929658 13.835868]
           [11.859316 17.78116]]]]
    """

    def __init__(self,
                 in_channels,
                 out_channels,
                 kernel_size,
                 stride=1,
                 pad_mode='same',
                 padding=0,
                 dilation=1,
                 group=1,
                 has_bias=False,
                 eps=1e-5,
                 momentum=0.997,
                 weight_init='normal',
                 bias_init='zeros',
                 quant_config=quant_config_default,
                 quant_dtype=QuantDtype.INT8):
        """Initialize Conv2dBnWithoutFoldQuant."""
        super(Conv2dBnWithoutFoldQuant, self).__init__()
        self.in_channels = Validator.check_positive_int(in_channels, "in_channels", self.cls_name)
        self.out_channels = Validator.check_positive_int(out_channels, "out_channels", self.cls_name)
        self.has_bias = has_bias
        self.kernel_size = twice(kernel_size)
        self.stride = twice(stride)
        self.dilation = twice(dilation)
        for kernel_size_elem in self.kernel_size:
            Validator.check_positive_int(kernel_size_elem, 'kernel_size item', self.cls_name)
        for stride_elem in self.stride:
            Validator.check_positive_int(stride_elem, 'stride item', self.cls_name)
        for dilation_elem in self.dilation:
            Validator.check_positive_int(dilation_elem, 'dilation item', self.cls_name)
        if pad_mode not in ('valid', 'same', 'pad'):
            raise ValueError(f"For '{self.cls_name}', the 'pad_mode' should be one of values in "
                             f"('valid', 'same', 'pad'), but got {pad_mode}.")
        self.pad_mode = pad_mode
        if isinstance(padding, int):
            Validator.check_non_negative_int(padding, 'padding', self.cls_name)
            self.padding = padding
        elif isinstance(padding, tuple):
            for pad in padding:
                Validator.check_non_negative_int(pad, 'padding item', self.cls_name)
            self.padding = padding
        else:
            raise TypeError(f"For '{self.cls_name}', the type of 'padding' must be int/tuple(int), "
                            f"but got {type(padding).__name__}!")
        self.group = Validator.check_positive_int(group, "group", self.cls_name)
        self.bias_add = P.BiasAdd()
        if Validator.check_bool(has_bias, "has_bias", self.cls_name):
            self.bias = Parameter(initializer(bias_init, [out_channels]), name='bias')
        else:
            self.bias = None
        # initialize convolution op and Parameter
        self.conv = P.Conv2D(out_channel=self.out_channels,
                             kernel_size=self.kernel_size,
                             mode=1,
                             pad_mode=self.pad_mode,
                             pad=self.padding,
                             stride=self.stride,
                             dilation=self.dilation,
                             group=self.group)
        weight_shape = [out_channels, in_channels // group, *self.kernel_size]
        channel_axis = 0
        self.weight = Parameter(initializer(weight_init, weight_shape), name='weight')
        self.fake_quant_weight = quant_config.weight(ema=False,
                                                     channel_axis=channel_axis,
                                                     num_channels=out_channels,
                                                     quant_dtype=quant_dtype)
        self.batchnorm = BatchNorm2d(out_channels, eps=eps, momentum=momentum)

    def construct(self, x):
        weight = self.fake_quant_weight(self.weight)
        out = self.conv(x, weight)
        if self.has_bias:
            out = self.bias_add(out, self.bias)
        out = self.batchnorm(out)
        return out

    def extend_repr(self):
        """Display instance object as string."""
        s = 'in_channels={}, out_channels={}, kernel_size={}, stride={}, ' \
            'pad_mode={}, padding={}, dilation={}, group={}, ' \
            'has_bias={}, quant_delay={}'.format(self.in_channels, self.out_channels, self.kernel_size, self.stride,
                                                 self.pad_mode, self.padding, self.dilation, self.group,
                                                 self.has_bias, self.fake_quant_weight.quant_delay)
        return s
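
# Usage sketch (illustrative, not part of the library): this cell keeps BatchNorm2d as a separate
# operation after the fake-quantized convolution, whereas Conv2dBnFoldQuant above simulates
# inference-time BN folding during training. Both are normally produced by the QAT conversion
# tooling rather than written by hand:
#
#     from mindspore import nn
#     from mindspore.compression import quant
#
#     qconfig = quant.create_quant_config()
#     folded = nn.Conv2dBnFoldQuant(3, 16, kernel_size=3, quant_config=qconfig)
#     unfolded = nn.Conv2dBnWithoutFoldQuant(3, 16, kernel_size=3, quant_config=qconfig)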


class Conv2dQuant(Cell):
    r"""
    2D convolution with fake quantized operation layer.

    This part is a more detailed overview of the Conv2d operation. For more details about quantization,
    please refer to the implementation of the class `FakeQuantWithMinMaxObserver`,
    :class:`mindspore.nn.FakeQuantWithMinMaxObserver`.

    Args:
        in_channels (int): The number of input channels :math:`C_{in}`.
        out_channels (int): The number of output channels :math:`C_{out}`.
        kernel_size (Union[int, tuple[int]]): Specifies the height and width of the 2D convolution window.
        stride (Union[int, tuple[int]]): Specifies stride for all spatial dimensions with the same value. Default: 1.
        pad_mode (str): Specifies padding mode. The optional values are "same", "valid", "pad". Default: "same".
        padding (Union[int, tuple[int]]): Implicit paddings on both sides of the `x`. Default: 0.
        dilation (Union[int, tuple[int]]): Specifies the dilation rate to use for dilated convolution. Default: 1.
        group (int): Splits filter into groups, `in_channels` and `out_channels` must be
            divisible by the number of groups. Default: 1.
        has_bias (bool): Specifies whether the layer uses a bias vector. Default: False.
        weight_init (Union[Tensor, str, Initializer, numbers.Number]): Initializer for the convolution kernel.
            Default: 'normal'.
        bias_init (Union[Tensor, str, Initializer, numbers.Number]): Initializer for the bias vector. Default: 'zeros'.
        quant_config (QuantConfig): Configures the types of quant observer and quant settings of weight and
            activation. Note that, QuantConfig is a special namedtuple, which is designed for quantization
            and can be generated by :func:`mindspore.compression.quant.create_quant_config` method.
            Default: QuantConfig with both items set to default :class:`FakeQuantWithMinMaxObserver`.
        quant_dtype (QuantDtype): Specifies the FakeQuant datatype. Default: QuantDtype.INT8.

    Inputs:
        - **x** (Tensor) - Tensor of shape :math:`(N, C_{in}, H_{in}, W_{in})`.
          The input dimension is preferably 2D or 4D.

    Outputs:
        Tensor of shape :math:`(N, C_{out}, H_{out}, W_{out})`.

    Raises:
        TypeError: If `in_channels`, `out_channels` or `group` is not an int.
        TypeError: If `kernel_size`, `stride`, `padding` or `dilation` is neither an int nor a tuple.
        TypeError: If `has_bias` is not a bool.
        ValueError: If `in_channels`, `out_channels`, `kernel_size`, `stride` or `dilation` is less than 1.
        ValueError: If `padding` is less than 0.
        ValueError: If `pad_mode` is not one of 'same', 'valid', 'pad'.

    Supported Platforms:
        ``Ascend`` ``GPU``

    Examples:
        >>> import numpy as np
        >>> import mindspore
        >>> from mindspore import nn, Tensor
        >>> from mindspore.compression import quant
        >>> qconfig = quant.create_quant_config()
        >>> conv2d_quant = nn.Conv2dQuant(1, 1, kernel_size=(2, 2), stride=(1, 1), pad_mode="valid",
        ...                               weight_init='ones', quant_config=qconfig)
        >>> x = Tensor(np.array([[[[1, 0, 3], [1, 4, 7], [2, 5, 2]]]]), mindspore.float32)
        >>> result = conv2d_quant(x)
        >>> print(result)
        [[[[5.9296875 13.8359375]
           [11.859375 17.78125]]]]
    """

    def __init__(self,
                 in_channels,
                 out_channels,
                 kernel_size,
                 stride=1,
                 pad_mode='same',
                 padding=0,
                 dilation=1,
                 group=1,
                 has_bias=False,
                 weight_init='normal',
                 bias_init='zeros',
                 quant_config=quant_config_default,
                 quant_dtype=QuantDtype.INT8):
        """Initialize Conv2dQuant."""
        super(Conv2dQuant, self).__init__()
        self.in_channels = Validator.check_positive_int(in_channels, "in_channels", self.cls_name)
        self.out_channels = Validator.check_positive_int(out_channels, "out_channels", self.cls_name)
        self.has_bias = has_bias
        self.kernel_size = twice(kernel_size)
        self.stride = twice(stride)
        self.dilation = twice(dilation)
        for kernel_size_elem in self.kernel_size:
            Validator.check_positive_int(kernel_size_elem, 'kernel_size item', self.cls_name)
        for stride_elem in self.stride:
            Validator.check_positive_int(stride_elem, 'stride item', self.cls_name)
        for dilation_elem in self.dilation:
            Validator.check_positive_int(dilation_elem, 'dilation item', self.cls_name)
        if pad_mode not in ('valid', 'same', 'pad'):
            raise ValueError(f"For '{self.cls_name}', the 'pad_mode' should be one of values "
                             f"in ('valid', 'same', 'pad'), but got {pad_mode}.")
        self.pad_mode = pad_mode
        if isinstance(padding, int):
            Validator.check_non_negative_int(padding, 'padding', self.cls_name)
            self.padding = padding
        elif isinstance(padding, tuple):
            for pad in padding:
                Validator.check_non_negative_int(pad, 'padding item', self.cls_name)
            self.padding = padding
        else:
            raise TypeError(f"For '{self.cls_name}', the type of 'padding' must be int/tuple(int), "
                            f"but got {type(padding).__name__}!")
        self.group = Validator.check_positive_int(group, "group", self.cls_name)
        weight_shape = [out_channels, in_channels // group, *self.kernel_size]
        self.weight = Parameter(initializer(weight_init, weight_shape), name='weight')
        self.bias_add = P.BiasAdd()
        if Validator.check_bool(has_bias, "has_bias", self.cls_name):
            self.bias = Parameter(initializer(bias_init, [out_channels]), name='bias')
        else:
            self.bias = None
        self.conv = P.Conv2D(out_channel=self.out_channels,
                             kernel_size=self.kernel_size,
                             mode=1,
                             pad_mode=self.pad_mode,
                             pad=self.padding,
                             stride=self.stride,
                             dilation=self.dilation,
                             group=self.group)
        channel_axis = 0
        self.fake_quant_weight = quant_config.weight(ema=False,
                                                     channel_axis=channel_axis,
                                                     num_channels=out_channels,
                                                     quant_dtype=quant_dtype)

    def construct(self, x):
        weight = self.fake_quant_weight(self.weight)
        out = self.conv(x, weight)
        if self.has_bias:
            return self.bias_add(out, self.bias)
        return out

    def extend_repr(self):
        """Display instance object as string."""
        s = 'in_channels={}, out_channels={}, kernel_size={}, stride={}, ' \
            'pad_mode={}, padding={}, dilation={}, group={}, ' \
            'has_bias={}, quant_delay={}'.format(self.in_channels, self.out_channels, self.kernel_size, self.stride,
                                                 self.pad_mode, self.padding, self.dilation, self.group,
                                                 self.has_bias, self.fake_quant_weight.quant_delay)
        return s
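
# The weight fake-quantizer used above is created by `quant_config.weight` with `channel_axis=0`,
# i.e. one scale per output channel. A rough NumPy sketch of symmetric per-channel
# quantize-dequantize (illustrative only; the actual FakeQuantWithMinMaxObserver learns min/max
# and supports asymmetric and narrow-range modes):
#
#     import numpy as np
#
#     def fake_quant_per_channel(w, num_bits=8):
#         """Quantize-dequantize each output channel of an OIHW weight tensor."""
#         qmax = 2 ** (num_bits - 1) - 1  # 127 for int8
#         scale = np.abs(w).reshape(w.shape[0], -1).max(axis=1) / qmax
#         scale = np.maximum(scale, 1e-12).reshape(-1, 1, 1, 1)
#         return np.clip(np.round(w / scale), -qmax, qmax) * scale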


class DenseQuant(Cell):
    r"""
    The fully connected layer with fake quantized operation.

    This part is a more detailed overview of the Dense operation. For more details about quantization,
    please refer to the implementation of the class `FakeQuantWithMinMaxObserver`,
    :class:`mindspore.nn.FakeQuantWithMinMaxObserver`.

    Args:
        in_channels (int): The dimension of the input space.
        out_channels (int): The dimension of the output space.
        weight_init (Union[Tensor, str, Initializer, numbers.Number]): The trainable weight_init parameter. The dtype
            is the same as `x`. The values of str refer to the function `initializer`. Default: 'normal'.
        bias_init (Union[Tensor, str, Initializer, numbers.Number]): The trainable bias_init parameter. The dtype is
            the same as `x`. The values of str refer to the function `initializer`. Default: 'zeros'.
        has_bias (bool): Specifies whether the layer uses a bias vector. Default: True.
        activation (Union[str, Cell, Primitive]): The activation function applied to the output of the layer,
            e.g. 'relu'. Default: None.
        quant_config (QuantConfig): Configures the types of quant observer and quant settings of weight and
            activation. Note that, QuantConfig is a special namedtuple, which is designed for quantization
            and can be generated by :func:`mindspore.compression.quant.create_quant_config` method.
            Default: QuantConfig with both items set to default :class:`FakeQuantWithMinMaxObserver`.
        quant_dtype (QuantDtype): Specifies the FakeQuant datatype. Default: QuantDtype.INT8.

    Inputs:
        - **x** (Tensor) - Tensor of shape :math:`(N, C_{in}, H_{in}, W_{in})`.
          The input dimension is preferably 2D or 4D.

    Outputs:
        Tensor of shape :math:`(N, C_{out}, H_{out}, W_{out})`.

    Raises:
        TypeError: If `in_channels` or `out_channels` is not an int.
        TypeError: If `has_bias` is not a bool.
        TypeError: If `activation` is not str, Cell or Primitive.
        ValueError: If `in_channels` or `out_channels` is less than 1.
        ValueError: If the dims of `weight_init` is not equal to 2 or the first element of `weight_init` is not equal
            to `out_channels` or the second element of `weight_init` is not equal to `in_channels`.
        ValueError: If the dims of `bias_init` is not equal to 1 or the element of `bias_init` is not equal
            to `out_channels`.

    Supported Platforms:
        ``Ascend`` ``GPU``

    Examples:
        >>> import numpy as np
        >>> import mindspore
        >>> from mindspore import nn, Tensor
        >>> from mindspore.compression import quant
        >>> qconfig = quant.create_quant_config()
        >>> dense_quant = nn.DenseQuant(2, 1, weight_init='ones', quant_config=qconfig)
        >>> x = Tensor(np.array([[1, 5], [3, 4]]), mindspore.float32)
        >>> result = dense_quant(x)
        >>> print(result)
        [[5.929413]
         [6.9176483]]
    """

    def __init__(self,
                 in_channels,
                 out_channels,
                 weight_init='normal',
                 bias_init='zeros',
                 has_bias=True,
                 activation=None,
                 quant_config=quant_config_default,
                 quant_dtype=QuantDtype.INT8):
        """Initialize DenseQuant."""
        super(DenseQuant, self).__init__()
        self.in_channels = Validator.check_positive_int(in_channels, "in_channels", self.cls_name)
        self.out_channels = Validator.check_positive_int(out_channels, "out_channels", self.cls_name)
        self.has_bias = Validator.check_bool(has_bias, "has_bias", self.cls_name)
        if isinstance(weight_init, Tensor):
            if weight_init.ndim != 2 or weight_init.shape[0] != out_channels or \
                    weight_init.shape[1] != in_channels:
                raise ValueError(f"For '{self.cls_name}', weight init shape error. The ndim of 'weight_init' should "
                                 f"be equal to 2, and the first dim should be equal to 'out_channels', and the "
                                 f"second dim should be equal to 'in_channels'. But got 'weight_init': {weight_init}, "
                                 f"'out_channels': {out_channels}, 'in_channels': {in_channels}.")
        self.weight = Parameter(initializer(
            weight_init, [out_channels, in_channels]), name="weight")
        if self.has_bias:
            if isinstance(bias_init, Tensor):
                if bias_init.ndim != 1 or bias_init.shape[0] != out_channels:
                    raise ValueError(f"For '{self.cls_name}', bias init shape error. The ndim of 'bias_init' should "
                                     f"be equal to 1, and the first dim should be equal to 'out_channels'. But got "
                                     f"'bias_init': {bias_init}, 'out_channels': {out_channels}.")
            self.bias = Parameter(initializer(
                bias_init, [out_channels]), name="bias")
        self.matmul = P.MatMul(transpose_b=True)
        self.bias_add = P.BiasAdd()
        self.activation = get_activation(activation) if isinstance(activation, str) else activation
        if activation is not None and not isinstance(self.activation, (Cell, Primitive)):
            raise TypeError(f"For '{self.cls_name}', the 'activation' must be str or Cell or Primitive, "
                            f"but got {activation}.")
        self.activation_flag = self.activation is not None
        self.fake_quant_weight = quant_config.weight(ema=False,
                                                     channel_axis=0,
                                                     num_channels=out_channels,
                                                     quant_dtype=quant_dtype)

    def construct(self, x):
        """Use operators to construct the Dense layer.

        Args:
            x (Tensor): Input tensor.
        """
        output = self.fake_quant_weight(self.weight)
        output = self.matmul(x, output)
        if self.has_bias:
            output = self.bias_add(output, self.bias)
        if self.activation_flag:
            return self.activation(output)
        return output

    def extend_repr(self):
        """A pretty print for Dense layer."""
        s = 'in_channels={}, out_channels={}, weight={}, has_bias={}'.format(
            self.in_channels, self.out_channels, self.weight, self.has_bias)
        if self.has_bias:
            s += ', bias={}'.format(self.bias)
        if self.activation_flag:
            s += ', activation={}'.format(self.activation)
        return s
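
# In short, DenseQuant computes `activation(x @ fake_quant(weight).T + bias)`: only the weight is
# fake-quantized here, so it is usually paired with ActQuant (below) when the activations should be
# quantized as well. A hypothetical pairing, for illustration only:
#
#     head = nn.SequentialCell([nn.DenseQuant(256, 10, quant_config=qconfig),
#                               nn.ActQuant(nn.ReLU(), quant_config=qconfig)])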


class _QuantActivation(Cell):
    r"""
    Base class for quantization aware training activation functions. Adds a fake quantized operation
    after the activation operation.
    """

    def get_origin(self):
        raise NotImplementedError


class ActQuant(_QuantActivation):
    r"""
    Quantization aware training activation function.

    Adds a fake quantized operation after the activation operation, by which the output of the activation
    operation will be truncated. For more details about quantization, please refer to the implementation
    of the subclass of `FakeQuantWithMinMaxObserver`, :class:`mindspore.nn.FakeQuantWithMinMaxObserver`.

    Args:
        activation (Cell): Activation cell.
        ema (bool): Whether the exponential moving average (EMA) algorithm is used to update the min and max values.
            Default: False.
        ema_decay (float): Exponential Moving Average algorithm parameter. Default: 0.999.
        fake_before (bool): Whether to add a fake quantized operation before the activation. Default: False.
        quant_config (QuantConfig): Configures the types of quant observer and quant settings of weight and
            activation. Note that, QuantConfig is a special namedtuple, which is designed for quantization
            and can be generated by :func:`mindspore.compression.quant.create_quant_config` method.
            Default: QuantConfig with both items set to default :class:`FakeQuantWithMinMaxObserver`.
        quant_dtype (QuantDtype): Specifies the FakeQuant datatype. Default: QuantDtype.INT8.

    Inputs:
        - **x** (Tensor) - The input of ActQuant. The input dimension is preferably 2D or 4D.

    Outputs:
        Tensor, with the same type and shape as the `x`.

    Raises:
        TypeError: If `activation` is not an instance of Cell.
        TypeError: If `fake_before` is not a bool.

    Supported Platforms:
        ``Ascend`` ``GPU``

    Examples:
        >>> import numpy as np
        >>> import mindspore
        >>> from mindspore import nn, Tensor
        >>> from mindspore.compression import quant
        >>> qconfig = quant.create_quant_config()
        >>> act_quant = nn.ActQuant(nn.ReLU(), quant_config=qconfig)
        >>> x = Tensor(np.array([[1, 2, -1], [-2, 0, -1]]), mindspore.float32)
        >>> result = act_quant(x)
        >>> print(result)
        [[0.9882355 1.9764705 0. ]
         [0. 0. 0. ]]
    """

    def __init__(self,
                 activation,
                 ema=False,
                 ema_decay=0.999,
                 fake_before=False,
                 quant_config=quant_config_default,
                 quant_dtype=QuantDtype.INT8):
        """Initialize ActQuant."""
        super(ActQuant, self).__init__()
        act_class = activation.__class__
        act_list = [nn.ReLU, nn.ReLU6]
        self.act = Validator.check_isinstance("activation", activation, Cell)
        self.fake_before = Validator.check_bool(fake_before, "fake_before", self.cls_name)
        if self.fake_before:
            self.fake_quant_act_before = quant_config.activation(min_init=-6,
                                                                 max_init=6,
                                                                 ema=ema,
                                                                 ema_decay=ema_decay,
                                                                 quant_dtype=quant_dtype)
        self.neg_trunc = False
        self.narrow_range = False
        preset_dict = quant_config.activation.p.keywords
        if 'mode' in preset_dict and preset_dict['mode'] == "LEARNED_SCALE" and act_class in act_list:
            self.neg_trunc = True
        elif 'narrow_range' in preset_dict:
            self.narrow_range = preset_dict['narrow_range']

        self.fake_quant_act = quant_config.activation(min_init=-6,
                                                      max_init=6,
                                                      ema=ema,
                                                      ema_decay=ema_decay,
                                                      quant_dtype=quant_dtype,
                                                      neg_trunc=self.neg_trunc,
                                                      narrow_range=self.narrow_range)

    def construct(self, x):
        if self.fake_before:
            x = self.fake_quant_act_before(x)
        x = self.act(x)
        x = self.fake_quant_act(x)
        return x

    def get_origin(self):
        return self.act
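
# Order of operations in ActQuant.construct, for reference: with `fake_before=True` the input is
# fake-quantized, then the wrapped activation runs, then the output is fake-quantized again; with
# the default `fake_before=False` only the activation output is fake-quantized. A hypothetical
# usage sketch (assumes `qconfig` and `x` are defined as in the docstring example):
#
#     act = nn.ActQuant(nn.ReLU6(), fake_before=True, quant_config=qconfig)
#     y = act(x)   # roughly: fake_quant(relu6(fake_quant(x)))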


class TensorAddQuant(Cell):
    r"""
    Adds a fake quantized operation after the TensorAdd operation.

    This part is a more detailed overview of the TensorAdd operation. For more details about quantization,
    please refer to the implementation of the class `FakeQuantWithMinMaxObserver`,
    :class:`mindspore.nn.FakeQuantWithMinMaxObserver`.

    Args:
        ema_decay (float): Exponential Moving Average algorithm parameter. Default: 0.999.
        quant_config (QuantConfig): Configures the types of quant observer and quant settings of weight and
            activation. Note that, QuantConfig is a special namedtuple, which is designed for quantization
            and can be generated by :func:`mindspore.compression.quant.create_quant_config` method.
            Default: QuantConfig with both items set to default :class:`FakeQuantWithMinMaxObserver`.
        quant_dtype (QuantDtype): Specifies the FakeQuant datatype. Default: QuantDtype.INT8.

    Inputs:
        - **x1** (Tensor) - The first tensor of TensorAddQuant. The input dimension is preferably 2D or 4D.
        - **x2** (Tensor) - The second tensor of TensorAddQuant. Has the same shape as `x1`.

    Outputs:
        Tensor, with the same type and shape as the `x1`.

    Raises:
        TypeError: If `ema_decay` is not a float.
        ValueError: If the shape of `x2` is different from `x1`.

    Supported Platforms:
        ``Ascend`` ``GPU``

    Examples:
        >>> import numpy as np
        >>> import mindspore
        >>> from mindspore import nn, Tensor
        >>> from mindspore.compression import quant
        >>> qconfig = quant.create_quant_config()
        >>> add_quant = nn.TensorAddQuant(quant_config=qconfig)
        >>> x1 = Tensor(np.array([[1, 2, 1], [-2, 0, -1]]), mindspore.float32)
        >>> x2 = Tensor(np.ones((2, 3)), mindspore.float32)
        >>> output = add_quant(x1, x2)
        >>> print(output)
        [[ 1.9764705 3.011765 1.9764705]
         [-0.9882355 0.9882355 0. ]]
    """

    def __init__(self,
                 ema_decay=0.999,
                 quant_config=quant_config_default,
                 quant_dtype=QuantDtype.INT8):
        """Initialize TensorAddQuant."""
        super(TensorAddQuant, self).__init__()
        self.fake_quant_act = quant_config.activation(min_init=-6,
                                                      max_init=6,
                                                      ema=True,
                                                      ema_decay=ema_decay,
                                                      quant_dtype=quant_dtype)
        self.add = P.Add()

    def construct(self, x1, x2):
        x = self.add(x1, x2)
        x = self.fake_quant_act(x)
        return x


class MulQuant(Cell):
    r"""
    Adds a fake quantized operation after the `Mul` operation.

    This part is a more detailed overview of the `Mul` operation. For more details about quantization,
    please refer to the implementation of the class `FakeQuantWithMinMaxObserver`,
    :class:`mindspore.nn.FakeQuantWithMinMaxObserver`.

    Args:
        ema_decay (float): Exponential Moving Average algorithm parameter. Default: 0.999.
        quant_config (QuantConfig): Configures the types of quant observer and quant settings of weight and
            activation. Note that, QuantConfig is a special namedtuple, which is designed for quantization
            and can be generated by :func:`mindspore.compression.quant.create_quant_config` method.
            Default: QuantConfig with both items set to default :class:`FakeQuantWithMinMaxObserver`.
        quant_dtype (QuantDtype): Specifies the FakeQuant datatype. Default: QuantDtype.INT8.

    Inputs:
        - **x1** (Tensor) - The first tensor of MulQuant. The input dimension is preferably 2D or 4D.
        - **x2** (Tensor) - The second tensor of MulQuant. Has the same shape as `x1`.

    Outputs:
        Tensor, with the same type and shape as the `x1`.

    Raises:
        TypeError: If `ema_decay` is not a float.
        ValueError: If the shape of `x2` is different from `x1`.

    Supported Platforms:
        ``Ascend`` ``GPU``

    Examples:
        >>> import numpy as np
        >>> import mindspore
        >>> from mindspore import nn, Tensor
        >>> from mindspore.compression import quant
        >>> qconfig = quant.create_quant_config()
        >>> mul_quant = nn.MulQuant(quant_config=qconfig)
        >>> x1 = Tensor(np.array([[1, 2, 1], [-2, 0, -1]]), mindspore.float32)
        >>> x2 = Tensor(np.ones((2, 3)) * 2, mindspore.float32)
        >>> output = mul_quant(x1, x2)
        >>> print(output)
        [[ 1.9764705 4.0000005 1.9764705]
         [-4. 0. -1.9764705]]
    """

    def __init__(self,
                 ema_decay=0.999,
                 quant_config=quant_config_default,
                 quant_dtype=QuantDtype.INT8):
        """Initialize MulQuant."""
        super(MulQuant, self).__init__()
        self.fake_quant_act = quant_config.activation(min_init=-6,
                                                      max_init=6,
                                                      ema=True,
                                                      ema_decay=ema_decay,
                                                      quant_dtype=quant_dtype)
        self.mul = P.Mul()

    def construct(self, x1, x2):
        x = self.mul(x1, x2)
        x = self.fake_quant_act(x)
        return x
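
# TensorAddQuant and MulQuant exist so that element-wise add/multiply nodes (e.g. residual
# connections) also produce fake-quantized outputs during QAT. A hypothetical residual-style block
# combining the cells defined in this file (illustrative only, not part of the library):
#
#     from mindspore import nn
#     from mindspore.compression import quant
#
#     class ResidualBlockQuant(nn.Cell):
#         """Toy residual block whose intermediate tensors are all fake-quantized."""
#         def __init__(self, channels, qconfig):
#             super(ResidualBlockQuant, self).__init__()
#             self.conv = nn.Conv2dQuant(channels, channels, 3, pad_mode='same', quant_config=qconfig)
#             self.act = nn.ActQuant(nn.ReLU(), quant_config=qconfig)
#             self.add = nn.TensorAddQuant(quant_config=qconfig)
#
#         def construct(self, x):
#             return self.act(self.add(self.conv(x), x))
#
#     block = ResidualBlockQuant(16, quant.create_quant_config())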