
logic_ops.py 31 kB

# Copyright 2021 Huawei Technologies Co., Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ============================================================================
"""logical operations, the function docs are adapted from Numpy API."""
from ..ops import functional as F
from ..common import dtype as mstype
from ..common import Tensor

from .math_ops import _apply_tensor_op, absolute
from .array_creations import zeros, ones, empty, asarray
from .utils import _check_input_tensor, _to_tensor, _isnan
from .utils_const import _raise_type_error, _is_shape_empty, _infer_out_shape, _check_same_type, \
    _check_axis_type, _canonicalize_axis, _can_broadcast, _isscalar


def not_equal(x1, x2, dtype=None):
    """
    Returns (x1 != x2) element-wise.

    Note:
        Numpy arguments `out`, `where`, `casting`, `order`, `subok`, `signature`,
        and `extobj` are not supported.

    Args:
        x1 (Tensor): First input tensor to be compared.
        x2 (Tensor): Second input tensor to be compared.
        dtype (:class:`mindspore.dtype`, optional): defaults to None. Overrides the dtype of the
            output Tensor.

    Returns:
        Tensor or scalar, element-wise comparison of `x1` and `x2`. Typically of type
        bool, unless `dtype` is passed. This is a scalar if both `x1` and `x2` are
        scalars.

    Raises:
        TypeError: If the input is not a tensor.

    Supported Platforms:
        ``Ascend`` ``GPU`` ``CPU``

    Examples:
        >>> import mindspore.numpy as np
        >>> a = np.asarray([1, 2])
        >>> b = np.asarray([[1, 3],[1, 4]])
        >>> print(np.not_equal(a, b))
        [[False  True]
         [False  True]]
    """
    _check_input_tensor(x1, x2)
    return _apply_tensor_op(F.not_equal, x1, x2, dtype=dtype)


def less_equal(x1, x2, dtype=None):
    """
    Returns the truth value of ``(x1 <= x2)`` element-wise.

    Note:
        Numpy arguments `out`, `where`, `casting`, `order`, `subok`, `signature`,
        and `extobj` are not supported.

    Args:
        x1 (Tensor): Input array.
        x2 (Tensor): Input array. If ``x1.shape != x2.shape``, they must be
            broadcastable to a common shape (which becomes the shape of the output).
        dtype (:class:`mindspore.dtype`, optional): defaults to None. Overrides the dtype of the
            output Tensor.

    Returns:
        Tensor or scalar, element-wise comparison of `x1` and `x2`. Typically of type
        bool, unless `dtype` is passed. This is a scalar if both `x1` and `x2` are
        scalars.

    Supported Platforms:
        ``Ascend`` ``GPU`` ``CPU``

    Examples:
        >>> import mindspore.numpy as np
        >>> output = np.less_equal(np.array([4, 2, 1]), np.array([2, 2, 2]))
        >>> print(output)
        [False  True  True]
    """
    _check_input_tensor(x1, x2)
    return _apply_tensor_op(F.tensor_le, x1, x2, dtype=dtype)


def less(x1, x2, dtype=None):
    """
    Returns the truth value of ``(x1 < x2)`` element-wise.

    Note:
        Numpy arguments `out`, `where`, `casting`, `order`, `subok`, `signature`,
        and `extobj` are not supported.

    Args:
        x1 (Tensor): Input array.
        x2 (Tensor): Input array. If ``x1.shape != x2.shape``, they must be
            broadcastable to a common shape (which becomes the shape of the output).
        dtype (:class:`mindspore.dtype`, optional): defaults to None. Overrides the dtype of the
            output Tensor.

    Returns:
        Tensor or scalar, element-wise comparison of `x1` and `x2`. Typically of type
        bool, unless `dtype` is passed. This is a scalar if both `x1` and `x2` are
        scalars.

    Supported Platforms:
        ``Ascend`` ``GPU`` ``CPU``

    Examples:
        >>> import mindspore.numpy as np
        >>> output = np.less(np.array([1, 2]), np.array([2, 2]))
        >>> print(output)
        [ True False]
    """
    return _apply_tensor_op(F.tensor_lt, x1, x2, dtype=dtype)


def greater_equal(x1, x2, dtype=None):
    """
    Returns the truth value of ``(x1 >= x2)`` element-wise.

    Note:
        Numpy arguments `out`, `where`, `casting`, `order`, `subok`, `signature`,
        and `extobj` are not supported.

    Args:
        x1 (Tensor): Input array.
        x2 (Tensor): Input array. If ``x1.shape != x2.shape``, they must be
            broadcastable to a common shape (which becomes the shape of the output).
        dtype (:class:`mindspore.dtype`, optional): defaults to None. Overrides the dtype of the
            output Tensor.

    Returns:
        Tensor or scalar, element-wise comparison of `x1` and `x2`. Typically of type
        bool, unless `dtype` is passed. This is a scalar if both `x1` and `x2` are
        scalars.

    Supported Platforms:
        ``Ascend`` ``GPU`` ``CPU``

    Examples:
        >>> import mindspore.numpy as np
        >>> output = np.greater_equal(np.array([4, 2, 1]), np.array([2, 2, 2]))
        >>> print(output)
        [ True  True False]
    """
    return _apply_tensor_op(F.tensor_ge, x1, x2, dtype=dtype)


def greater(x1, x2, dtype=None):
    """
    Returns the truth value of ``(x1 > x2)`` element-wise.

    Note:
        Numpy arguments `out`, `where`, `casting`, `order`, `subok`, `signature`,
        and `extobj` are not supported.

    Args:
        x1 (Tensor): Input array.
        x2 (Tensor): Input array. If ``x1.shape != x2.shape``, they must be
            broadcastable to a common shape (which becomes the shape of the output).
        dtype (:class:`mindspore.dtype`, optional): defaults to None. Overrides the dtype of the
            output Tensor.

    Returns:
        Tensor or scalar, element-wise comparison of `x1` and `x2`. Typically of type
        bool, unless `dtype` is passed. This is a scalar if both `x1` and `x2` are
        scalars.

    Supported Platforms:
        ``Ascend`` ``GPU`` ``CPU``

    Examples:
        >>> import mindspore.numpy as np
        >>> output = np.greater(np.array([4, 2]), np.array([2, 2]))
        >>> print(output)
        [ True False]
    """
    return _apply_tensor_op(F.tensor_gt, x1, x2, dtype=dtype)


def equal(x1, x2, dtype=None):
    """
    Returns the truth value of ``(x1 == x2)`` element-wise.

    Note:
        Numpy arguments `out`, `where`, `casting`, `order`, `subok`, `signature`,
        and `extobj` are not supported.

    Args:
        x1 (Tensor): Input array.
        x2 (Tensor): Input array. If ``x1.shape != x2.shape``, they must be
            broadcastable to a common shape (which becomes the shape of the output).
        dtype (:class:`mindspore.dtype`, optional): defaults to None. Overrides the dtype of the
            output Tensor.

    Returns:
        Tensor or scalar, element-wise comparison of `x1` and `x2`. Typically of type
        bool, unless `dtype` is passed. This is a scalar if both `x1` and `x2` are
        scalars.

    Supported Platforms:
        ``Ascend`` ``GPU`` ``CPU``

    Examples:
        >>> import mindspore.numpy as np
        >>> output = np.equal(np.array([0, 1, 3]), np.arange(3))
        >>> print(output)
        [ True  True False]
    """
    return _apply_tensor_op(F.equal, x1, x2, dtype=dtype)


def isfinite(x, dtype=None):
    """
    Tests element-wise for finiteness (not infinity and not Not a Number).

    The result is returned as a boolean array.

    Note:
        Numpy arguments `out`, `where`, `casting`, `order`, `subok`, `signature`,
        and `extobj` are not supported.
        On GPU, the supported dtypes are np.float16 and np.float32.

    Args:
        x (Tensor): Input values.
        dtype (:class:`mindspore.dtype`, optional): defaults to None. Overrides the dtype of the
            output Tensor.

    Returns:
        Tensor or scalar, true where `x` is not positive infinity, negative infinity,
        or NaN; false otherwise. This is a scalar if `x` is a scalar.

    Supported Platforms:
        ``Ascend`` ``GPU`` ``CPU``

    Examples:
        >>> import mindspore.numpy as np
        >>> output = np.isfinite(np.array([np.inf, 1., np.nan]).astype('float32'))
        >>> print(output)
        [False  True False]
    """
    return _apply_tensor_op(F.isfinite, x, dtype=dtype)


def isnan(x, dtype=None):
    """
    Tests element-wise for NaN and returns the result as a boolean array.

    Note:
        Numpy arguments `out`, `where`, `casting`, `order`, `subok`, `signature`,
        and `extobj` are not supported.
        Only np.float32 is currently supported.

    Args:
        x (Tensor): Input values.
        dtype (:class:`mindspore.dtype`, optional): defaults to None. Overrides the dtype of the
            output Tensor.

    Returns:
        Tensor or scalar, true where `x` is NaN, false otherwise. This is a scalar if
        `x` is a scalar.

    Supported Platforms:
        ``GPU`` ``CPU``

    Examples:
        >>> import mindspore.numpy as np
        >>> output = np.isnan(np.array(np.nan, np.float32))
        >>> print(output)
        True
        >>> output = np.isnan(np.array(np.inf, np.float32))
        >>> print(output)
        False
    """
    return _apply_tensor_op(_isnan, x, dtype=dtype)


def _isinf(x):
    """Computes isinf without applying keyword arguments."""
    shape = F.shape(x)
    zeros_tensor = zeros(shape, mstype.float32)
    ones_tensor = ones(shape, mstype.float32)
    not_inf = F.isfinite(x)
    is_nan = _isnan(x)
    res = F.select(not_inf, zeros_tensor, ones_tensor)
    res = F.select(is_nan, zeros_tensor, res)
    return F.cast(res, mstype.bool_)
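The helper above derives "is infinite" from two primitives: a value is infinite exactly when it is neither finite nor NaN, which is what the two `F.select` passes encode. A scalar pure-Python sketch of that identity (`isinf_sketch` is a hypothetical name, not part of this module):

```python
import math

def isinf_sketch(x):
    # Start from "not finite", then mask out NaN, which is
    # non-finite but not an infinity -- mirroring the two selects above.
    return (not math.isfinite(x)) and (not math.isnan(x))
```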


def isinf(x, dtype=None):
    """
    Tests element-wise for positive or negative infinity.

    Returns a boolean array of the same shape as `x`, True where ``x == +/-inf``, otherwise False.

    Note:
        Numpy arguments `out`, `where`, `casting`, `order`, `subok`, `signature`,
        and `extobj` are not supported.
        Only np.float32 is currently supported.

    Args:
        x (Tensor): Input values.
        dtype (:class:`mindspore.dtype`, optional): defaults to None. Overrides the dtype of the
            output Tensor.

    Returns:
        Tensor or scalar, true where `x` is positive or negative infinity, false
        otherwise. This is a scalar if `x` is a scalar.

    Supported Platforms:
        ``GPU`` ``CPU``

    Examples:
        >>> import mindspore.numpy as np
        >>> output = np.isinf(np.array(np.inf, np.float32))
        >>> print(output)
        True
        >>> output = np.isinf(np.array([np.inf, -np.inf, 1.0, np.nan], np.float32))
        >>> print(output)
        [ True  True False False]
    """
    return _apply_tensor_op(_isinf, x, dtype=dtype)


def _is_sign_inf(x, fn):
    """Tests element-wise for infinity with sign."""
    shape = F.shape(x)
    zeros_tensor = zeros(shape, mstype.float32)
    ones_tensor = ones(shape, mstype.float32)
    not_inf = F.isfinite(x)
    is_sign = fn(x, zeros_tensor)
    res = F.select(not_inf, zeros_tensor, ones_tensor)
    res = F.select(is_sign, res, zeros_tensor)
    return F.cast(res, mstype.bool_)


def isposinf(x):
    """
    Tests element-wise for positive infinity, returns result as bool array.

    Note:
        Numpy argument `out` is not supported.
        Only np.float32 is currently supported.

    Args:
        x (Tensor): Input values.

    Returns:
        Tensor or scalar, true where `x` is positive infinity, false otherwise.
        This is a scalar if `x` is a scalar.

    Raises:
        TypeError: If the input is not a tensor.

    Supported Platforms:
        ``GPU`` ``CPU``

    Examples:
        >>> import mindspore.numpy as np
        >>> output = np.isposinf(np.array([-np.inf, 0., np.inf, np.nan], np.float32))
        >>> print(output)
        [False False  True False]
    """
    _check_input_tensor(x)
    return _is_sign_inf(x, F.tensor_gt)


def isneginf(x):
    """
    Tests element-wise for negative infinity, returns result as bool array.

    Note:
        Numpy argument `out` is not supported.
        Only np.float32 is currently supported.

    Args:
        x (Tensor): Input values.

    Returns:
        Tensor or scalar, true where `x` is negative infinity, false otherwise.
        This is a scalar if `x` is a scalar.

    Raises:
        TypeError: If the input is not a tensor.

    Supported Platforms:
        ``GPU`` ``CPU``

    Examples:
        >>> import mindspore.numpy as np
        >>> output = np.isneginf(np.array([-np.inf, 0., np.inf, np.nan], np.float32))
        >>> print(output)
        [ True False False False]
    """
    _check_input_tensor(x)
    return _is_sign_inf(x, F.tensor_lt)


def isscalar(element):
    """
    Returns True if the type of element is a scalar type.

    Note:
        Only object types recognized by the mindspore parser are supported,
        which includes objects, types, methods and functions defined within
        the scope of mindspore. Other built-in types are not supported.

    Args:
        element (any): Input argument, can be of any type and shape.

    Returns:
        Boolean, True if `element` is a scalar type, False if it is not.

    Raises:
        TypeError: If the type of `element` is not supported by the mindspore parser.

    Supported Platforms:
        ``Ascend`` ``GPU`` ``CPU``

    Examples:
        >>> import mindspore.numpy as np
        >>> output = np.isscalar(3.1)
        >>> print(output)
        True
        >>> output = np.isscalar(np.array(3.1))
        >>> print(output)
        False
        >>> output = np.isscalar(False)
        >>> print(output)
        True
        >>> output = np.isscalar('numpy')
        >>> print(output)
        True
    """
    obj_type = F.typeof(element)
    return not isinstance(obj_type, Tensor) and _isscalar(obj_type)


def isclose(a, b, rtol=1e-05, atol=1e-08, equal_nan=False):
    """
    Returns a boolean tensor where two tensors are element-wise equal within a tolerance.

    The tolerance values are positive, typically very small numbers. The relative
    difference (:math:`rtol * abs(b)`) and the absolute difference `atol` are added together
    to compare against the absolute difference between `a` and `b`.

    Note:
        For finite values, isclose uses the following equation to test whether two
        floating point values are equivalent.
        :math:`absolute(a - b) <= (atol + rtol * absolute(b))`
        On Ascend, input arrays containing inf or NaN are not supported.

    Args:
        a (Union[Tensor, list, tuple]): First input tensor to compare.
        b (Union[Tensor, list, tuple]): Second input tensor to compare.
        rtol (numbers.Number): The relative tolerance parameter (see Note).
        atol (numbers.Number): The absolute tolerance parameter (see Note).
        equal_nan (bool): Whether to compare ``NaN`` as equal. If True, ``NaN`` in
            `a` will be considered equal to ``NaN`` in `b` in the output tensor.

    Returns:
        A ``bool`` tensor of where `a` and `b` are equal within the given tolerance.

    Raises:
        TypeError: If inputs have types not specified above.

    Supported Platforms:
        ``Ascend`` ``GPU`` ``CPU``

    Examples:
        >>> import mindspore.numpy as np
        >>> a = np.array([0,1,2,float('inf'),float('inf'),float('nan')])
        >>> b = np.array([0,1,-2,float('-inf'),float('inf'),float('nan')])
        >>> print(np.isclose(a, b))
        [ True  True False False  True False]
        >>> print(np.isclose(a, b, equal_nan=True))
        [ True  True False False  True  True]
    """
    a, b = _to_tensor(a, b)
    if not isinstance(rtol, (int, float, bool)) or not isinstance(atol, (int, float, bool)):
        _raise_type_error("rtol and atol are expected to be numbers.")
    if not isinstance(equal_nan, bool):
        _raise_type_error("equal_nan is expected to be bool.")
    if _is_shape_empty(a.shape) or _is_shape_empty(b.shape):
        return empty(_infer_out_shape(a.shape, b.shape), dtype=mstype.bool_)
    rtol = _to_tensor(rtol).astype("float32")
    atol = _to_tensor(atol).astype("float32")
    res = absolute(a - b) <= (atol + rtol * absolute(b))
    # infs are treated as equal
    a_posinf = isposinf(a)
    b_posinf = isposinf(b)
    a_neginf = isneginf(a)
    b_neginf = isneginf(b)
    same_inf = F.logical_or(F.logical_and(a_posinf, b_posinf), F.logical_and(a_neginf, b_neginf))
    diff_inf = F.logical_or(F.logical_and(a_posinf, b_neginf), F.logical_and(a_neginf, b_posinf))
    res = F.logical_and(F.logical_or(res, same_inf), F.logical_not(diff_inf))
    both_nan = F.logical_and(_isnan(a), _isnan(b))
    if equal_nan:
        res = F.logical_or(both_nan, res)
    else:
        res = F.logical_and(F.logical_not(both_nan), res)
    return res
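The implementation above reduces to the tolerance test ``|a - b| <= atol + rtol * |b|``, with infinities equal only when their signs match and NaNs equal only under `equal_nan`. A scalar pure-Python sketch of those semantics (`isclose_scalar` is a hypothetical helper, not part of this module):

```python
import math

def isclose_scalar(a, b, rtol=1e-05, atol=1e-08, equal_nan=False):
    """Scalar sketch of the comparison implemented above."""
    if math.isnan(a) or math.isnan(b):
        # NaNs compare equal only when both are NaN and equal_nan is set.
        return equal_nan and math.isnan(a) and math.isnan(b)
    if math.isinf(a) or math.isinf(b):
        # Infinities are close only when they are the same infinity.
        return a == b
    return abs(a - b) <= atol + rtol * abs(b)
```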


def in1d(ar1, ar2, invert=False):
    """
    Tests whether each element of a 1-D array is also present in a second array.

    Returns a boolean array the same length as `ar1` that is True where an element
    of `ar1` is in `ar2` and False otherwise.

    Note:
        Numpy argument `assume_unique` is not supported since the implementation does
        not rely on the uniqueness of the input arrays.

    Args:
        ar1 (Union[int, float, bool, list, tuple, Tensor]): Input array with shape `(M,)`.
        ar2 (Union[int, float, bool, list, tuple, Tensor]): The values against which
            to test each value of `ar1`.
        invert (boolean, optional): If True, the values in the returned array are
            inverted (that is, False where an element of `ar1` is in `ar2` and True
            otherwise). Default is False.

    Returns:
        Tensor, with shape `(M,)`. The values ``ar1[in1d]`` are in `ar2`.

    Supported Platforms:
        ``Ascend`` ``GPU`` ``CPU``

    Examples:
        >>> test = np.array([0, 1, 2, 5, 0])
        >>> states = [0, 2]
        >>> mask = np.in1d(test, states)
        >>> print(mask)
        [ True False  True False  True]
        >>> mask = np.in1d(test, states, invert=True)
        >>> print(mask)
        [False  True False  True False]
    """
    ar1, ar2 = _to_tensor(ar1, ar2)
    ar1 = F.expand_dims(ar1.ravel(), -1)
    ar2 = ar2.ravel()
    included = F.equal(ar1, ar2)
    # F.reduce_sum only supports float
    res = F.reduce_sum(included.astype(mstype.float32), -1).astype(mstype.bool_)
    if invert:
        res = F.logical_not(res)
    return res
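The implementation above broadcasts a column view of `ar1` against `ar2`, producing an `(M, N)` grid of elementwise matches, then OR-reduces along the last axis. The same idea in plain Python (``in1d_sketch`` is a hypothetical illustration, not the mindspore code path):

```python
# Hypothetical sketch of the broadcast-and-reduce membership test:
# compare every element of ar1 against every element of ar2,
# then OR-reduce the matches for each ar1 element.
def in1d_sketch(ar1, ar2, invert=False):
    res = [any(x == y for y in ar2) for x in ar1]
    return [not r for r in res] if invert else res

print(in1d_sketch([0, 1, 2, 5, 0], [0, 2]))
# [True, False, True, False, True]
```

The tensor version replaces the inner `any` with a float `reduce_sum` cast back to bool, since `F.reduce_sum` does not operate on boolean inputs directly.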


def isin(element, test_elements, invert=False):
    """
    Calculates element in `test_elements`, broadcasting over `element` only.

    Returns a boolean array of the same shape as `element` that is True where an
    element of `element` is in `test_elements` and False otherwise.

    Note:
        Numpy argument `assume_unique` is not supported since the implementation does
        not rely on the uniqueness of the input arrays.

    Args:
        element (Union[int, float, bool, list, tuple, Tensor]): Input array.
        test_elements (Union[int, float, bool, list, tuple, Tensor]): The values against
            which to test each value of `element`.
        invert (boolean, optional): If True, the values in the returned array are
            inverted, as if calculating `element` not in `test_elements`. Default is False.

    Returns:
        Tensor, has the same shape as `element`. The values ``element[isin]`` are in
        `test_elements`.

    Supported Platforms:
        ``Ascend`` ``GPU`` ``CPU``

    Examples:
        >>> element = 2*np.arange(4).reshape((2, 2))
        >>> test_elements = [1, 2, 4, 8]
        >>> mask = np.isin(element, test_elements)
        >>> print(mask)
        [[False  True]
         [ True False]]
        >>> mask = np.isin(element, test_elements, invert=True)
        >>> print(mask)
        [[ True False]
         [False  True]]
    """
    element = _to_tensor(element)
    res = in1d(element, test_elements, invert=invert)
    return F.reshape(res, F.shape(element))


def logical_not(a, dtype=None):
    """
    Computes the truth value of NOT `a` element-wise.

    Note:
        Numpy arguments `out`, `where`, `casting`, `order`, `subok`, `signature`, and `extobj` are
        not supported.

    Args:
        a (Tensor): The input tensor whose dtype is bool.
        dtype (:class:`mindspore.dtype`, optional): Default: :class:`None`. Overrides the dtype of the
            output Tensor.

    Returns:
        Tensor or scalar.
        Boolean result with the same shape as `a` of the NOT operation on elements of `a`.
        This is a scalar if `a` is a scalar.

    Raises:
        TypeError: if the input is not a tensor or its dtype is not bool.

    Supported Platforms:
        ``Ascend`` ``GPU`` ``CPU``

    Examples:
        >>> import mindspore.numpy as np
        >>> a = np.array([True, False])
        >>> output = np.logical_not(a)
        >>> print(output)
        [False  True]
    """
    return _apply_tensor_op(F.logical_not, a, dtype=dtype)


def logical_or(x1, x2, dtype=None):
    """
    Computes the truth value of `x1` OR `x2` element-wise.

    Note:
        Numpy arguments `out`, `where`, `casting`, `order`, `subok`, `signature`,
        and `extobj` are not supported.

    Args:
        x1 (Tensor): Input tensor.
        x2 (Tensor): Input tensor. If ``x1.shape != x2.shape``, they must be
            broadcastable to a common shape (which becomes the shape of the output).
        dtype (:class:`mindspore.dtype`, optional): defaults to None. Overrides the dtype of the
            output Tensor.

    Returns:
        Tensor or scalar, element-wise comparison of `x1` and `x2`. Typically of type
        bool, unless ``dtype=object`` is passed. This is a scalar if both `x1` and `x2` are
        scalars.

    Supported Platforms:
        ``Ascend`` ``GPU`` ``CPU``

    Examples:
        >>> import mindspore.numpy as np
        >>> x1 = np.array([True, False])
        >>> x2 = np.array([False, True])
        >>> output = np.logical_or(x1, x2)
        >>> print(output)
        [ True  True]
    """
    return _apply_tensor_op(F.logical_or, x1, x2, dtype=dtype)


def logical_and(x1, x2, dtype=None):
    """
    Computes the truth value of `x1` AND `x2` element-wise.

    Note:
        Numpy arguments `out`, `where`, `casting`, `order`, `subok`, `signature`,
        and `extobj` are not supported.

    Args:
        x1 (Tensor): Input tensor.
        x2 (Tensor): Input tensor. If ``x1.shape != x2.shape``, they must be
            broadcastable to a common shape (which becomes the shape of the output).
        dtype (:class:`mindspore.dtype`, optional): defaults to None. Overrides the dtype of the
            output Tensor.

    Returns:
        Tensor or scalar.
        Boolean result of the logical AND operation applied to the elements of `x1` and `x2`;
        the shape is determined by broadcasting. This is a scalar if both `x1` and `x2` are scalars.

    Supported Platforms:
        ``Ascend`` ``GPU`` ``CPU``

    Examples:
        >>> import mindspore.numpy as np
        >>> x1 = np.array([True, False])
        >>> x2 = np.array([False, False])
        >>> output = np.logical_and(x1, x2)
        >>> print(output)
        [False False]
    """
    return _apply_tensor_op(F.logical_and, x1, x2, dtype=dtype)


def logical_xor(x1, x2, dtype=None):
    """
    Computes the truth value of `x1` XOR `x2`, element-wise.

    Note:
        Numpy arguments `out`, `where`, `casting`, `order`, `subok`, `signature`,
        and `extobj` are not supported.

    Args:
        x1 (Tensor): Input tensor.
        x2 (Tensor): Input tensor. If ``x1.shape != x2.shape``, they must be
            broadcastable to a common shape (which becomes the shape of the output).
        dtype (:class:`mindspore.dtype`, optional): defaults to None. Overrides the dtype of the
            output Tensor.

    Returns:
        Tensor or scalar.
        Boolean result of the logical XOR operation applied to the elements of `x1` and `x2`;
        the shape is determined by broadcasting. This is a scalar if both `x1` and `x2` are scalars.

    Supported Platforms:
        ``Ascend`` ``GPU`` ``CPU``

    Examples:
        >>> import mindspore.numpy as np
        >>> x1 = np.array([True, False])
        >>> x2 = np.array([False, False])
        >>> output = np.logical_xor(x1, x2)
        >>> print(output)
        [ True False]
    """
    _check_input_tensor(x1)
    _check_input_tensor(x2)
    y1 = F.logical_or(x1, x2)
    y2 = F.logical_or(F.logical_not(x1), F.logical_not(x2))
    return _apply_tensor_op(F.logical_and, y1, y2, dtype=dtype)
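The implementation above composes XOR from the available OR/AND/NOT primitives via the identity ``a XOR b == (a OR b) AND (NOT a OR NOT b)``. A quick truth-table check of that identity in plain Python (``xor_via_or_and`` is a hypothetical illustration, not the mindspore code path):

```python
# Hypothetical sketch of the XOR-from-OR/AND/NOT identity used above.
def xor_via_or_and(a, b):
    # "at least one is true" AND "not both are true"
    return (a or b) and ((not a) or (not b))

# Verify against the definition of XOR (inequality of booleans)
# over the full truth table.
for a in (False, True):
    for b in (False, True):
        assert xor_via_or_and(a, b) == (a != b)
print("identity holds for all four input combinations")
```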


def array_equal(a1, a2, equal_nan=False):
    """
    Returns `True` if input arrays have the same shape and all elements equal.

    Note:
        In mindspore, a bool tensor is returned instead, since in Graph mode, the
        value cannot be traced and computed at compile time.

        Since :class:`nan` is treated differently on Ascend, the argument
        `equal_nan` is currently not supported on Ascend.

    Args:
        a1/a2 (Union[int, float, bool, list, tuple, Tensor]): Input arrays.
        equal_nan (bool): Whether to compare NaN's as equal. Default is False.

    Returns:
        Scalar bool tensor, value is `True` if inputs are equal, `False` otherwise.

    Raises:
        TypeError: If inputs have types not specified above.

    Supported Platforms:
        ``GPU`` ``CPU`` ``Ascend``

    Examples:
        >>> import mindspore.numpy as np
        >>> a = [0,1,2]
        >>> b = [[0,1,2], [0,1,2]]
        >>> print(np.array_equal(a,b))
        False
    """
    a1 = asarray(a1)
    a2 = asarray(a2)
    if not isinstance(equal_nan, bool):
        _raise_type_error("equal_nan must be bool.")
    if a1.shape == a2.shape:
        res = equal(a1, a2)
        if equal_nan:
            res = logical_or(res, logical_and(isnan(a1), isnan(a2)))
        return res.all()
    return _to_tensor(False)


def array_equiv(a1, a2):
    """
    Returns `True` if input arrays are shape consistent and all elements equal.

    Shape consistent means they are either the same shape, or one input array can
    be broadcast to create the same shape as the other one.

    Note:
        In mindspore, a bool tensor is returned instead, since in Graph mode, the
        value cannot be traced and computed at compile time.

    Args:
        a1/a2 (Union[int, float, bool, list, tuple, Tensor]): Input arrays.

    Returns:
        Scalar bool tensor, value is `True` if inputs are equivalent, `False` otherwise.

    Raises:
        TypeError: If inputs have types not specified above.

    Supported Platforms:
        ``Ascend`` ``GPU`` ``CPU``

    Examples:
        >>> import mindspore.numpy as np
        >>> a = [0,1,2]
        >>> b = [[0,1,2], [0,1,2]]
        >>> print(np.array_equiv(a,b))
        True
    """
    a1 = asarray(a1)
    a2 = asarray(a2)
    if _can_broadcast(a1.shape, a2.shape):
        return equal(a1, a2).all()
    return _to_tensor(False)


def signbit(x, dtype=None):
    """
    Returns element-wise True where signbit is set (less than zero).

    Note:
        Numpy arguments `out`, `where`, `casting`, `order`, `subok`, `signature`, and
        `extobj` are not supported.

    Args:
        x (Union[int, float, bool, list, tuple, Tensor]): The input value(s).
        dtype (:class:`mindspore.dtype`, optional): defaults to None. Overrides the dtype of the
            output Tensor.

    Returns:
        Tensor.

    Raises:
        TypeError: If input is not array_like or `dtype` is not `None` or `bool`.

    Supported Platforms:
        ``Ascend`` ``GPU`` ``CPU``

    Examples:
        >>> import mindspore.numpy as np
        >>> x = np.array([1, -2.3, 2.1]).astype('float32')
        >>> output = np.signbit(x)
        >>> print(output)
        [False  True False]
    """
    if dtype is not None and not _check_same_type(dtype, mstype.bool_):
        _raise_type_error("Casting was not allowed for signbit.")
    x = _to_tensor(x)
    res = F.less(x, 0)
    if dtype is not None and not _check_same_type(F.dtype(res), dtype):
        res = F.cast(res, dtype)
    return res
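One caveat worth noting: implementing signbit as a less-than-zero comparison differs from IEEE-754 sign-bit inspection for negative zero, since ``-0.0`` carries a set sign bit yet compares equal to zero. A plain-Python demonstration using the standard library (this is an observation about the comparison-based approach, not mindspore-specific behavior):

```python
import math

# -0.0 compares equal to 0, so a `< 0` test reports its sign bit as unset.
print(-0.0 < 0)                      # False: comparison misses the sign bit
# copysign transfers the sign bit of its second argument, exposing it.
print(math.copysign(1.0, -0.0) < 0)  # True: the sign bit of -0.0 is set
```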


def sometrue(a, axis=None, keepdims=False):
    """
    Tests whether any array element along a given axis evaluates to True.

    Returns a single boolean unless `axis` is not None.

    Args:
        a (Union[int, float, bool, list, tuple, Tensor]): Input tensor or object that can be converted to an array.
        axis (Union[None, int, tuple(int)]): Axis or axes along which a logical OR reduction is
            performed. Default: None.
            If None, perform a logical OR over all the dimensions of the input array.
            If negative, it counts from the last to the first axis.
            If tuple of integers, a reduction is performed on multiple axes, instead of a single
            axis or all the axes as before.
        keepdims (bool): Default: False.
            If True, the axes which are reduced are left in the result as dimensions with size one.
            With this option, the result will broadcast correctly against the input array.
            If the default value is passed, then keepdims will not be passed through to the `any`
            method of sub-classes of ndarray, however any non-default value will be. If the
            sub-class method does not implement `keepdims`, any exceptions will be raised.

    Returns:
        Tensor, a single boolean unless `axis` is not None.

    Raises:
        TypeError: If input is not array_like, or `axis` is not int or tuple of integers, or
            `keepdims` is not integer.
        ValueError: If any axis is out of range or duplicate axes exist.

    Supported Platforms:
        ``Ascend`` ``GPU`` ``CPU``

    Examples:
        >>> import mindspore.numpy as np
        >>> x = np.array([1, -2.3, 2.1]).astype('float32')
        >>> output = np.sometrue(x)
        >>> print(output)
        True
    """
    if not isinstance(keepdims, int):
        _raise_type_error("integer argument expected, but got ", keepdims)
    # convert before axis canonicalization, since list/tuple inputs have no `ndim`
    a = _to_tensor(a)
    if axis is not None:
        _check_axis_type(axis, True, True, False)
        axis = _canonicalize_axis(axis, a.ndim)
    keepdims = keepdims not in (0, False)
    return F.not_equal(a, 0).any(axis, keepdims)
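The reduction above boils down to "is any element nonzero?": elements are compared against 0 and the resulting booleans are OR-reduced. A plain-Python sketch of the same reduction over a flat sequence (``sometrue_sketch`` is a hypothetical illustration, not the mindspore implementation):

```python
# Hypothetical flat-sequence sketch of sometrue:
# map each element to (x != 0), then OR-reduce with any().
def sometrue_sketch(a):
    return any(x != 0 for x in a)

print(sometrue_sketch([1, -2.3, 2.1]))  # True: nonzero elements present
print(sometrue_sketch([0, 0.0, 0]))     # False: everything compares equal to 0
```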