
array_ops.py 293 kB

optimize the comment and log description

Modified: ops/operations/_inner_ops.py
Modified: ops/operations/_quant_ops.py
Modified: ops/operations/array_ops.py
Modified: ops/operations/comm_ops.py
Modified: ops/operations/math_ops.py
Modified: ops/operations/quantum_ops.py
Modified: ops/operations/rl_ops.py
Modified: ops/operations/sponge_ops.py
Modified: ops/operations/sponge_update_ops.py
Modified: train/__init__.py
Modified: common/tensor.py
Modified: train/serialization.py
Modified: ccsrc/pipeline/jit/parse/parse.h
Modified: explainer/benchmark/_attribution/metric.py
Modified: ops/composite/multitype_ops/_constexpr_utils.py
Modified: ops/operations/comm_ops.py
Modified: RELEASE.md
Modified: mindspore/_extends/parse/standard_method.py
Modified: mindspore/ccsrc/backend/kernel_compiler/cpu/concat_offset_cpu_kernel.cc
Modified: mindspore/ccsrc/backend/kernel_compiler/cpu/dynamic_shape_cpu_kernel.cc
Modified: mindspore/ccsrc/frontend/parallel/ops_info/reshape_info.cc
Modified: mindspore/ccsrc/frontend/parallel/ops_info/tile_info.cc
Modified: mindspore/ccsrc/frontend/parallel/ops_info/transpose_info.cc
Modified: mindspore/ccsrc/frontend/parallel/strategy.h
Modified: mindspore/common/tensor.py
Modified: mindspore/core/abstract/prim_arrays.cc
Modified: mindspore/core/abstract/prim_nn.cc
Modified: mindspore/core/ops/conv2d.cc
Modified: mindspore/core/ops/logical_and.h
Modified: mindspore/core/ops/logical_not.h
Modified: mindspore/core/ops/logical_or.h
Modified: mindspore/core/ops/reduce_all.h
Modified: mindspore/core/ops/reduce_any.h
Modified: mindspore/lite/src/runtime/kernel/arm/fp32_grad/sgd.cc
Modified: mindspore/nn/layer/quant.py
Modified: mindspore/nn/optim/sgd.py
Modified: mindspore/nn/sparse/sparse.py
Modified: mindspore/numpy/array_creations.py
Modified: mindspore/numpy/array_ops.py
Modified: mindspore/numpy/logic_ops.py
Modified: mindspore/numpy/math_ops.py
Modified: mindspore/ops/operations/_inner_ops.py
Modified: mindspore/ops/operations/array_ops.py
Modified: mindspore/ops/operations/rl_ops.py
Modified: mindspore/train/_utils.py
Modified: tests/ut/python/model/test_lenet_core_after_exception.py
Modified: mindspore/_extends/parse/standard_method.py
Modified: mindspore/ops/operations/rl_ops.py
Modified: mindspore/core/abstract/prim_nn.cc
Modified: mindspore/core/ops/conv2d.cc
Modified: mindspore/ccsrc/backend/kernel_compiler/cpu/ctcloss_cpu_kernel.cc
Modified: mindspore/ccsrc/backend/kernel_compiler/cpu/fl/fused_pull_weight_kernel.h
Modified: mindspore/ccsrc/backend/kernel_compiler/cpu/fl/fused_push_weight_kernel.h
Modified: mindspore/ccsrc/backend/kernel_compiler/cpu/mkldnn/conv2d_grad_filter_cpu_kernel.cc
Modified: mindspore/ccsrc/backend/kernel_compiler/cpu/mkldnn/conv2d_grad_input_cpu_kernel.cc
Modified: mindspore/ccsrc/backend/kernel_compiler/cpu/ps/sparse_apply_ftrl_ps_kernel.cc
Modified: mindspore/ccsrc/backend/kernel_compiler/cpu/ps/sparse_apply_lazy_adam_ps_kernel.cc
Modified: mindspore/ccsrc/backend/kernel_compiler/cpu/rolling_cpu_kernel.cc
Modified: mindspore/ccsrc/backend/kernel_compiler/cpu/scatter_arithmetic_cpu_kernel.cc
Modified: mindspore/ccsrc/backend/kernel_compiler/cpu/split_cpu_kernel.cc
Modified: mindspore/ccsrc/backend/kernel_compiler/cpu/update_cache_cpu_kernel.cc
Modified: mindspore/ccsrc/backend/kernel_compiler/gpu/arrays/split_gpu_kernel.h
Modified: mindspore/ccsrc/backend/kernel_compiler/gpu/math/broadcast_gpu_kernel.h
Modified: mindspore/ccsrc/backend/kernel_compiler/gpu/nn/conv2d_grad_input_gpu_kernel.h
Modified: mindspore/ccsrc/fl/server/server.cc
Modified: mindspore/ccsrc/frontend/optimizer/ad/kpynative.cc
Modified: mindspore/ccsrc/frontend/optimizer/irpass/incorporate_getitem.h
Modified: mindspore/ccsrc/frontend/optimizer/irpass/inline.h
Modified: mindspore/ccsrc/minddata/dataset/core/device_tensor.cc
Modified: mindspore/ccsrc/minddata/dataset/core/tensor.cc
Modified: mindspore/ccsrc/minddata/dataset/engine/datasetops/source/emnist_op.cc
Modified: mindspore/ccsrc/minddata/dataset/engine/datasetops/source/mnist_op.cc
Modified: mindspore/ccsrc/minddata/dataset/engine/datasetops/source/qmnist_op.cc
Modified: mindspore/ccsrc/minddata/dataset/engine/ir/datasetops/dataset_node.cc
Modified: mindspore/ccsrc/minddata/dataset/engine/opt/pre/epoch_ctrl_pass.cc
Modified: mindspore/ccsrc/minddata/dataset/kernels/image/lite_image_utils.cc
Modified: mindspore/ccsrc/pipeline/jit/action.cc
Modified: mindspore/ccsrc/pipeline/jit/static_analysis/evaluator.cc
Modified: mindspore/ccsrc/runtime/device/ascend/executor/tiling/op_tiling_adapter.cc
Modified: mindspore/compression/quant/quant_utils.py
Modified: mindspore/core/abstract/prim_nn.cc
Modified: mindspore/dataset/engine/validators.py
Modified: mindspore/lite/micro/coder/opcoders/nnacl/fp32/affine_fp32_coder.cc
Modified: mindspore/lite/micro/coder/opcoders/nnacl/int8/affine_int8_coder.cc
Modified: mindspore/lite/src/runtime/kernel/ascend310/src/custom_kernel.cc
Modified: mindspore/lite/src/runtime/kernel/opencl/kernel/matmul.cc
Modified: mindspore/lite/src/runtime/kernel/opencl/kernel/strassen.cc
Modified: mindspore/lite/tools/common/graph_util.h
Modified: mindspore/lite/tools/optimizer/fisson/fisson_util.cc
Modified: mindspore/ops/composite/math_ops.py
Modified: mindspore/ops/operations/_inner_ops.py
Modified: mindspore/ops/operations/array_ops.py
Modified: mindspore/ops/operations/math_ops.py
Modified: mindspore/ops/operations/other_ops.py
Modified: mindspore/boost/boost_cell_wrapper.py
Modified: mindspore/ccsrc/backend/kernel_compiler/cpu/update_cache_cpu_kernel.cc
Modified: mindspore/ccsrc/common/trans.cc
Modified: mindspore/ccsrc/frontend/parallel/cache_embedding/cache_embedding.cc
Modified: mindspore/ccsrc/frontend/parallel/ops_info/gather_info.cc
Modified: mindspore/lite/src/common/log_util.h
Modified: mindspore/nn/wrap/loss_scale.py
Modified: mindspore/parallel/nn/moe.py
Modified: tests/mindspore_test_framework/mindspore_test.py
Modified: mindspore/ccsrc/backend/kernel_compiler/cpu/split_cpu_kernel.cc
Modified: mindspore/lite/tools/common/graph_util.h
Modified: mindspore/ccsrc/frontend/parallel/ops_info/gather_info.cc
Modified: mindspore/core/ops/conv2d.cc
Modified: tests/ut/python/model/test_lenet_core_after_exception.py
修改: mindspore/ccsrc/runtime/device/ascend/executor/tiling/op_tiling_adapter.cc 修改: mindspore/compression/quant/quant_utils.py 修改: mindspore/core/abstract/prim_nn.cc 修改: mindspore/dataset/engine/validators.py 修改: mindspore/lite/micro/coder/opcoders/nnacl/fp32/affine_fp32_coder.cc 修改: mindspore/lite/micro/coder/opcoders/nnacl/int8/affine_int8_coder.cc 修改: mindspore/lite/src/runtime/kernel/ascend310/src/custom_kernel.cc 修改: mindspore/lite/src/runtime/kernel/opencl/kernel/matmul.cc 修改: mindspore/lite/src/runtime/kernel/opencl/kernel/strassen.cc 修改: mindspore/lite/tools/common/graph_util.h 修改: mindspore/lite/tools/optimizer/fisson/fisson_util.cc 修改: mindspore/ops/composite/math_ops.py 修改: mindspore/ops/operations/_inner_ops.py 修改: mindspore/ops/operations/array_ops.py 修改: mindspore/ops/operations/math_ops.py 修改: mindspore/ops/operations/other_ops.py 修改: mindspore/boost/boost_cell_wrapper.py 修改: mindspore/ccsrc/backend/kernel_compiler/cpu/update_cache_cpu_kernel.cc 修改: mindspore/ccsrc/common/trans.cc 修改: mindspore/ccsrc/frontend/parallel/cache_embedding/cache_embedding.cc 修改: mindspore/ccsrc/frontend/parallel/ops_info/gather_info.cc 修改: mindspore/lite/src/common/log_util.h 修改: mindspore/nn/wrap/loss_scale.py 修改: mindspore/parallel/nn/moe.py 修改: tests/mindspore_test_framework/mindspore_test.py 修改: mindspore/ccsrc/backend/kernel_compiler/cpu/split_cpu_kernel.cc 修改: mindspore/lite/tools/common/graph_util.h 修改: mindspore/ccsrc/frontend/parallel/ops_info/gather_info.cc 修改: mindspore/core/ops/conv2d.cc 修改: tests/ut/python/model/test_lenet_core_after_exception.py
4 years ago
12345678910111213141516171819202122232425262728293031323334353637383940414243444546474849505152535455565758596061626364656667686970717273747576777879808182838485868788899091929394959697989910010110210310410510610710810911011111211311411511611711811912012112212312412512612712812913013113213313413513613713813914014114214314414514614714814915015115215315415515615715815916016116216316416516616716816917017117217317417517617717817918018118218318418518618718818919019119219319419519619719819920020120220320420520620720820921021121221321421521621721821922022122222322422522622722822923023123223323423523623723823924024124224324424524624724824925025125225325425525625725825926026126226326426526626726826927027127227327427527627727827928028128228328428528628728828929029129229329429529629729829930030130230330430530630730830931031131231331431531631731831932032132232332432532632732832933033133233333433533633733833934034134234334434534634734834935035135235335435535635735835936036136236336436536636736836937037137237337437537637737837938038138238338438538638738838939039139239339439539639739839940040140240340440540640740840941041141241341441541641741841942042142242342442542642742842943043143243343443543643743843944044144244344444544644744844945045145245345445545645745845946046146246346446546646746846947047147247347447547647747847948048148248348448548648748848949049149249349449549649749849950050150250350450550650750850951051151251351451551651751851952052152252352452552652752852953053153253353453553653753853954054154254354454554654754854955055155255355455555655755855956056156256356456556656756856957057157257357457557657757857958058158258358458558658758858959059159259359459559659759859960060160260360460560660760860961061161261361461561661761861962062162262362462562662762862963063163263363463563663763863964064164264364464564664764864965065165265365465565665765865966066166266366466566666766866967067167267367467567667767867968068168268368468568668768868969069169269369469569669769869970070170270
37047057067077087097107117127137147157167177187197207217227237247257267277287297307317327337347357367377387397407417427437447457467477487497507517527537547557567577587597607617627637647657667677687697707717727737747757767777787797807817827837847857867877887897907917927937947957967977987998008018028038048058068078088098108118128138148158168178188198208218228238248258268278288298308318328338348358368378388398408418428438448458468478488498508518528538548558568578588598608618628638648658668678688698708718728738748758768778788798808818828838848858868878888898908918928938948958968978988999009019029039049059069079089099109119129139149159169179189199209219229239249259269279289299309319329339349359369379389399409419429439449459469479489499509519529539549559569579589599609619629639649659669679689699709719729739749759769779789799809819829839849859869879889899909919929939949959969979989991000100110021003100410051006100710081009101010111012101310141015101610171018101910201021102210231024102510261027102810291030103110321033103410351036103710381039104010411042104310441045104610471048104910501051105210531054105510561057105810591060106110621063106410651066106710681069107010711072107310741075107610771078107910801081108210831084108510861087108810891090109110921093109410951096109710981099110011011102110311041105110611071108110911101111111211131114111511161117111811191120112111221123112411251126112711281129113011311132113311341135113611371138113911401141114211431144114511461147114811491150115111521153115411551156115711581159116011611162116311641165116611671168116911701171117211731174117511761177117811791180118111821183118411851186118711881189119011911192119311941195119611971198119912001201120212031204120512061207120812091210121112121213121412151216121712181219122012211222122312241225122612271228122912301231123212331234123512361237123812391240124112421243124412451246124712481249125012511252125312541255125612571258125912601261126212631264126512661267126812691270127112721273127412751276127
71278127912801281128212831284128512861287128812891290129112921293129412951296129712981299130013011302130313041305130613071308130913101311131213131314131513161317131813191320132113221323132413251326132713281329133013311332133313341335133613371338133913401341134213431344134513461347134813491350135113521353135413551356135713581359136013611362136313641365136613671368136913701371137213731374137513761377137813791380138113821383138413851386138713881389139013911392139313941395139613971398139914001401140214031404140514061407140814091410141114121413141414151416141714181419142014211422142314241425142614271428142914301431143214331434143514361437143814391440144114421443144414451446144714481449145014511452145314541455145614571458145914601461146214631464146514661467146814691470147114721473147414751476147714781479148014811482148314841485148614871488148914901491149214931494149514961497149814991500150115021503150415051506150715081509151015111512151315141515151615171518151915201521152215231524152515261527152815291530153115321533153415351536153715381539154015411542154315441545154615471548154915501551155215531554155515561557155815591560156115621563156415651566156715681569157015711572157315741575157615771578157915801581158215831584158515861587158815891590159115921593159415951596159715981599160016011602160316041605160616071608160916101611161216131614161516161617161816191620162116221623162416251626162716281629163016311632163316341635163616371638163916401641164216431644164516461647164816491650165116521653165416551656165716581659166016611662166316641665166616671668166916701671167216731674167516761677167816791680168116821683168416851686168716881689169016911692169316941695169616971698169917001701170217031704170517061707170817091710171117121713171417151716171717181719172017211722172317241725172617271728172917301731173217331734173517361737173817391740174117421743174417451746174717481749175017511752175317541755175617571758175917601761176217631764176517661767176817691770177117721773177417751776177
71778177917801781178217831784178517861787178817891790179117921793179417951796179717981799180018011802180318041805180618071808180918101811181218131814181518161817181818191820182118221823182418251826182718281829183018311832183318341835183618371838183918401841184218431844184518461847184818491850185118521853185418551856185718581859186018611862186318641865186618671868186918701871187218731874187518761877187818791880188118821883188418851886188718881889189018911892189318941895189618971898189919001901190219031904190519061907190819091910191119121913191419151916191719181919192019211922192319241925192619271928192919301931193219331934193519361937193819391940194119421943194419451946194719481949195019511952195319541955195619571958195919601961196219631964196519661967196819691970197119721973197419751976197719781979198019811982198319841985198619871988198919901991199219931994199519961997199819992000200120022003200420052006200720082009201020112012201320142015201620172018201920202021202220232024202520262027202820292030203120322033203420352036203720382039204020412042204320442045204620472048204920502051205220532054205520562057205820592060206120622063206420652066206720682069207020712072207320742075207620772078207920802081208220832084208520862087208820892090209120922093209420952096209720982099210021012102210321042105210621072108210921102111211221132114211521162117211821192120212121222123212421252126212721282129213021312132213321342135213621372138213921402141214221432144214521462147214821492150215121522153215421552156215721582159216021612162216321642165216621672168216921702171217221732174217521762177217821792180218121822183218421852186218721882189219021912192219321942195219621972198219922002201220222032204220522062207220822092210221122122213221422152216221722182219222022212222222322242225222622272228222922302231223222332234223522362237223822392240224122422243224422452246224722482249225022512252225322542255225622572258225922602261226222632264226522662267226822692270227122722273227422752276227
72278227922802281228222832284228522862287228822892290229122922293229422952296229722982299230023012302230323042305230623072308230923102311231223132314231523162317231823192320232123222323232423252326232723282329233023312332233323342335233623372338233923402341234223432344234523462347234823492350235123522353235423552356235723582359236023612362236323642365236623672368236923702371237223732374237523762377237823792380238123822383238423852386238723882389239023912392239323942395239623972398239924002401240224032404240524062407240824092410241124122413241424152416241724182419242024212422242324242425242624272428242924302431243224332434243524362437243824392440244124422443244424452446244724482449245024512452245324542455245624572458245924602461246224632464246524662467246824692470247124722473247424752476247724782479248024812482248324842485248624872488248924902491249224932494249524962497249824992500250125022503250425052506250725082509251025112512251325142515251625172518251925202521252225232524252525262527252825292530253125322533253425352536253725382539254025412542254325442545254625472548254925502551255225532554255525562557255825592560256125622563256425652566256725682569257025712572257325742575257625772578257925802581258225832584258525862587258825892590259125922593259425952596259725982599260026012602260326042605260626072608260926102611261226132614261526162617261826192620262126222623262426252626262726282629263026312632263326342635263626372638263926402641264226432644264526462647264826492650265126522653265426552656265726582659266026612662266326642665266626672668266926702671267226732674267526762677267826792680268126822683268426852686268726882689269026912692269326942695269626972698269927002701270227032704270527062707270827092710271127122713271427152716271727182719272027212722272327242725272627272728272927302731273227332734273527362737273827392740274127422743274427452746274727482749275027512752275327542755275627572758275927602761276227632764276527662767276827692770277127722773277427752776277
72778277927802781278227832784278527862787278827892790279127922793279427952796279727982799280028012802280328042805280628072808280928102811281228132814281528162817281828192820282128222823282428252826282728282829283028312832283328342835283628372838283928402841284228432844284528462847284828492850285128522853285428552856285728582859286028612862286328642865286628672868286928702871287228732874287528762877287828792880288128822883288428852886288728882889289028912892289328942895289628972898289929002901290229032904290529062907290829092910291129122913291429152916291729182919292029212922292329242925292629272928292929302931293229332934293529362937293829392940294129422943294429452946294729482949295029512952295329542955295629572958295929602961296229632964296529662967296829692970297129722973297429752976297729782979298029812982298329842985298629872988298929902991299229932994299529962997299829993000300130023003300430053006300730083009301030113012301330143015301630173018301930203021302230233024302530263027302830293030303130323033303430353036303730383039304030413042304330443045304630473048304930503051305230533054305530563057305830593060306130623063306430653066306730683069307030713072307330743075307630773078307930803081308230833084308530863087308830893090309130923093309430953096309730983099310031013102310331043105310631073108310931103111311231133114311531163117311831193120312131223123312431253126312731283129313031313132313331343135313631373138313931403141314231433144314531463147314831493150315131523153315431553156315731583159316031613162316331643165316631673168316931703171317231733174317531763177317831793180318131823183318431853186318731883189319031913192319331943195319631973198319932003201320232033204320532063207320832093210321132123213321432153216321732183219322032213222322332243225322632273228322932303231323232333234323532363237323832393240324132423243324432453246324732483249325032513252325332543255325632573258325932603261326232633264326532663267326832693270327132723273327432753276327
73278327932803281328232833284328532863287328832893290329132923293329432953296329732983299330033013302330333043305330633073308330933103311331233133314331533163317331833193320332133223323332433253326332733283329333033313332333333343335333633373338333933403341334233433344334533463347334833493350335133523353335433553356335733583359336033613362336333643365336633673368336933703371337233733374337533763377337833793380338133823383338433853386338733883389339033913392339333943395339633973398339934003401340234033404340534063407340834093410341134123413341434153416341734183419342034213422342334243425342634273428342934303431343234333434343534363437343834393440344134423443344434453446344734483449345034513452345334543455345634573458345934603461346234633464346534663467346834693470347134723473347434753476347734783479348034813482348334843485348634873488348934903491349234933494349534963497349834993500350135023503350435053506350735083509351035113512351335143515351635173518351935203521352235233524352535263527352835293530353135323533353435353536353735383539354035413542354335443545354635473548354935503551355235533554355535563557355835593560356135623563356435653566356735683569357035713572357335743575357635773578357935803581358235833584358535863587358835893590359135923593359435953596359735983599360036013602360336043605360636073608360936103611361236133614361536163617361836193620362136223623362436253626362736283629363036313632363336343635363636373638363936403641364236433644364536463647364836493650365136523653365436553656365736583659366036613662366336643665366636673668366936703671367236733674367536763677367836793680368136823683368436853686368736883689369036913692369336943695369636973698369937003701370237033704370537063707370837093710371137123713371437153716371737183719372037213722372337243725372637273728372937303731373237333734373537363737373837393740374137423743374437453746374737483749375037513752375337543755375637573758375937603761376237633764376537663767376837693770377137723773377437753776377
73778377937803781378237833784378537863787378837893790379137923793379437953796379737983799380038013802380338043805380638073808380938103811381238133814381538163817381838193820382138223823382438253826382738283829383038313832383338343835383638373838383938403841384238433844384538463847384838493850385138523853385438553856385738583859386038613862386338643865386638673868386938703871387238733874387538763877387838793880388138823883388438853886388738883889389038913892389338943895389638973898389939003901390239033904390539063907390839093910391139123913391439153916391739183919392039213922392339243925392639273928392939303931393239333934393539363937393839393940394139423943394439453946394739483949395039513952395339543955395639573958395939603961396239633964396539663967396839693970397139723973397439753976397739783979398039813982398339843985398639873988398939903991399239933994399539963997399839994000400140024003400440054006400740084009401040114012401340144015401640174018401940204021402240234024402540264027402840294030403140324033403440354036403740384039404040414042404340444045404640474048404940504051405240534054405540564057405840594060406140624063406440654066406740684069407040714072407340744075407640774078407940804081408240834084408540864087408840894090409140924093409440954096409740984099410041014102410341044105410641074108410941104111411241134114411541164117411841194120412141224123412441254126412741284129413041314132413341344135413641374138413941404141414241434144414541464147414841494150415141524153415441554156415741584159416041614162416341644165416641674168416941704171417241734174417541764177417841794180418141824183418441854186418741884189419041914192419341944195419641974198419942004201420242034204420542064207420842094210421142124213421442154216421742184219422042214222422342244225422642274228422942304231423242334234423542364237423842394240424142424243424442454246424742484249425042514252425342544255425642574258425942604261426242634264426542664267426842694270427142724273427442754276427
74278427942804281428242834284428542864287428842894290429142924293429442954296429742984299430043014302430343044305430643074308430943104311431243134314431543164317431843194320432143224323432443254326432743284329433043314332433343344335433643374338433943404341434243434344434543464347434843494350435143524353435443554356435743584359436043614362436343644365436643674368436943704371437243734374437543764377437843794380438143824383438443854386438743884389439043914392439343944395439643974398439944004401440244034404440544064407440844094410441144124413441444154416441744184419442044214422442344244425442644274428442944304431443244334434443544364437443844394440444144424443444444454446444744484449445044514452445344544455445644574458445944604461446244634464446544664467446844694470447144724473447444754476447744784479448044814482448344844485448644874488448944904491449244934494449544964497449844994500450145024503450445054506450745084509451045114512451345144515451645174518451945204521452245234524452545264527452845294530453145324533453445354536453745384539454045414542454345444545454645474548454945504551455245534554455545564557455845594560456145624563456445654566456745684569457045714572457345744575457645774578457945804581458245834584458545864587458845894590459145924593459445954596459745984599460046014602460346044605460646074608460946104611461246134614461546164617461846194620462146224623462446254626462746284629463046314632463346344635463646374638463946404641464246434644464546464647464846494650465146524653465446554656465746584659466046614662466346644665466646674668466946704671467246734674467546764677467846794680468146824683468446854686468746884689469046914692469346944695469646974698469947004701470247034704470547064707470847094710471147124713471447154716471747184719472047214722472347244725472647274728472947304731473247334734473547364737473847394740474147424743474447454746474747484749475047514752475347544755475647574758475947604761476247634764476547664767476847694770477147724773477447754776477
74778477947804781478247834784478547864787478847894790479147924793479447954796479747984799480048014802480348044805480648074808480948104811481248134814481548164817481848194820482148224823482448254826482748284829483048314832483348344835483648374838483948404841484248434844484548464847484848494850485148524853485448554856485748584859486048614862486348644865486648674868486948704871487248734874487548764877487848794880488148824883488448854886488748884889489048914892489348944895489648974898489949004901490249034904490549064907490849094910491149124913491449154916491749184919492049214922492349244925492649274928492949304931493249334934493549364937493849394940494149424943494449454946494749484949495049514952495349544955495649574958495949604961496249634964496549664967496849694970497149724973497449754976497749784979498049814982498349844985498649874988498949904991499249934994499549964997499849995000500150025003500450055006500750085009501050115012501350145015501650175018501950205021502250235024502550265027502850295030503150325033503450355036503750385039504050415042504350445045504650475048504950505051505250535054505550565057505850595060506150625063506450655066506750685069507050715072507350745075507650775078507950805081508250835084508550865087508850895090509150925093509450955096509750985099510051015102510351045105510651075108510951105111511251135114511551165117511851195120512151225123512451255126512751285129513051315132513351345135513651375138513951405141514251435144514551465147514851495150515151525153515451555156515751585159516051615162516351645165516651675168516951705171517251735174517551765177517851795180518151825183518451855186518751885189519051915192519351945195519651975198519952005201520252035204520552065207520852095210521152125213521452155216521752185219522052215222522352245225522652275228522952305231523252335234523552365237523852395240524152425243524452455246524752485249525052515252525352545255525652575258525952605261526252635264526552665267526852695270527152725273527452755276527
75278527952805281528252835284528552865287528852895290529152925293529452955296529752985299530053015302530353045305530653075308530953105311531253135314531553165317531853195320532153225323532453255326532753285329533053315332533353345335533653375338533953405341534253435344534553465347534853495350535153525353535453555356535753585359536053615362536353645365536653675368536953705371537253735374537553765377537853795380538153825383538453855386538753885389539053915392539353945395539653975398539954005401540254035404540554065407540854095410541154125413541454155416541754185419542054215422542354245425542654275428542954305431543254335434543554365437543854395440544154425443544454455446544754485449545054515452545354545455545654575458545954605461546254635464546554665467546854695470547154725473547454755476547754785479548054815482548354845485548654875488548954905491549254935494549554965497549854995500550155025503550455055506550755085509551055115512551355145515551655175518551955205521552255235524552555265527552855295530553155325533553455355536553755385539554055415542554355445545554655475548554955505551555255535554555555565557555855595560556155625563556455655566556755685569557055715572557355745575557655775578557955805581558255835584558555865587558855895590559155925593559455955596559755985599560056015602560356045605560656075608560956105611561256135614561556165617561856195620562156225623562456255626562756285629563056315632563356345635563656375638563956405641564256435644564556465647564856495650565156525653565456555656565756585659566056615662566356645665566656675668566956705671567256735674567556765677567856795680568156825683568456855686568756885689569056915692569356945695569656975698569957005701570257035704570557065707570857095710571157125713571457155716571757185719572057215722572357245725572657275728572957305731573257335734573557365737573857395740574157425743574457455746574757485749575057515752575357545755575657575758575957605761576257635764576557665767576857695770577157725773577457755776577
75778577957805781578257835784578557865787578857895790579157925793579457955796579757985799580058015802580358045805580658075808580958105811581258135814581558165817581858195820582158225823582458255826582758285829583058315832583358345835583658375838583958405841584258435844584558465847584858495850585158525853585458555856585758585859586058615862586358645865586658675868586958705871587258735874587558765877587858795880588158825883588458855886588758885889589058915892589358945895589658975898589959005901590259035904590559065907590859095910591159125913591459155916591759185919592059215922592359245925592659275928592959305931593259335934593559365937593859395940594159425943594459455946594759485949595059515952595359545955595659575958595959605961596259635964596559665967596859695970597159725973597459755976597759785979598059815982598359845985598659875988598959905991599259935994599559965997599859996000600160026003600460056006600760086009601060116012601360146015601660176018601960206021602260236024602560266027602860296030603160326033603460356036603760386039604060416042604360446045604660476048604960506051605260536054605560566057605860596060606160626063606460656066606760686069607060716072607360746075607660776078607960806081608260836084608560866087608860896090609160926093609460956096609760986099610061016102610361046105610661076108610961106111611261136114611561166117611861196120612161226123612461256126612761286129613061316132613361346135613661376138613961406141614261436144614561466147614861496150615161526153615461556156615761586159616061616162616361646165616661676168616961706171617261736174617561766177617861796180618161826183618461856186618761886189619061916192619361946195619661976198619962006201620262036204620562066207620862096210621162126213621462156216621762186219622062216222622362246225622662276228622962306231623262336234623562366237623862396240624162426243624462456246624762486249625062516252625362546255625662576258625962606261626262636264626562666267626862696270627162726273627462756276627
76278627962806281628262836284628562866287628862896290629162926293629462956296629762986299630063016302630363046305630663076308630963106311631263136314631563166317631863196320632163226323632463256326632763286329633063316332633363346335633663376338633963406341634263436344634563466347634863496350635163526353635463556356635763586359636063616362636363646365636663676368636963706371637263736374637563766377637863796380638163826383638463856386638763886389639063916392639363946395639663976398639964006401640264036404640564066407640864096410641164126413641464156416641764186419642064216422642364246425642664276428642964306431643264336434643564366437643864396440644164426443644464456446644764486449645064516452645364546455645664576458645964606461646264636464646564666467646864696470647164726473647464756476647764786479648064816482648364846485648664876488648964906491649264936494649564966497649864996500650165026503650465056506650765086509651065116512651365146515651665176518651965206521652265236524652565266527652865296530653165326533653465356536653765386539654065416542654365446545654665476548654965506551655265536554655565566557655865596560656165626563656465656566656765686569657065716572657365746575657665776578657965806581658265836584658565866587658865896590659165926593659465956596659765986599660066016602660366046605660666076608660966106611661266136614661566166617661866196620662166226623662466256626662766286629663066316632663366346635663666376638663966406641664266436644664566466647664866496650665166526653665466556656665766586659666066616662666366646665666666676668666966706671667266736674667566766677667866796680668166826683668466856686668766886689669066916692669366946695669666976698669967006701670267036704670567066707670867096710671167126713671467156716671767186719672067216722672367246725672667276728672967306731673267336734673567366737673867396740674167426743674467456746674767486749675067516752675367546755675667576758675967606761676267636764676567666767676867696770677167726773677467756776677
7677867796780678167826783678467856786678767886789679067916792679367946795679667976798679968006801680268036804680568066807680868096810681168126813681468156816681768186819682068216822682368246825682668276828682968306831683268336834683568366837683868396840
# coding: utf-8
# Copyright 2020-2021 Huawei Technologies Co., Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ============================================================================
"""Operators for array."""
import copy
import functools
import itertools
import numbers
from collections import Counter

import numpy as np

from mindspore import log as logger
from mindspore.common.initializer import Zero
from .. import signature as sig
from .._utils import get_broadcast_shape, is_shape_unknown
from .._utils import get_concat_offset
from ..operations.math_ops import _infer_shape_reduce
from ..primitive import Primitive, PrimitiveWithInfer, PrimitiveWithCheck, prim_attr_register, _run_op
from ..._checkparam import Rel
from ..._checkparam import Validator as validator
from ..._checkparam import _check_3d_int_or_tuple
from ...common import dtype as mstype
from ...common._decorator import deprecated
from ...common.parameter import Parameter
from ...common.tensor import Tensor
from ..._c_expression import Tensor as Tensor_
  38. class _ScatterOp(PrimitiveWithInfer):
  39. """
  40. Defines Scatter operators
  41. """
  42. __mindspore_signature__ = (
  43. sig.make_sig('x', sig.sig_rw.RW_WRITE, dtype=sig.sig_dtype.T),
  44. sig.make_sig('indices', dtype=sig.sig_dtype.T1),
  45. sig.make_sig('updates', dtype=sig.sig_dtype.T)
  46. )
  47. def _check_scatter_shape(self, x_shape, indices_shape, updates_shape, prim_name):
  48. if indices_shape != [-1] and updates_shape and updates_shape != indices_shape + x_shape[1:]:
  49. raise ValueError(f"For '{prim_name}', "
  50. f"updates_shape = indices_shape + x_shape[1:], but got x_shape: {x_shape}, "
  51. f"indices_shape: {indices_shape}, updates_shape: {updates_shape}.")
  52. @prim_attr_register
  53. def __init__(self, use_locking=False):
  54. """Initialize _ScatterOp"""
  55. validator.check_value_type('use_locking', use_locking, [bool], self.name)
  56. self.init_prim_io_names(inputs=['x', 'indices', 'updates'], outputs=['y'])
  57. self.add_prim_attr('side_effect_mem', True)
  58. def infer_shape(self, x_shape, indices_shape, updates_shape):
  59. self._check_scatter_shape(x_shape, indices_shape, updates_shape, self.name)
  60. return x_shape
  61. def infer_dtype(self, x_dtype, indices_dtype, updates_dtype):
  62. validator.check_tensor_dtype_valid('indices', indices_dtype, [mstype.int32], self.name)
  63. args = {"x": x_dtype, "updates": updates_dtype}
  64. validator.check_tensors_dtypes_same_and_valid(args, mstype.number_type, self.name)
  65. return x_dtype
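The shape constraint enforced by `_check_scatter_shape` can be illustrated with a standalone NumPy sketch (`scatter_update` below is a hypothetical helper, not the MindSpore kernel): for ScatterUpdate-like operators, `updates.shape` must equal `indices.shape + x.shape[1:]`.

```python
import numpy as np

# Standalone sketch of the ScatterUpdate shape rule checked above:
# updates.shape must equal indices.shape + x.shape[1:].
def scatter_update(x, indices, updates):
    if updates.shape != indices.shape + x.shape[1:]:
        raise ValueError("updates_shape must equal indices_shape + x_shape[1:]")
    out = x.copy()
    for i, idx in enumerate(indices):
        out[idx] = updates[i]  # overwrite the selected rows
    return out

x = np.zeros((3, 2), np.float32)
indices = np.array([0, 2])
updates = np.ones((2, 2), np.float32)
result = scatter_update(x, indices, updates)
```

The same rule explains the error message text: any mismatch between `updates_shape` and `indices_shape + x_shape[1:]` is rejected before the kernel runs.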
  66. class _ScatterOpDynamic(PrimitiveWithCheck):
  67. """
  68. Defines Scatter operators with dynamic shape
  69. """
  70. __mindspore_signature__ = (
  71. sig.make_sig('x', sig.sig_rw.RW_WRITE, dtype=sig.sig_dtype.T),
  72. sig.make_sig('indices', dtype=sig.sig_dtype.T1),
  73. sig.make_sig('updates', dtype=sig.sig_dtype.T)
  74. )
  75. def _check_scatter_shape(self, x_shape, indices_shape, updates_shape, prim_name):
  76. # x_shape cannot be dynamic
  77. if np.any(np.array(x_shape) == -1):
  78. raise ValueError(f"For '{prim_name}', the 'input_x' does not support dynamic shape, "
  79. f"but got the shape of 'input_x' is {x_shape}.")
  80. # support indices and updates dynamic
  81. if np.any(np.array(indices_shape) == -1) or np.any(np.array(updates_shape) == -1):
  82. pass
  83. elif indices_shape != [-1] and updates_shape and updates_shape != indices_shape + x_shape[1:]:
  84. raise ValueError(f"For '{prim_name}', "
  85. f"updates_shape = indices_shape + x_shape[1:], but got x_shape: {x_shape}, "
  86. f"indices_shape: {indices_shape}, updates_shape: {updates_shape}.")
  87. @prim_attr_register
  88. def __init__(self, use_locking=False):
  89. """Initialize _ScatterOpDynamic"""
  90. validator.check_value_type('use_locking', use_locking, [bool], self.name)
  91. self.init_prim_io_names(inputs=['x', 'indices', 'updates'], outputs=['y'])
  92. self.add_prim_attr('side_effect_mem', True)
  93. def check_shape(self, x_shape, indices_shape, updates_shape):
  94. self._check_scatter_shape(x_shape, indices_shape, updates_shape, self.name)
  95. def check_dtype(self, x_dtype, indices_dtype, updates_dtype):
  96. validator.check_tensor_dtype_valid('indices', indices_dtype, [mstype.int32], self.name)
  97. args = {"x": x_dtype, "updates": updates_dtype}
  98. validator.check_tensors_dtypes_same_and_valid(args, mstype.number_type, self.name)
  99. class _ScatterNdOp(_ScatterOp):
  100. """
  101. Defines _ScatterNd operators
  102. """
  103. def _check_scatter_shape(self, x_shape, indices_shape, updates_shape, prim_name):
  104. validator.check('the dimension of x', len(x_shape),
  105. 'the dimension of indices', indices_shape[-1], Rel.GE)
  106. if indices_shape[:-1] + x_shape[indices_shape[-1]:] != updates_shape:
  107. raise ValueError(f"For '{prim_name}', updates_shape = "
  108. f"indices_shape[:-1] + x_shape[indices_shape[-1]:], but got x_shape: {x_shape}, "
  109. f"indices_shape: {indices_shape}, updates_shape: {updates_shape}.")
  110. def _check_infer_attr_reduce(axis, keep_dims, prim_name):
  111. validator.check_value_type('keep_dims', keep_dims, [bool], prim_name)
  112. validator.check_value_type('axis', axis, [int, tuple], prim_name)
  113. if isinstance(axis, tuple):
  114. for index, value in enumerate(axis):
  115. validator.check_value_type('axis[%d]' % index, value, [int], prim_name)
  116. class ExpandDims(PrimitiveWithInfer):
  117. """
  118. Adds an additional dimension to `input_x` at the given axis.
  119. Note:
  120. If the specified axis is a negative number, the index is counted
  121. backward from the end and starts at 1.
  122. Inputs:
  123. - **input_x** (Tensor) - The shape of tensor is :math:`(x_1, x_2, ..., x_R)`.
  124. - **axis** (int) - Specifies the dimension index at which to expand
  125. the shape of `input_x`. The value of axis must be in the range
  126. `[-input_x.ndim-1, input_x.ndim]`. Only constant value is allowed.
  127. Outputs:
  128. Tensor, the shape of tensor is :math:`(1, x_1, x_2, ..., x_R)` if the
  129. value of `axis` is 0. It has the same data type as `input_x`.
  130. Raises:
  131. TypeError: If `axis` is not an int. ValueError: If `axis` is not in the range `[-input_x.ndim-1, input_x.ndim]`.
  132. Supported Platforms:
  133. ``Ascend`` ``GPU`` ``CPU``
  134. Examples:
  135. >>> input_tensor = Tensor(np.array([[2, 2], [2, 2]]), mindspore.float32)
  136. >>> expand_dims = ops.ExpandDims()
  137. >>> output = expand_dims(input_tensor, 0)
  138. >>> print(output)
  139. [[[2. 2.]
  140. [2. 2.]]]
  141. """
  142. @prim_attr_register
  143. def __init__(self):
  144. """Initialize ExpandDims"""
  145. self.init_prim_io_names(inputs=['x', 'axis'], outputs=['output'])
  146. def __infer__(self, x, axis):
  147. validator.check_subclass("x", x['dtype'], mstype.tensor, self.name)
  148. x_shape = list(x['shape'])
  149. axis_v = axis['value']
  150. rank = len(x_shape)
  151. validator.check_int_range(axis_v, -rank - 1, rank, Rel.INC_BOTH, 'axis', self.name)
  152. value = None
  153. if x['value'] is not None:
  154. value = x['value'].asnumpy()
  155. value = np.expand_dims(value, axis_v)
  156. value = Tensor(value)
  157. if axis_v < 0:
  158. axis_v = rank + 1 + axis_v
  159. x_shape.insert(axis_v, 1)
  160. out = {'shape': x_shape,
  161. 'dtype': x['dtype'],
  162. 'value': value}
  163. if 'min_shape' in x and 'max_shape' in x:
  164. out['min_shape'] = x['min_shape']
  165. out['min_shape'].insert(axis_v, 1)
  166. out['max_shape'] = x['max_shape']
  167. out['max_shape'].insert(axis_v, 1)
  168. return out
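The axis normalization in `ExpandDims.__infer__` can be sketched on plain tuples (`expand_dims_shape` is an illustrative helper, not part of the API): a negative `axis` is remapped to `rank + 1 + axis` before a 1 is inserted into the shape.

```python
import numpy as np

# Sketch of the shape rule implemented in ExpandDims.__infer__.
def expand_dims_shape(shape, axis):
    rank = len(shape)
    if not -rank - 1 <= axis <= rank:
        raise ValueError("axis out of range")
    if axis < 0:
        axis = rank + 1 + axis  # normalize a negative axis
    out = list(shape)
    out.insert(axis, 1)
    return tuple(out)

a = expand_dims_shape((2, 2), 0)
b = expand_dims_shape((2, 2), -1)
c = np.expand_dims(np.ones((2, 2)), -1).shape
```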
  169. class DType(Primitive):
  170. """
  171. Returns the data type of the input tensor as mindspore.dtype.
  172. Inputs:
  173. - **input_x** (Tensor) - The shape of tensor is :math:`(x_1, x_2, ..., x_R)`.
  174. Outputs:
  175. mindspore.dtype, the data type of a tensor.
  176. Raises:
  177. TypeError: If `input_x` is not a Tensor.
  178. Supported Platforms:
  179. ``Ascend`` ``GPU`` ``CPU``
  180. Examples:
  181. >>> input_tensor = Tensor(np.array([[2, 2], [2, 2]]), mindspore.float32)
  182. >>> output = ops.DType()(input_tensor)
  183. >>> print(output)
  184. Float32
  185. """
  186. @prim_attr_register
  187. def __init__(self):
  188. """Initialize DType"""
  189. class SameTypeShape(PrimitiveWithInfer):
  190. """
  191. Checks whether the data type and shape of two tensors are the same.
  192. Inputs:
  193. - **input_x** (Tensor) - The shape of tensor is :math:`(x_1, x_2, ..., x_R)`.
  194. - **input_y** (Tensor) - The shape of tensor is :math:`(x_1, x_2, ..., x_S)`.
  195. Outputs:
  196. Tensor, the shape of tensor is :math:`(x_1, x_2, ..., x_R)`,
  197. if data type and shape of `input_x` and `input_y` are the same.
  198. Raises:
  199. TypeError: If the data types of `input_x` and `input_y` are not the same.
  200. ValueError: If the shapes of `input_x` and `input_y` are not the same.
  201. Supported Platforms:
  202. ``Ascend`` ``GPU`` ``CPU``
  203. Examples:
  204. >>> input_x = Tensor(np.array([[2, 2], [2, 2]]), mindspore.float32)
  205. >>> input_y = Tensor(np.array([[2, 2], [2, 2]]), mindspore.float32)
  206. >>> output = ops.SameTypeShape()(input_x, input_y)
  207. >>> print(output)
  208. [[2. 2.]
  209. [2. 2.]]
  210. """
  211. @prim_attr_register
  212. def __init__(self):
  213. """Initialize Same"""
  214. def __call__(self, x, y):
  215. """run in PyNative mode"""
  216. validator.check_value_type('x', x, Tensor, self.name)
  217. validator.check_value_type('y', y, Tensor, self.name)
  218. validator.check('x dtype', x.dtype, 'y dtype', y.dtype, Rel.EQ, self.name, TypeError)
  219. validator.check('x shape', x.shape, 'y shape', y.shape, Rel.EQ, self.name)
  220. return x
  221. def __infer__(self, x, y):
  222. validator.check_subclass('x', x['dtype'], mstype.tensor, self.name)
  223. validator.check_subclass('y', y['dtype'], mstype.tensor, self.name)
  224. validator.check('x dtype', x['dtype'], 'y dtype', y['dtype'], Rel.EQ, self.name, TypeError)
  225. validator.check('x shape', x['shape'], 'y shape', y['shape'], Rel.EQ, self.name)
  226. return x
  227. class Cast(PrimitiveWithInfer):
  228. """
  229. Returns a tensor with the new specified data type.
  230. Inputs:
  231. - **input_x** (Union[Tensor, Number]) - The shape of tensor is :math:`(x_1, x_2, ..., x_R)`.
  232. The tensor to be cast.
  233. - **type** (dtype.Number) - The valid data type of the output tensor. Only constant value is allowed.
  234. Outputs:
  235. Tensor, the shape of tensor is the same as `input_x`, :math:`(x_1, x_2, ..., x_R)`.
  236. Raises:
  237. TypeError: If `input_x` is neither Tensor nor Number.
  238. TypeError: If `type` is not a Number.
  239. Supported Platforms:
  240. ``Ascend`` ``GPU`` ``CPU``
  241. Examples:
  242. >>> input_np = np.random.randn(2, 3, 4, 5).astype(np.float32)
  243. >>> input_x = Tensor(input_np)
  244. >>> type_dst = mindspore.int32
  245. >>> cast = ops.Cast()
  246. >>> output = cast(input_x, type_dst)
  247. >>> print(output.dtype)
  248. Int32
  249. >>> print(output.shape)
  250. (2, 3, 4, 5)
  251. """
  252. @prim_attr_register
  253. def __init__(self):
  254. # If a primitive needs to call setattr in __infer__, this flag must be added.
  255. """Initialize Cast"""
  256. self.init_prim_io_names(inputs=['x', 'dst_type'], outputs=['output'])
  257. def check_elim(self, x, dtype):
  258. if isinstance(x, (Tensor, numbers.Number, Parameter)):
  259. if isinstance(x, Parameter):
  260. data = x.data
  261. if data.dtype == dtype:
  262. return (True, x)
  263. if isinstance(x, Tensor) and x.dtype == dtype:
  264. x = Tensor(x)
  265. x.set_cast_dtype()
  266. return (True, x)
  267. if isinstance(x, numbers.Number):
  268. return (True, Tensor(x, dtype=dtype))
  269. return (False, None)
  270. def __infer__(self, x, t):
  271. src_type = x['dtype']
  272. dst_type = t['value']
  273. validator.check_subclass("input_x", src_type, [mstype.tensor, mstype.number], self.name)
  274. validator.check_subclass("type", dst_type, mstype.number, self.name)
  275. if isinstance(src_type, type(mstype.tensor)):
  276. src_type = x['dtype'].element_type()
  277. if isinstance(dst_type, type(mstype.tensor)):
  278. dst_type = dst_type.element_type()
  279. self.add_prim_attr('DstT', dst_type)
  280. self.add_prim_attr('SrcT', src_type)
  281. self.add_prim_attr('dst_type', dst_type)
  282. value = None
  283. if x['value'] is not None:
  284. np_dst_type = mstype.dtype_to_nptype(dst_type)
  285. if isinstance(x['value'], (int, float)):
  286. value = Tensor(np.array(x['value']).astype(np_dst_type))
  287. else:
  288. value = Tensor(x['value'].asnumpy().astype(np_dst_type))
  289. out = {'shape': x['shape'],
  290. 'dtype': mstype.tensor_type(t['value']),
  291. 'value': value}
  292. if 'min_shape' in x and 'max_shape' in x:
  293. out['min_shape'] = x['min_shape']
  294. out['max_shape'] = x['max_shape']
  295. return out
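The constant-folding branch at the end of `Cast.__infer__` can be sketched with NumPy alone (`fold_cast` is a hypothetical helper): when the input value is known at compile time, the cast result is computed eagerly instead of being deferred to runtime.

```python
import numpy as np

# Sketch of the value folding in Cast.__infer__: a Python scalar is first
# wrapped in an array, then converted to the destination NumPy dtype.
def fold_cast(value, np_dst_type):
    if isinstance(value, (int, float)):
        return np.array(value).astype(np_dst_type)
    return value.astype(np_dst_type)

scalar_out = fold_cast(3.7, np.int32)
tensor_out = fold_cast(np.ones(2, np.float32), np.int32)
```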
  296. class IsSubClass(PrimitiveWithInfer):
  297. """
  298. Checks whether this type is a sub-class of another type.
  299. Inputs:
  300. - **sub_type** (mindspore.dtype) - The type to be checked. Only constant value is allowed.
  301. - **type_** (mindspore.dtype) - The target type. Only constant value is allowed.
  302. Outputs:
  303. bool, the check result.
  304. Raises:
  305. TypeError: If `sub_type` or `type_` is not a Type.
  306. Supported Platforms:
  307. ``Ascend`` ``GPU`` ``CPU``
  308. Examples:
  309. >>> output = ops.IsSubClass()(mindspore.int32, mindspore.intc)
  310. >>> print(output)
  311. True
  312. """
  313. @prim_attr_register
  314. def __init__(self):
  315. pass
  316. def __infer__(self, sub_type, type_):
  317. sub_type_t = sub_type['value']
  318. type_v = type_['value']
  319. validator.check_value_type("sub_type", sub_type_t, [mstype.Type], self.name)
  320. validator.check_value_type("type_", type_v, [mstype.Type], self.name)
  321. value = mstype.issubclass_(sub_type_t, type_v)
  322. out = {'shape': (),
  323. 'dtype': mstype.type_type,
  324. 'value': value}
  325. return out
  326. class IsInstance(PrimitiveWithInfer):
  327. """
  328. Checks whether an object is an instance of a target type.
  329. Inputs:
  330. - **inst** (Any Object) - The instance to be checked. Only constant value is allowed.
  331. - **type_** (mindspore.dtype) - The target type. Only constant value is allowed.
  332. Outputs:
  333. bool, the check result.
  334. Raises:
  335. TypeError: If `type_` is not a Type.
  336. Supported Platforms:
  337. ``Ascend`` ``GPU`` ``CPU``
  338. Examples:
  339. >>> inst = 1
  340. >>> output = ops.IsInstance()(inst, mindspore.int32)
  341. >>> print(output)
  342. False
  343. """
  344. @prim_attr_register
  345. def __init__(self):
  346. pass
  347. def __infer__(self, inst, type_):
  348. sub_type_t = inst['dtype']
  349. type_v = type_['value']
  350. validator.check_value_type("type_", type_v, [mstype.Type], self.name)
  351. if type_v == mstype.list_:
  352. value = isinstance(sub_type_t, list)
  353. elif type_v == mstype.tuple_:
  354. value = isinstance(sub_type_t, tuple)
  355. else:
  356. value = mstype.issubclass_(sub_type_t, type_v)
  357. out = {'shape': (),
  358. 'dtype': mstype.type_type,
  359. 'value': value}
  360. return out
  361. class Reshape(PrimitiveWithInfer):
  362. """
  363. Reshapes the input tensor with the same values based on a given shape tuple.
  364. The 'input_shape' can contain at most one -1, in which case that dimension is inferred from the remaining
  365. dimensions and the number of elements in the input.
  366. Inputs:
  367. - **input_x** (Tensor) - The shape of tensor is :math:`(x_1, x_2, ..., x_R)`.
  368. - **input_shape** (tuple[int]) - The input tuple is constructed by multiple
  369. integers, i.e., :math:`(y_1, y_2, ..., y_S)`. Only constant value is allowed.
  370. Outputs:
  371. Tensor, the shape of tensor is :math:`(y_1, y_2, ..., y_S)`.
  372. Raises:
  373. ValueError: If the shape tuple contains more than one -1; if the product of
  374. its known elements is less than or equal to 0 or does not evenly divide the
  375. input's element count; or if the resulting shape does not match the input's array size.
  376. Supported Platforms:
  377. ``Ascend`` ``GPU`` ``CPU``
  378. Examples:
  379. >>> input_x = Tensor(np.array([[-0.1, 0.3, 3.6], [0.4, 0.5, -3.2]]), mindspore.float32)
  380. >>> reshape = ops.Reshape()
  381. >>> output = reshape(input_x, (3, 2))
  382. >>> print(output)
  383. [[-0.1 0.3]
  384. [ 3.6 0.4]
  385. [ 0.5 -3.2]]
  386. """
  387. @prim_attr_register
  388. def __init__(self):
  389. """Initialize Reshape"""
  390. self.init_prim_io_names(inputs=['tensor', 'shape'], outputs=['output'])
  391. def _get_shape_and_range(self, x, shape):
  392. """ get min and max shape when output shape is dynamic"""
  393. min_shape = None
  394. max_shape = None
  395. x_shp = x['shape']
  396. if is_shape_unknown(shape['shape']):
  397. out_shape = [-2]
  398. return out_shape, min_shape, max_shape
  399. shape_rank = shape['shape'][0]
  400. if not x_shp:
  401. # x is a scalar, output shape fixed
  402. out_shape = [1] * shape_rank
  403. return out_shape, min_shape, max_shape
  404. out_shape = [-1] * shape_rank
  405. if "max_value" in shape and "min_value" in shape:
  406. min_shape = shape["min_value"]
  407. max_shape = shape["max_value"]
  408. if len(min_shape) != shape_rank or len(max_shape) != shape_rank:
  409. raise RuntimeError("The primitive[Reshape]'s input[shape] min or max value does not match the shape rank.")
  410. for i in range(shape_rank):
  411. if min_shape[i] == max_shape[i]:
  412. out_shape[i] = min_shape[i]
  413. elif is_shape_unknown(x_shp) and "max_shape" in x:
  414. # when dynamic memory allocation is supported, max_shape can be left out
  415. min_shape = [1] * shape_rank
  416. max_shape = [int(np.prod(x["max_shape"]))] * shape_rank
  417. return out_shape, min_shape, max_shape
  418. def __infer__(self, x, shape):
  419. shape_v = shape['value']
  420. x_shp = x['shape']
  421. validator.check_subclass("x", x['dtype'], mstype.tensor, self.name)
  422. # for shape is not constant
  423. if shape_v is None:
  424. out_shape, min_shape, max_shape = self._get_shape_and_range(x, shape)
  425. if is_shape_unknown(out_shape):
  426. # `min_shape` and `max_shape` can't be None before dynamic memory allocation is supported
  427. shape_shp = shape['shape']
  428. shape_rank = 1 if is_shape_unknown(shape_shp) else shape_shp[0]
  429. min_shape = [1] * shape_rank if min_shape is None else min_shape
  430. max_shape = [1] * shape_rank if max_shape is None else max_shape
  431. return {
  432. 'shape': out_shape,
  433. 'dtype': x['dtype'],
  434. 'value': None,
  435. 'max_shape': max_shape,
  436. 'min_shape': min_shape
  437. }
  438. if isinstance(shape_v, Tensor_):
  439. validator.check_tensor_dtype_valid("shape", shape['dtype'], [mstype.int64], self.name)
  440. shape_v = shape_v.asnumpy().tolist()
  441. else:
  442. validator.check_value_type("shape", shape_v, [tuple], self.name)
  443. shape_v = list(shape_v)
  444. neg_index = -1
  445. dim_prod = 1
  446. for i, shp_i in enumerate(shape_v):
  447. validator.check_value_type("shape[%d]" % i, shp_i, [int], self.name)
  448. if shp_i == -1:
  449. if neg_index != -1:
  450. raise ValueError(f"For '{self.name}', there can be at most one '-1' in 'input_shape', "
  451. f"but got {shape_v}.")
  452. neg_index = i
  453. else:
  454. dim_prod *= shp_i
  455. if is_shape_unknown(x_shp):
  456. if 'max_shape' in x:
  457. x_max_shape = x['max_shape']
  458. else:
  459. x_max_shape = x['shape']
  460. if 'min_shape' in x:
  461. x_min_shape = x['min_shape']
  462. else:
  463. x_min_shape = x['shape']
  464. max_arr_prod = np.prod(x_max_shape)
  465. min_arr_prod = np.prod(x_min_shape)
  466. max_shape = list(shape_v)
  467. min_shape = list(shape_v)
  468. if neg_index != -1:
  469. max_shape[neg_index] = int(max_arr_prod / dim_prod)
  470. min_shape[neg_index] = int(min_arr_prod / dim_prod)
  471. out = {'shape': shape_v,
  472. 'dtype': x['dtype'],
  473. 'value': None,
  474. 'max_shape': tuple(max_shape),
  475. 'min_shape': tuple(min_shape)}
  476. else:
  477. arr_prod = np.prod(x_shp)
  478. if dim_prod <= 0:
  479. raise ValueError(f"For '{self.name}', the shape of 'input_x' is {x_shp}, "
  480. f"the value of 'input_shape' is {shape_v}. "
  481. f"The product of 'input_shape' should > 0, but got {dim_prod}.")
  482. if neg_index != -1:
  483. shape_v[neg_index] = int(arr_prod / dim_prod)
  484. dim_prod *= shape_v[neg_index]
  485. if dim_prod != arr_prod:
  486. raise ValueError(f"For '{self.name}', the shape of 'input_x' is {x_shp}, "
  487. f"the value of 'input_shape' value is {shape_v}. "
  488. f"The product of the shape of 'input_x' should be equal to product of 'input_shape', "
  489. f"but product of the shape of 'input_x' is {arr_prod}, "
  490. f"product of 'input_shape' is {dim_prod}.")
  491. value = None
  492. if x['value'] is not None:
  493. value = Tensor(x['value'].asnumpy().reshape(shape_v))
  494. out = {'shape': tuple(shape_v),
  495. 'dtype': x['dtype'],
  496. 'value': value}
  497. return out
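The -1 resolution performed in `Reshape.__infer__` can be sketched in pure Python (`infer_reshape` is an illustrative helper, not the operator): the unknown dimension is the total element count divided by the product of the known dimensions, and the final product must match exactly.

```python
# Sketch of how Reshape.__infer__ resolves a single -1 entry.
def infer_reshape(x_shape, target):
    arr_prod = 1
    for d in x_shape:
        arr_prod *= d
    neg_index, dim_prod = -1, 1
    for i, d in enumerate(target):
        if d == -1:
            if neg_index != -1:
                raise ValueError("at most one -1 is allowed in the shape")
            neg_index = i
        else:
            dim_prod *= d
    target = list(target)
    if neg_index != -1:
        target[neg_index] = arr_prod // dim_prod  # infer the unknown dim
        dim_prod *= target[neg_index]
    if dim_prod != arr_prod:
        raise ValueError("element counts do not match")
    return tuple(target)

plain = infer_reshape((2, 3), (3, 2))
inferred = infer_reshape((2, 3, 4), (-1, 4))
```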
  498. class Shape(Primitive):
  499. """
  500. Returns the shape of the input tensor as a static shape.
  501. Static shape: a shape that can be obtained without running the graph. It is an inherent property of the tensor
  502. and may be unknown. The static shape information can be completed by manual setting, and it is not affected
  503. by the actual inputs of the graph.
  504. Inputs:
  505. - **input_x** (Tensor) - The shape of tensor is :math:`(x_1, x_2, ..., x_R)`.
  506. Outputs:
  507. tuple[int], the output tuple is constructed by multiple integers,
  508. :math:`(x_1, x_2, ..., x_R)`.
  509. Raises:
  510. TypeError: If `input_x` is not a Tensor.
  511. Supported Platforms:
  512. ``Ascend`` ``GPU`` ``CPU``
  513. Examples:
  514. >>> input_x = Tensor(np.ones(shape=[3, 2, 1]), mindspore.float32)
  515. >>> shape = ops.Shape()
  516. >>> output = shape(input_x)
  517. >>> print(output)
  518. (3, 2, 1)
  519. """
  520. @prim_attr_register
  521. def __init__(self):
  522. """Initialize Shape"""
  523. class DynamicShape(Primitive):
  524. """
  525. Returns the shape of the input tensor as a dynamic shape.
  526. Note:
  527. Dynamic shape: while the graph is running, as the tensor flows through the graph, the concrete shape of the
  528. tensor at each node can be inferred according to the structure of the graph.
  529. This shape is called a dynamic shape. When the input shape of the graph changes,
  530. the dynamic shape of the tensors in the graph changes with it.
  531. Inputs:
  532. - **input_x** (Tensor) - The shape of tensor is :math:`(x_1, x_2, ..., x_R)`.
  533. Outputs:
  534. Tensor[int], 1-dim Tensor of type int32
  535. Raises:
  536. TypeError: If `input_x` is not a Tensor.
  537. Supported Platforms:
  538. ``Ascend`` ``GPU`` ``CPU``
  539. Examples:
  540. >>> input_x = Tensor(np.ones(shape=[3, 2, 1]), mindspore.float32)
  541. >>> shape = ops.DynamicShape()
  542. >>> output = shape(input_x)
  543. >>> print(output)
  544. [3 2 1]
  545. """
  546. @prim_attr_register
  547. def __init__(self):
  548. """init Shape"""
  549. self.init_prim_io_names(inputs=['tensor'], outputs=['output'])
  550. self.add_prim_attr('is_dynamic_shape', True)
  551. class Squeeze(PrimitiveWithInfer):
  552. """
  553. Returns a tensor with the same data type but dimensions of 1 are removed based on `axis`.
  554. If `axis` is specified, it will remove the dimensions of size 1 in the given `axis`.
  555. If `axis` is None, it will remove all the dimensions of size 1.
  556. For example, if input is of shape: (A×1×B×C×1×D), then the out tensor will be of shape: (A×B×C×D);
  557. When `axis` is given, a squeeze operation is done only in the given dimension.
  558. If input is of shape: (A×1×B), squeeze(input, 0) leaves the tensor unchanged,
  559. but squeeze(input, 1) will squeeze the tensor to the shape (A×B).
  560. Please note that in dynamic graph mode, the output Tensor will share data with the input Tensor,
  561. and there is no Tensor data copy process.
  562. Note:
  563. The dimension index starts at 0 and must be in the range `[-input.ndim, input.ndim]`.
  564. Args:
  565. axis (Union[int, tuple(int)]): Specifies the dimension indexes of shape to be removed, which will remove
  566. all the dimensions that are equal to 1. If specified, it must be int32 or int64.
  567. Default: (), an empty tuple.
  568. Inputs:
  569. - **input_x** (Tensor) - The shape of tensor is :math:`(x_1, x_2, ..., x_R)`.
  570. Outputs:
  571. Tensor, the shape of tensor is :math:`(x_1, x_2, ..., x_S)`.
  572. Raises:
  573. TypeError: If `axis` is neither an int nor tuple.
  574. TypeError: If `axis` is a tuple whose elements are not all int.
  575. ValueError: If the corresponding dimension of the specified axis isn't equal to 1.
  576. Supported Platforms:
  577. ``Ascend`` ``GPU`` ``CPU``
  578. Examples:
  579. >>> input_x = Tensor(np.ones(shape=[3, 2, 1]), mindspore.float32)
  580. >>> squeeze = ops.Squeeze(2)
  581. >>> output = squeeze(input_x)
  582. >>> print(output)
  583. [[1. 1.]
  584. [1. 1.]
  585. [1. 1.]]
  586. """
  587. @prim_attr_register
  588. def __init__(self, axis=()):
  589. """Initialize Squeeze"""
  590. self.init_prim_io_names(inputs=['x'], outputs=['output'])
  591. validator.check_value_type('axis', axis, [int, tuple], self.name)
  592. if isinstance(axis, tuple):
  593. for idx, item in enumerate(axis):
  594. validator.check_value_type("axis[%d]" % idx, item, [int], self.name)
  595. else:
  596. self.axis = (axis,)
  597. self.add_prim_attr("axis", (axis,))
  598. def infer_shape(self, x_shape):
  599. axis = self.axis
  600. x_shape = list(x_shape)
  601. ndim = len(x_shape)
  602. if not axis:
  603. ret = [d for d in x_shape if d != 1]
  604. else:
  605. for a in axis:
  606. validator.check_int_range(a, -ndim, ndim - 1, Rel.INC_BOTH, 'axis or its elements', self.name)
  607. if x_shape[a] != 1:
  608. raise ValueError(f"For '{self.name}', the shape of 'input_x' at {a} dimension should be 1, "
  609. f"but got {x_shape[a]}.")
  610. ret = [x_shape[i] for i in range(ndim) if not (i in axis or (i - ndim) in axis)]
  611. return ret
  612. def infer_dtype(self, x_dtype):
  613. validator.check_subclass("x", x_dtype, mstype.tensor, self.name)
  614. return x_dtype
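The shape rule in `Squeeze.infer_shape` can be sketched on plain tuples (`squeeze_shape` is a hypothetical helper): with no axis, every size-1 dimension is dropped; with an axis tuple, only those positions are dropped, and each must have size 1.

```python
# Sketch of Squeeze.infer_shape, including negative-axis normalization.
def squeeze_shape(shape, axis=()):
    ndim = len(shape)
    if not axis:
        return tuple(d for d in shape if d != 1)
    norm = {a % ndim for a in axis}  # normalize negative axes
    for a in norm:
        if shape[a] != 1:
            raise ValueError("squeezed dimension must be 1")
    return tuple(d for i, d in enumerate(shape) if i not in norm)

no_axis = squeeze_shape((3, 2, 1))
one_axis = squeeze_shape((3, 2, 1), (2,))
neg_axis = squeeze_shape((1, 2, 1), (0, -1))
```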
  615. class Transpose(Primitive):
  616. """
  617. Permutes the dimensions of the input tensor according to input permutation.
  618. For a 1-D array this has no effect, as a transposed vector is simply the same vector.
  619. To convert a 1-D array into a 2-D column vector, please refer to the class: mindspore.ops.ExpandDims.
  620. For a 2-D array, this is a standard matrix transpose. For an n-D array, if axes are given,
  621. their order indicates how the axes are permuted (see Examples).
  622. If axes are not provided and a.shape = (i[0], i[1], ... i[n-2], i[n-1]),
  623. then a.transpose().shape = (i[n-1], i[n-2], ... i[1], i[0]).
  624. Inputs:
  625. - **input_x** (Tensor) - The shape of tensor is :math:`(x_1, x_2, ..., x_R)`.
  626. - **input_perm** (tuple[int]) - The permutation to be converted. The elements in `input_perm` are composed of
  627. the indexes of each dimension of `input_x`. The length of `input_perm` and the shape of `input_x` must be
  628. the same. Only constant value is allowed. Must be in the range [0, rank(input_x)).
  629. Outputs:
  630. Tensor, the type of output tensor is the same as `input_x` and the shape of output tensor is decided by the
  631. shape of `input_x` and the value of `input_perm`.
  632. Raises:
  633. TypeError: If `input_perm` is not a tuple.
  634. ValueError: If length of shape of `input_x` is not equal to length of shape of `input_perm`.
  635. ValueError: If the same element exists in `input_perm`.
  636. Supported Platforms:
  637. ``Ascend`` ``GPU`` ``CPU``
  638. Examples:
  639. >>> input_x = Tensor(np.array([[[1, 2, 3], [4, 5, 6]], [[7, 8, 9], [10, 11, 12]]]), mindspore.float32)
  640. >>> input_perm = (0, 2, 1)
  641. >>> transpose = ops.Transpose()
  642. >>> output = transpose(input_x, input_perm)
  643. >>> print(output)
  644. [[[ 1. 4.]
  645. [ 2. 5.]
  646. [ 3. 6.]]
  647. [[ 7. 10.]
  648. [ 8. 11.]
  649. [ 9. 12.]]]
  650. """
  651. @prim_attr_register
  652. def __init__(self):
  653. """Initialize Transpose"""
  654. self.init_prim_io_names(inputs=['x', 'perm'], outputs=['output'])
  655. class Unique(Primitive):
  656. """
  657. Returns the unique elements of input tensor and also return a tensor containing the index of each value of input
  658. tensor corresponding to the output unique tensor.
  659. The output contains Tensor `y` and Tensor `idx`, returned as a tuple (`y`, `idx`).
  660. The shape of Tensor `y` and Tensor `idx` is different in most cases, because Tensor `y` will be deduplicated,
  661. and the shape of Tensor `idx` is consistent with the input.
  662. To get the same shape between `idx` and `y`, please refer to the 'UniqueWithPad' operator.
  663. Inputs:
  664. - **input_x** (Tensor) - The input tensor.
  665. The shape is :math:`(N,*)` where :math:`*` means, any number of additional dimensions.
  666. Outputs:
  667. Tuple, containing Tensor objects (`y`, `idx`), `y` is a tensor with the
  668. same type as `input_x`, and contains the unique elements in `x`, sorted in
  669. ascending order. `idx` is a tensor containing indices of elements in
  670. the input corresponding to the output tensor.
  671. Raises:
  672. TypeError: If `input_x` is not a Tensor.
  673. Supported Platforms:
  674. ``Ascend`` ``GPU`` ``CPU``
  675. Examples:
  676. >>> input_x = Tensor(np.array([1, 2, 5, 2]), mindspore.int32)
  677. >>> output = ops.Unique()(input_x)
  678. >>> print(output)
  679. (Tensor(shape=[3], dtype=Int32, value= [1, 2, 5]), Tensor(shape=[4], dtype=Int32, value= [0, 1, 2, 1]))
  680. >>> y = output[0]
  681. >>> print(y)
  682. [1 2 5]
  683. >>> idx = output[1]
  684. >>> print(idx)
  685. [0 1 2 1]
  686. >>> # As can be seen from the above, y and idx have different shapes.
  687. >>> # note that for GPU, this operator must be wrapped inside a model, and executed in graph mode.
  688. >>> class UniqueNet(nn.Cell):
  689. ... def __init__(self):
  690. ... super(UniqueNet, self).__init__()
  691. ... self.unique_op = ops.Unique()
  692. ...
  693. ... def construct(self, x):
  694. ... output, indices = self.unique_op(x)
  695. ... return output, indices
  696. ...
  697. >>> input_x = Tensor(np.array([1, 2, 5, 2]), mindspore.int32)
  698. >>> net = UniqueNet()
  699. >>> output = net(input_x)
  700. >>> print(output)
  701. (Tensor(shape=[3], dtype=Int32, value= [1, 2, 5]), Tensor(shape=[4], dtype=Int32, value= [0, 1, 2, 1]))
  702. """
  703. @prim_attr_register
  704. def __init__(self):
  705. self.init_prim_io_names(inputs=['x'], outputs=['output'])
  706. class Gather(Primitive):
  707. r"""
  708. Returns a slice of the input tensor based on the specified indices and axis.
  709. Slices the input tensor based on the indices at the specified axis. See the following examples for a clearer illustration.
  710. Inputs:
  711. - **input_params** (Tensor) - The shape of tensor is :math:`(x_1, x_2, ..., x_R)`.
  712. The original Tensor.
  713. - **input_indices** (Tensor) - The shape of tensor is :math:`(y_1, y_2, ..., y_S)`.
  714. Specifies the indices of elements of the original Tensor. Must be in the range
  715. `[0, input_param.shape[axis])` which are only validated on CPU. The data type can be int32 or int64.
  716. - **axis** (int) - Specifies the dimension index to gather indices.
  717. Outputs:
  718. Tensor, the shape of tensor is
  719. :math:`input\_params.shape[:axis] + input\_indices.shape + input\_params.shape[axis + 1:]`.
  720. Raises:
  721. TypeError: If `axis` is not an int.
  722. TypeError: If `input_indices` is not a Tensor of type int.
  723. Supported Platforms:
  724. ``Ascend`` ``GPU`` ``CPU``
  725. Examples:
  726. >>> input_params = Tensor(np.array([[1, 2, 7, 42], [3, 4, 54, 22], [2, 2, 55, 3]]), mindspore.float32)
  727. >>> input_indices = Tensor(np.array([1, 2]), mindspore.int32)
  728. >>> axis = 1
  729. >>> output = ops.Gather()(input_params, input_indices, axis)
  730. >>> print(output)
  731. [[ 2. 7.]
  732. [ 4. 54.]
  733. [ 2. 55.]]
  734. >>> axis = 0
  735. >>> output = ops.Gather()(input_params, input_indices, axis)
  736. >>> print(output)
  737. [[3. 4. 54. 22.]
  738. [2. 2. 55. 3.]]
  739. """
  740. @prim_attr_register
  741. def __init__(self):
  742. """Initialize Gather"""
  743. self.init_prim_io_names(inputs=['params', 'indices', 'axis'], outputs=['output'])
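The output-shape formula in the Gather docstring can be checked against NumPy's equivalent `np.take` (`gather_shape` below is an illustrative helper, not the MindSpore implementation): the gathered axis is replaced by the indices' shape.

```python
import numpy as np

# Sketch of the Gather output-shape rule:
# out.shape == params.shape[:axis] + indices.shape + params.shape[axis + 1:]
def gather_shape(params_shape, indices_shape, axis):
    return params_shape[:axis] + indices_shape + params_shape[axis + 1:]

params = np.arange(12, dtype=np.float32).reshape(3, 4)
indices = np.array([1, 2])
out = np.take(params, indices, axis=1)  # NumPy analogue of Gather
```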

class GatherV2(PrimitiveWithCheck):
    """
    Same as operator Gather. GatherV2 will be deprecated in the future.
    Please use Gather instead.
    """

    @deprecated("1.1", "Gather", True)
    @prim_attr_register
    def __init__(self):
        """Initialize GatherV2"""
        self.init_prim_io_names(inputs=['params', 'indices', 'axis'], outputs=['output'])

    def __check__(self, params, indices, axis):
        validator.check_subclass("params", params['dtype'], mstype.tensor, self.name)
        validator.check_tensor_dtype_valid("indices", indices['dtype'], mstype.int_type, self.name)
        validator.check_subclass("axis", axis['dtype'], [mstype.number], self.name)
        axis_v = axis['value']
        validator.check_value_type('axis', axis_v, [int], self.name)
        rank = len(params['shape'])
        validator.check_int_range(axis_v, -rank, rank, Rel.INC_LEFT, "axis", self.name)

class SparseGatherV2(PrimitiveWithCheck):
    """
    Returns a slice of the input tensor based on the specified indices and axis.

    Inputs:
        - **input_params** (Tensor) - The shape of tensor is :math:`(x_1, x_2, ..., x_R)`.
        - **input_indices** (Tensor) - The shape of tensor is :math:`(y_1, y_2, ..., y_S)`.
          Specifies the indices of elements of the original Tensor, must be in the range
          `[0, input_params.shape[axis])`.
        - **axis** (int) - Specifies the dimension index to gather indices.

    Outputs:
        Tensor, the shape of tensor is :math:`(z_1, z_2, ..., z_N)`.

    Raises:
        TypeError: If `axis` is not an int.

    Supported Platforms:
        ``Ascend`` ``GPU``

    Examples:
        >>> input_params = Tensor(np.array([[1, 2, 7, 42], [3, 4, 54, 22], [2, 2, 55, 3]]), mindspore.float32)
        >>> input_indices = Tensor(np.array([1, 2]), mindspore.int32)
        >>> axis = 1
        >>> out = ops.SparseGatherV2()(input_params, input_indices, axis)
        >>> print(out)
        [[2. 7.]
         [4. 54.]
         [2. 55.]]
    """

    @prim_attr_register
    def __init__(self):
        """Initialize SparseGatherV2"""
        self.init_prim_io_names(inputs=['params', 'indices', 'axis'], outputs=['output'])

    def __check__(self, params, indices, axis):
        validator.check_subclass("params", params['dtype'], mstype.tensor, self.name)
        validator.check_tensor_dtype_valid("indices", indices['dtype'], mstype.int_type, self.name)
        validator.check_subclass("axis", axis['dtype'], [mstype.number], self.name)
        axis_v = axis['value']
        validator.check_value_type('axis', axis_v, [int], self.name)
        rank = len(params['shape'])
        validator.check_int_range(axis_v, -rank, rank, Rel.INC_LEFT, "axis", self.name)

class Padding(PrimitiveWithInfer):
    """
    Extends the last dimension of the input tensor from 1 to `pad_dim_size`, by filling with 0.

    Args:
        pad_dim_size (int): The value of the last dimension of `x` to be extended, which must be positive.
            Default: 8.

    Inputs:
        - **x** (Tensor) - The shape of tensor is :math:`(x_1, x_2, ..., x_R)`. The rank of `x` must be at least 2.
          The last dimension of `x` must be 1. The data type is Number.

    Outputs:
        Tensor, the shape of tensor is :math:`(z_1, z_2, ..., z_N)`.

    Raises:
        TypeError: If `pad_dim_size` is not an int.
        ValueError: If `pad_dim_size` is less than 1.
        ValueError: If the last dimension of `x` is not equal to 1.

    Supported Platforms:
        ``Ascend``

    Examples:
        >>> x = Tensor(np.array([[8], [10]]), mindspore.float32)
        >>> pad_dim_size = 4
        >>> output = ops.Padding(pad_dim_size)(x)
        >>> print(output)
        [[ 8. 0. 0. 0.]
         [10. 0. 0. 0.]]
    """

    @prim_attr_register
    def __init__(self, pad_dim_size=8):
        """Initialize Padding"""
        validator.check_value_type("pad_dim_size", pad_dim_size, [int], self.name)
        validator.check_positive_int(pad_dim_size, "pad_dim_size", self.name)
        self.pad_dim_size = pad_dim_size

    def __infer__(self, x):
        validator.check_subclass("x", x['dtype'], mstype.tensor, self.name)
        x_shape = list(x['shape'])
        validator.check_int(len(x_shape), 1, Rel.GT, "rank of x", self.name)
        validator.check_int(x_shape[-1], 1, Rel.EQ, "last dim of x", self.name)
        out_shape = x_shape
        out_shape[-1] = self.pad_dim_size
        out = {'shape': out_shape,
               'dtype': x['dtype'],
               'value': None}
        return out
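The padding behavior can be sketched in NumPy (an illustrative emulation, not MindSpore code): the last dimension, which must be 1, is widened to `pad_dim_size` with zeros.

```python
import numpy as np

# Illustrative NumPy emulation of the Padding semantics documented above.
def padding(x, pad_dim_size=8):
    """Extend the last dimension (which must be 1) to pad_dim_size, filling with 0."""
    assert x.ndim >= 2 and x.shape[-1] == 1
    out = np.zeros(x.shape[:-1] + (pad_dim_size,), dtype=x.dtype)
    out[..., :1] = x  # keep the original column, zeros elsewhere
    return out

padded = padding(np.array([[8.0], [10.0]], dtype=np.float32), pad_dim_size=4)
```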

class UniqueWithPad(PrimitiveWithInfer):
    """
    Returns the unique elements of a 1-D tensor and the index of each input element, padded with a given number.

    The basic function is the same as the Unique operator, but UniqueWithPad adds a padding step.
    After the input Tensor `x` is processed by the unique operation, the returned pair (`y`, `idx`)
    usually differs in shape. To keep the two shapes consistent, the UniqueWithPad operator fills the
    `y` Tensor with the user-specified `pad_num` so that it has the same shape as the Tensor `idx`.

    Inputs:
        - **x** (Tensor) - The tensor to be made unique. Must be a 1-D vector with type int32 or int64.
        - **pad_num** (int) - The padding number. The data type is an int.

    Outputs:
        tuple(Tensor), tuple of 2 tensors, `y` and `idx`.

        - y (Tensor) - The unique elements filled with `pad_num`, with the same shape and data type as `x`.
        - idx (Tensor) - The index of each value of `x` in the unique output `y`, with the same shape and
          data type as `x`.

    Raises:
        TypeError: If dtype of `x` is neither int32 nor int64.
        ValueError: If length of shape of `x` is not equal to 1.

    Supported Platforms:
        ``Ascend`` ``CPU``

    Examples:
        >>> x = Tensor(np.array([1, 1, 5, 5, 4, 4, 3, 3, 2, 2]), mindspore.int32)
        >>> pad_num = 8
        >>> output = ops.UniqueWithPad()(x, pad_num)
        >>> print(output)
        (Tensor(shape=[10], dtype=Int32, value= [1, 5, 4, 3, 2, 8, 8, 8, 8, 8]),
         Tensor(shape=[10], dtype=Int32, value= [0, 0, 1, 1, 2, 2, 3, 3, 4, 4]))
    """

    @prim_attr_register
    def __init__(self):
        """Initialize UniqueWithPad"""

    def __infer__(self, x, pad_num):
        validator.check_tensor_dtype_valid("x", x['dtype'], [mstype.int32, mstype.int64], self.name)
        validator.check_subclass("pad_num", pad_num['dtype'], [mstype.int32, mstype.int64], self.name)
        x_shape = list(x['shape'])
        validator.check("rank of x", len(x_shape), "expected", 1, Rel.EQ, self.name)
        out_shape = x_shape
        out = {'shape': (out_shape, out_shape),
               'dtype': (x['dtype'], x['dtype']),
               'value': None}
        return out
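The pad-to-input-length behavior can be emulated with `np.unique` (a sketch for reference only, not MindSpore code; the first-appearance ordering here matches the docstring example above).

```python
import numpy as np

# Illustrative NumPy emulation of UniqueWithPad: unique values in order of first
# appearance, padded with pad_num up to the input length, plus per-element indices.
def unique_with_pad(x, pad_num):
    uniq, first_idx, inverse = np.unique(x, return_index=True, return_inverse=True)
    order = np.argsort(first_idx)          # reorder sorted uniques to first-appearance order
    uniq = uniq[order]
    remap = np.empty_like(order)
    remap[order] = np.arange(len(order))   # map sorted positions -> first-appearance positions
    idx = remap[inverse]
    pad = np.full(len(x) - len(uniq), pad_num, dtype=x.dtype)
    return np.concatenate([uniq, pad]), idx

x = np.array([1, 1, 5, 5, 4, 4, 3, 3, 2, 2], dtype=np.int32)
y, idx = unique_with_pad(x, 8)
```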

class Split(PrimitiveWithCheck):
    """
    Splits the input tensor into `output_num` sub-tensors along the given axis.

    The `input_x` tensor will be split into equally sized sub-tensors.
    This requires that `input_x.shape[axis]` is divisible by `output_num`.

    Args:
        axis (int): Index of the split position. Default: 0.
        output_num (int): The number of output tensors. Must be a positive int. Default: 1.

    Inputs:
        - **input_x** (Tensor) - The shape of tensor is :math:`(x_1, x_2, ..., x_R)`.

    Outputs:
        tuple[Tensor], the shape of each output tensor is the same, which is
        :math:`(y_1, y_2, ..., y_S)`. And the data type is the same as `input_x`.

    Raises:
        TypeError: If `axis` or `output_num` is not an int.
        ValueError: If `axis` is out of the range [-len(`input_x.shape`), len(`input_x.shape`)),
            or if `output_num` is less than or equal to 0.

    Supported Platforms:
        ``Ascend`` ``GPU`` ``CPU``

    Examples:
        >>> split = ops.Split(1, 2)
        >>> x = Tensor(np.array([[1, 1, 1, 1], [2, 2, 2, 2]]), mindspore.int32)
        >>> print(x)
        [[1 1 1 1]
         [2 2 2 2]]
        >>> output = split(x)
        >>> print(output)
        (Tensor(shape=[2, 2], dtype=Int32, value=
        [[1, 1],
         [2, 2]]), Tensor(shape=[2, 2], dtype=Int32, value=
        [[1, 1],
         [2, 2]]))
        >>> split = ops.Split(1, 4)
        >>> output = split(x)
        >>> print(output)
        (Tensor(shape=[2, 1], dtype=Int32, value=
        [[1],
         [2]]), Tensor(shape=[2, 1], dtype=Int32, value=
        [[1],
         [2]]), Tensor(shape=[2, 1], dtype=Int32, value=
        [[1],
         [2]]), Tensor(shape=[2, 1], dtype=Int32, value=
        [[1],
         [2]]))
    """

    @prim_attr_register
    def __init__(self, axis=0, output_num=1):
        """Initialize Split"""
        validator.check_value_type("axis", axis, [int], self.name)
        validator.check_value_type("output_num", output_num, [int], self.name)
        validator.check_positive_int(output_num, "output_num", self.name)
        self.axis = axis
        self.output_num = output_num

    def __check__(self, x):
        validator.check_subclass("x", x['dtype'], mstype.tensor, self.name)
        x_shape = list(x['shape'])
        dim = len(x_shape)
        validator.check_int_range(self.axis, -dim, dim, Rel.INC_LEFT, 'axis value', self.name)
        if -1 not in x_shape:
            # only validate when shape fully known
            output_valid_check = x_shape[self.axis] % self.output_num
            if output_valid_check != 0:
                raise ValueError(f"For '{self.name}', the specified axis of 'input_x' should be divided exactly by "
                                 f"'output_num', but got the shape of 'input_x' in 'axis' {self.axis} is "
                                 f"{x_shape[self.axis]}, 'output_num': {self.output_num}.")
            size_splits = [x_shape[self.axis] // self.output_num] * self.output_num
            self.add_prim_attr('size_splits', size_splits)
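The equal-split rule can be illustrated with NumPy (a sketch for reference only, not MindSpore code); `np.split` likewise requires that the axis length be divisible by the number of sections.

```python
import numpy as np

# Illustrative NumPy emulation of Split: divide axis 1 of a (2, 4) array into
# equally sized sub-arrays, as in the ops.Split(1, 2) docstring example.
x = np.array([[1, 1, 1, 1], [2, 2, 2, 2]], dtype=np.int32)

parts_2 = np.split(x, 2, axis=1)  # two (2, 2) sub-arrays
parts_4 = np.split(x, 4, axis=1)  # four (2, 1) sub-arrays
```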

class Rank(PrimitiveWithInfer):
    """
    Returns the rank of a tensor.

    Returns a 0-D int32 Tensor representing the rank of the input; the rank of a tensor
    is the number of indices required to uniquely select each element of the tensor.

    Inputs:
        - **input_x** (Tensor) - The shape of tensor is :math:`(x_1, x_2, ..., x_R)`. The data type is Number.

    Outputs:
        Tensor. 0-D int32 Tensor representing the rank of input, i.e., :math:`R`. The data type is an int.

    Raises:
        TypeError: If `input_x` is not a Tensor.

    Supported Platforms:
        ``Ascend`` ``GPU`` ``CPU``

    Examples:
        >>> input_tensor = Tensor(np.array([[2, 2], [2, 2]]), mindspore.float32)
        >>> rank = ops.Rank()
        >>> output = rank(input_tensor)
        >>> print(output)
        2
        >>> print(type(output))
        <class 'int'>
    """

    @prim_attr_register
    def __init__(self):
        """Initialize Rank"""

    def __infer__(self, x):
        validator.check_subclass("x", x['dtype'], mstype.tensor, self.name)
        out = {'shape': None,
               'dtype': None,
               'value': len(x['shape'])}
        return out

class TruncatedNormal(PrimitiveWithInfer):
    """
    Returns a tensor of the specified shape filled with truncated normal values.

    The generated values follow a normal distribution.

    Args:
        seed (int): An integer used to create a random seed. Default: 0.
        dtype (:class:`mindspore.dtype`): Data type. Default: mindspore.float32.

    Inputs:
        - **shape** (tuple[int]) - The shape of the output tensor, which is a tuple of positive integers.

    Outputs:
        Tensor, the data type of output tensor is the same as attribute `dtype`.

    Examples:
        >>> shape = (1, 2, 3)
        >>> truncated_normal = ops.TruncatedNormal()
        >>> output = truncated_normal(shape)
    """

    @prim_attr_register
    def __init__(self, seed=0, dtype=mstype.float32):
        """Initialize TruncatedNormal"""
        validator.check_value_type('seed', seed, [int], self.name)
        validator.check_types_same_and_valid({'dtype': dtype}, mstype.number_type, self.name)

    def __infer__(self, shape):
        shape_value = shape['value']
        validator.check_value_type("shape", shape_value, [tuple], self.name)
        for i, value in enumerate(shape_value):
            validator.check_positive_int(value, f'{i}th value of shape', self.name)
        out = {'shape': shape_value,
               'dtype': mstype.tensor_type(self.dtype),
               'value': None}
        return out

class Size(PrimitiveWithInfer):
    r"""
    Returns the size of a Tensor.

    Returns an int scalar representing the size of the input, i.e., the total number of elements in the tensor.

    Inputs:
        - **input_x** (Tensor) - The shape of tensor is :math:`(x_1, x_2, ..., x_R)`. The data type is Number.

    Outputs:
        int. A scalar representing the size of `input_x`, that is, the number of elements in the tensor,
        :math:`size=x_1*x_2*...*x_R`. The data type is an int.

    Raises:
        TypeError: If `input_x` is not a Tensor.

    Supported Platforms:
        ``Ascend`` ``GPU`` ``CPU``

    Examples:
        >>> input_x = Tensor(np.array([[2, 2], [2, 2]]), mindspore.float32)
        >>> size = ops.Size()
        >>> output = size(input_x)
        >>> print(output)
        4
    """

    @prim_attr_register
    def __init__(self):
        """Initialize Size"""

    def __infer__(self, x):
        size = 1
        validator.check_subclass("x", x['dtype'], mstype.tensor, self.name)
        shp = x['shape']
        if not shp:
            size = 0
        else:
            size = functools.reduce(lambda x, y: x * y, x['shape'])
        out = {'shape': None,
               'dtype': mstype.int64,
               'value': size}
        return out

class Fill(PrimitiveWithInfer):
    """
    Creates a tensor filled with a scalar value.

    Creates a tensor whose data type and shape are described by the first and second arguments,
    and fills it with the scalar given in the third argument.

    Inputs:
        - **type** (mindspore.dtype) - The specified type of output tensor. Only constant value is allowed.
        - **shape** (tuple) - The specified shape of output tensor. Only constant value is allowed.
        - **value** (scalar) - Value to fill the returned tensor. Only constant value is allowed.

    Outputs:
        Tensor, with the specified `shape` and `type`, filled with `value`.

    Raises:
        TypeError: If `shape` is not a tuple.

    Supported Platforms:
        ``Ascend`` ``GPU`` ``CPU``

    Examples:
        >>> fill = ops.Fill()
        >>> output = fill(mindspore.float32, (2, 2), 1)
        >>> print(output)
        [[1. 1.]
         [1. 1.]]
        >>> output = fill(mindspore.float32, (3, 3), 0)
        >>> print(output)
        [[0. 0. 0.]
         [0. 0. 0.]
         [0. 0. 0.]]
    """

    @prim_attr_register
    def __init__(self):
        """Initialize Fill"""

    def __infer__(self, dtype, dims, x):
        validator.check_value_type("shape", dims['value'], [tuple], self.name)
        validator.check_value_type("value", x['value'], [numbers.Number, bool], self.name)
        for i, item in enumerate(dims['value']):
            validator.check_positive_int(item, f'dims[{i}]', self.name)
        valid_dtypes = [mstype.bool_, mstype.int8, mstype.int16, mstype.int32, mstype.int64,
                        mstype.uint8, mstype.uint16, mstype.uint32, mstype.uint64,
                        mstype.float16, mstype.float32, mstype.float64, mstype.complex64,
                        mstype.complex128]
        validator.check_types_same_and_valid({"value": dtype['value']}, valid_dtypes, self.name)
        x_nptype = mstype.dtype_to_nptype(dtype['value'])
        ret = np.full(dims['value'], x['value'], x_nptype)
        out = {
            'value': Tensor(ret),
            'shape': dims['value'],
            'dtype': x['dtype'],
        }
        return out

class Ones(Primitive):
    r"""
    Creates a tensor filled with value ones.

    Creates a tensor with shape described by the first argument and
    fills it with value ones in type of the second argument.

    Inputs:
        - **shape** (Union[tuple[int], int]) - The specified shape of output tensor.
          Only constant positive int is allowed.
        - **type** (mindspore.dtype) - The specified type of output tensor. Only constant value is allowed.

    Outputs:
        Tensor, with the specified shape and type, filled with ones.

    Raises:
        TypeError: If `shape` is neither tuple nor int.

    Supported Platforms:
        ``Ascend`` ``GPU`` ``CPU``

    Examples:
        >>> ones = ops.Ones()
        >>> output = ones((2, 2), mindspore.float32)
        >>> print(output)
        [[1. 1.]
         [1. 1.]]
        >>> output = ones((3, 3), mindspore.float32)
        >>> print(output)
        [[1. 1. 1.]
         [1. 1. 1.]
         [1. 1. 1.]]
    """

    @prim_attr_register
    def __init__(self):
        """Initialize Ones"""

class Zeros(Primitive):
    r"""
    Creates a tensor filled with value zeros.

    Creates a tensor with shape described by the first argument and
    fills it with value zeros in type of the second argument.

    Inputs:
        - **shape** (Union[tuple[int], int]) - The specified shape of output tensor.
          Only constant positive int is allowed.
        - **type** (mindspore.dtype) - The specified type of output tensor. Only constant value is allowed.

    Outputs:
        Tensor, with the specified shape and type, filled with zeros.

    Raises:
        TypeError: If `shape` is neither int nor tuple.
        TypeError: If `shape` is a tuple whose elements are not all int.

    Supported Platforms:
        ``Ascend`` ``GPU`` ``CPU``

    Examples:
        >>> zeros = ops.Zeros()
        >>> output = zeros((2, 2), mindspore.float32)
        >>> print(output)
        [[0. 0.]
         [0. 0.]]
    """

    @prim_attr_register
    def __init__(self):
        """Initialize Zeros"""

class OnesLike(Primitive):
    """
    Creates a new tensor whose elements are all 1.

    Returns a tensor of ones with the same shape and type as the input.

    Inputs:
        - **input_x** (Tensor) - Input tensor.
          The shape is :math:`(N,*)` where :math:`*` means any number of additional dimensions.

    Outputs:
        Tensor, has the same shape and type as `input_x` but filled with ones.

    Raises:
        TypeError: If `input_x` is not a Tensor.

    Supported Platforms:
        ``Ascend`` ``GPU`` ``CPU``

    Examples:
        >>> oneslike = ops.OnesLike()
        >>> input_x = Tensor(np.array([[0, 1], [2, 1]]).astype(np.int32))
        >>> output = oneslike(input_x)
        >>> print(output)
        [[1 1]
         [1 1]]
    """

    @prim_attr_register
    def __init__(self):
        """Initialize OnesLike"""

class ZerosLike(Primitive):
    """
    Creates a new tensor whose elements are all 0.

    Returns a tensor of zeros with the same shape and data type as the input tensor.

    Inputs:
        - **input_x** (Tensor) - Input tensor. The data type is int32, int64, float16 or float32.
          The shape is :math:`(N,*)` where :math:`*` means any number of additional dimensions.

    Outputs:
        Tensor, has the same shape and data type as `input_x` but filled with zeros.

    Raises:
        TypeError: If `input_x` is not a Tensor.

    Supported Platforms:
        ``Ascend`` ``GPU`` ``CPU``

    Examples:
        >>> zeroslike = ops.ZerosLike()
        >>> input_x = Tensor(np.array([[0, 1], [2, 1]]).astype(np.float32))
        >>> output = zeroslike(input_x)
        >>> print(output)
        [[0. 0.]
         [0. 0.]]
    """

    @prim_attr_register
    def __init__(self):
        """Initialize ZerosLike"""
        self.init_prim_io_names(inputs=['x'], outputs=['y'])

class TupleToArray(PrimitiveWithInfer):
    """
    Converts a tuple to a tensor.

    If the type of the first number in the tuple is an integer, the data type of the output tensor is int.
    Otherwise, the data type of the output tensor is float.

    Inputs:
        - **input_x** (tuple) - A tuple of numbers. These numbers have the same type. Only constant value is
          allowed. The shape is :math:`(N,*)` where :math:`*` means any number of additional dimensions.

    Outputs:
        Tensor, if the input tuple contains `N` numbers, then the shape of the output tensor is (N,).

    Raises:
        TypeError: If `input_x` is not a tuple.
        ValueError: If length of `input_x` is less than or equal to 0.

    Supported Platforms:
        ``Ascend`` ``GPU`` ``CPU``

    Examples:
        >>> input_x = (1, 2, 3)
        >>> print(type(input_x))
        <class 'tuple'>
        >>> output = ops.TupleToArray()(input_x)
        >>> print(type(output))
        <class 'mindspore.common.tensor.Tensor'>
        >>> print(output)
        [1 2 3]
    """

    @prim_attr_register
    def __init__(self):
        """Initialize TupleToArray"""

    def infer_value(self, x):
        validator.check_value_type("x", x, [tuple], self.name)
        validator.check("size of x", len(x), '', 0, Rel.GT, self.name)
        dtype = type(x[0])
        for i, item in enumerate(x):
            validator.check_value_type(f"x[{i}]", item, [numbers.Number], self.name)
        if not all(isinstance(item, dtype) for item in x):
            raise TypeError(f"For '{self.name}', all elements of 'input_x' must have the same type.")
        if isinstance(x[0], int):
            ret = np.array(x, np.int32)
        else:
            ret = np.array(x, np.float32)
        return Tensor(ret)

    def __call__(self, x):
        args = list()
        if isinstance(x, range):
            args.append(tuple(x))
        else:
            args.append(x)
        return _run_op(self, self.name, args)

class ScalarToArray(PrimitiveWithInfer):
    """
    Converts a scalar to a `Tensor`.

    Inputs:
        - **input_x** (Union[int, float]) - The input is a scalar. Only constant value is allowed.

    Outputs:
        Tensor. 0-D Tensor whose content is the input.

    Raises:
        TypeError: If `input_x` is neither int nor float.

    Supported Platforms:
        ``Ascend`` ``GPU`` ``CPU``

    Examples:
        >>> op = ops.ScalarToArray()
        >>> input_x = 1.0
        >>> print(type(input_x))
        <class 'float'>
        >>> output = op(input_x)
        >>> print(type(output))
        <class 'mindspore.common.tensor.Tensor'>
        >>> print(output)
        1.0
    """

    @prim_attr_register
    def __init__(self):
        pass

    def infer_value(self, x):
        validator.check_value_type("x", x, [int, float], self.name)
        if isinstance(x, int):
            ret = np.array(x, np.int32)
        else:
            ret = np.array(x, np.float32)
        return Tensor(ret)

class ScalarToTensor(PrimitiveWithInfer):
    """
    Converts a scalar to a `Tensor`, and converts the data type to the specified type.

    Inputs:
        - **input_x** (Union[int, float]) - The input is a scalar. Only constant value is allowed.
        - **dtype** (mindspore.dtype) - The target data type. Default: mindspore.float32. Only
          constant value is allowed.

    Outputs:
        Tensor. 0-D Tensor whose content is the input.

    Raises:
        TypeError: If `input_x` is neither int nor float.

    Supported Platforms:
        ``Ascend`` ``GPU`` ``CPU``

    Examples:
        >>> op = ops.ScalarToTensor()
        >>> data = 1
        >>> output = op(data, mindspore.float32)
        >>> print(output)
        1.0
    """

    @prim_attr_register
    def __init__(self):
        pass

    def infer_value(self, x, dtype=mstype.float32):
        validator.check_value_type("x", x, [int, float], self.name)
        validator.check_subclass("dtype", dtype, mstype.number, self.name)
        data_type = mstype.dtype_to_nptype(dtype)
        return Tensor(np.array(x, data_type))

class InvertPermutation(PrimitiveWithInfer):
    r"""
    Computes the inverse of an index permutation.

    This operator is mainly used to calculate the inverse of an index permutation.
    It requires a 1-dimensional integer tensor x, which represents the index of a zero-based array,
    and exchanges each value with its index position. In other words, for output tensor y and input tensor x,
    this operation calculates the following values:

    :math:`y[x[i]] = i, \quad i \in [0, 1, \ldots, \text{len}(x)-1]`.

    Note:
        These values must include 0. There must be no duplicate values and the
        values can not be negative.

    Inputs:
        - **input_x** (Union[tuple[int], list[int]]) - The input is constructed by multiple
          integers, i.e., :math:`(y_1, y_2, ..., y_S)` representing the indices.
          The values must include 0. There can be no duplicate values or negative values.
          Only constant value is allowed. The maximum value must be equal to the length of
          `input_x` minus 1.

    Outputs:
        tuple[int]. It has the same length as the input.

    Raises:
        TypeError: If `input_x` is neither tuple nor list.
        TypeError: If an element of `input_x` is not an int.

    Supported Platforms:
        ``Ascend`` ``GPU`` ``CPU``

    Examples:
        >>> invert = ops.InvertPermutation()
        >>> input_data = (3, 4, 0, 2, 1)
        >>> output = invert(input_data)
        >>> print(output)
        (2, 4, 3, 0, 1)
    """

    @prim_attr_register
    def __init__(self):
        """Initialize InvertPermutation"""
        self.set_const_prim(True)

    def __infer__(self, x):
        x_shp = x['shape']
        x_value = x['value']
        if mstype.issubclass_(x['dtype'], mstype.tensor):
            raise ValueError(f"For '{self.name}', the value of 'input_x' must be non-Tensor, but got {x['dtype']}")
        if x_value is None:
            raise ValueError(f"For '{self.name}', the value of 'input_x' can not be None, but got {x_value}.")
        validator.check_value_type("shape", x_shp, [tuple, list], self.name)
        for shp in x_shp:
            if shp:
                x_rank = len(np.array(x_value, np.int64).shape)
                raise ValueError(f"For '{self.name}', the dimension of 'input_x' must be 1, but got {x_rank}.")
        for i, value in enumerate(x_value):
            validator.check_value_type("input[%d]" % i, value, [int], self.name)
        z = [x_value[i] for i in range(len(x_value))]
        z.sort()
        for i in range(1, len(z)):
            if z[i - 1] == z[i]:
                raise ValueError(f"For '{self.name}', the 'input_x' can not contain duplicate values, "
                                 f"but got duplicated {z[i]} in the 'input_x'.")
        validator.check(f'value min', min(x_value), '', 0, Rel.EQ, self.name)
        validator.check(f'value max', max(x_value), '', len(x_value) - 1, Rel.EQ, self.name)
        y = [None] * len(x_value)
        for i, value in enumerate(x_value):
            validator.check_value_type("input[%d]" % i, value, [int], self.name)
            validator.check(f'value', z[i], f'index', i, Rel.EQ, self.name)
            y[value] = i
            z.append(value)
        return {'shape': x_shp,
                'dtype': x['dtype'],
                'value': tuple(y)}
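The rule :math:`y[x[i]] = i` can be sketched in plain Python (for reference only, not MindSpore code):

```python
# Illustrative sketch of the inverse-permutation rule y[x[i]] = i documented above.
def invert_permutation(x):
    y = [None] * len(x)
    for i, v in enumerate(x):
        y[v] = i  # the element that maps position i to v is inverted to map v to i
    return tuple(y)

result = invert_permutation((3, 4, 0, 2, 1))
```

Applying the inverse to the original permutation recovers the identity, i.e. `invert_permutation(invert_permutation(x)) == x`.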

class Argmax(PrimitiveWithInfer):
    """
    Returns the indices of the maximum value of a tensor across the axis.

    If the shape of input tensor is :math:`(x_1, ..., x_N)`, the shape of the output tensor will be
    :math:`(x_1, ..., x_{axis-1}, x_{axis+1}, ..., x_N)`.

    Args:
        axis (int): Axis where the Argmax operation applies to. Default: -1.
        output_type (:class:`mindspore.dtype`): An optional data type of `mindspore.dtype.int32`.
            Default: `mindspore.dtype.int32`.

    Inputs:
        - **input_x** (Tensor) - Input tensor. :math:`(N,*)` where :math:`*` means any number of additional
          dimensions. Supported data types are as follows:

          - Ascend: Float16, Float32.
          - GPU: Float16, Float32.
          - CPU: Float16, Float32, Float64.

    Outputs:
        Tensor, indices of the max value of input tensor across the axis.

    Raises:
        TypeError: If `axis` is not an int.
        TypeError: If `output_type` is neither int32 nor int64.

    Supported Platforms:
        ``Ascend`` ``GPU`` ``CPU``

    Examples:
        >>> input_x = Tensor(np.array([[1, 20, 5], [67, 8, 9], [130, 24, 15]]).astype(np.float32))
        >>> output = ops.Argmax(output_type=mindspore.int32)(input_x)
        >>> print(output)
        [1 0 0]
    """

    @prim_attr_register
    def __init__(self, axis=-1, output_type=mstype.int32):
        """Initialize Argmax"""
        self.init_prim_io_names(inputs=['x'], outputs=['output'])
        validator.check_value_type("axis", axis, [int], self.name)
        validator.check_types_same_and_valid({'output': output_type}, [mstype.int32], self.name)
        self.axis = axis
        self.add_prim_attr('output_type', output_type)

    def infer_shape(self, x_shape):
        axis = self.axis
        if axis is None:
            axis = 0
        x_rank = len(x_shape)
        validator.check_int_range(axis, -x_rank, x_rank, Rel.INC_LEFT, "axis", self.name)
        axis = axis + x_rank if axis < 0 else axis
        output_shape = [x_shape[i] for i in range(x_rank) if i != axis]
        return output_shape

    def infer_dtype(self, x_dtype):
        validator.check_tensor_dtype_valid("input_x", x_dtype, [mstype.float16, mstype.float32, mstype.float64],
                                           self.name)
        return mstype.tensor_type(self.output_type)

class Argmin(PrimitiveWithInfer):
    """
    Returns the indices of the minimum value of a tensor across the axis.

    If the shape of input tensor is :math:`(x_1, ..., x_N)`, the shape of the output tensor is
    :math:`(x_1, ..., x_{axis-1}, x_{axis+1}, ..., x_N)`.

    Args:
        axis (int): Axis where the Argmin operation applies to. Default: -1.
        output_type (:class:`mindspore.dtype`): An optional data type of `mindspore.dtype.int32`.
            Default: `mindspore.dtype.int32`.

    Inputs:
        - **input_x** (Tensor) - Input tensor.
          The shape is :math:`(N,*)` where :math:`*` means any number of additional dimensions.

    Outputs:
        Tensor, indices of the min value of input tensor across the axis.

    Raises:
        TypeError: If `axis` is not an int.
        TypeError: If `output_type` is neither int32 nor int64.

    Supported Platforms:
        ``Ascend``

    Examples:
        >>> input_x = Tensor(np.array([2.0, 3.1, 1.2]), mindspore.float32)
        >>> index = ops.Argmin()(input_x)
        >>> print(index)
        2
    """

    @prim_attr_register
    def __init__(self, axis=-1, output_type=mstype.int32):
        """Initialize Argmin"""
        self.init_prim_io_names(inputs=['x'], outputs=['output'])
        validator.check_value_type("axis", axis, [int], self.name)
        validator.check_type_name("output_type", output_type, [mstype.int32, mstype.int64], self.name)
        self.axis = axis
        self.add_prim_attr('output_type', output_type)

    def infer_shape(self, x_shape):
        axis = self.axis
        if axis is None:
            axis = 0
        x_rank = len(x_shape)
        validator.check_int_range(axis, -x_rank, x_rank, Rel.INC_LEFT, "axis", self.name)
        axis = axis + x_rank if axis < 0 else axis
        output_shape = [x_shape[i] for i in range(x_rank) if i != axis]
        return output_shape

    def infer_dtype(self, x_dtype):
        validator.check_subclass("input_x", x_dtype, mstype.tensor, self.name)
        return mstype.tensor_type(self.output_type)
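The axis-dropping behavior of Argmax/Argmin can be illustrated with NumPy (a sketch for reference only, not MindSpore code): the reduced axis disappears from the output shape, as the docstrings above describe.

```python
import numpy as np

# Illustrative NumPy emulation of the Argmax/Argmin docstring examples above.
x = np.array([[1, 20, 5], [67, 8, 9], [130, 24, 15]], dtype=np.float32)
max_idx = np.argmax(x, axis=-1)  # reduces the last axis; (3, 3) -> (3,)

min_idx = np.argmin(np.array([2.0, 3.1, 1.2], dtype=np.float32), axis=-1)
```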


class ArgMaxWithValue(PrimitiveWithInfer):
    """
    Calculates the maximum value along the given axis for the input tensor, and returns the
    maximum values and their indices.

    Note:
        In auto_parallel and semi_auto_parallel mode, the first output index cannot be used.

    .. warning::
        - If there are multiple maximum values, the index of the first maximum value is used.
        - The value range of "axis" is [-dims, dims - 1]. "dims" is the dimension length of "input_x".

    Args:
        axis (int): The dimension to reduce. Default: 0.
        keep_dims (bool): Whether to keep the reduced dimension. If true, the output keeps the same
            dimension as the input; if false, the reduced dimension is removed. Default: False.

    Inputs:
        - **input_x** (Tensor) - The input tensor, can be any dimension. Set the shape of input tensor as
          :math:`(x_1, x_2, ..., x_N)`. The data type must be mindspore.float16 or float32.

    Outputs:
        tuple (Tensor), tuple of 2 tensors, containing the corresponding index and the maximum value of the input
        tensor.

        - index (Tensor) - The index for the maximum value of the input tensor. If `keep_dims` is true, the shape of
          output tensors is :math:`(x_1, x_2, ..., x_{axis-1}, 1, x_{axis+1}, ..., x_N)`. Otherwise, the shape is
          :math:`(x_1, x_2, ..., x_{axis-1}, x_{axis+1}, ..., x_N)`.
        - output_x (Tensor) - The maximum value of input tensor, with the same shape as index.

    Raises:
        TypeError: If `keep_dims` is not a bool.
        TypeError: If `axis` is not an int.

    Supported Platforms:
        ``Ascend`` ``GPU`` ``CPU``

    Examples:
        >>> input_x = Tensor(np.array([0.0, 0.4, 0.6, 0.7, 0.1]), mindspore.float32)
        >>> index, output = ops.ArgMaxWithValue()(input_x)
        >>> print(index, output)
        3 0.7
        >>> index, output = ops.ArgMaxWithValue(keep_dims=True)(input_x)
        >>> print(index, output)
        [3] [0.7]
    """

    @prim_attr_register
    def __init__(self, axis=0, keep_dims=False):
        """Initialize ArgMaxWithValue"""
        self.axis = axis
        self.keep_dims = keep_dims
        validator.check_value_type('keep_dims', keep_dims, [bool], self.name)
        validator.check_value_type('axis', axis, [int], self.name)

    def infer_shape(self, x_shape):
        axis = self.axis
        x_rank = len(x_shape)
        validator.check_int_range(axis, -x_rank, x_rank, Rel.INC_LEFT, "axis", self.name)
        output_shape = _infer_shape_reduce(x_shape, self.axis, self.keep_dims, self.name)
        return output_shape, output_shape

    def infer_dtype(self, x_dtype):
        validator.check_subclass("input_x", x_dtype, mstype.tensor, self.name)
        return mstype.tensor_type(mstype.int32), x_dtype
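# The reduce semantics documented for ArgMaxWithValue above can be mirrored with a
# small NumPy sketch. `argmax_with_value` below is a hypothetical reference helper
# written for illustration; it is not part of this module or of MindSpore.

```python
import numpy as np

def argmax_with_value(x, axis=0, keep_dims=False):
    """Illustrative NumPy equivalent of ArgMaxWithValue's (index, value) output."""
    index = np.argmax(x, axis=axis)   # index of the first maximum along `axis`
    value = np.max(x, axis=axis)      # the maximum value itself
    if keep_dims:
        # Keep the reduced axis as a size-1 dimension, as keep_dims=True does.
        index = np.expand_dims(index, axis)
        value = np.expand_dims(value, axis)
    return index, value

x = np.array([0.0, 0.4, 0.6, 0.7, 0.1], np.float32)
index, value = argmax_with_value(x)
print(index, value)  # 3 0.7
```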


class ArgMinWithValue(PrimitiveWithInfer):
    """
    Calculates the minimum value along the given axis for the input tensor, and returns the
    minimum values and their indices.

    Note:
        In auto_parallel and semi_auto_parallel mode, the first output index cannot be used.

    .. warning::
        - If there are multiple minimum values, the index of the first minimum value is used.
        - The value range of "axis" is [-dims, dims - 1]. "dims" is the dimension length of "input_x".

    Args:
        axis (int): The dimension to reduce. Default: 0.
        keep_dims (bool): Whether to keep the reduced dimension. If true, the output keeps the same
            dimension as the input; if false, the reduced dimension is removed. Default: False.

    Inputs:
        - **input_x** (Tensor) - The input tensor, can be any dimension. Set the shape of input tensor as
          :math:`(x_1, x_2, ..., x_N)`.

    Outputs:
        tuple (Tensor), tuple of 2 tensors, containing the corresponding index and the minimum value of the input
        tensor.

        - index (Tensor) - The index for the minimum value of the input tensor. If `keep_dims` is true, the shape of
          output tensors is :math:`(x_1, x_2, ..., x_{axis-1}, 1, x_{axis+1}, ..., x_N)`. Otherwise, the shape is
          :math:`(x_1, x_2, ..., x_{axis-1}, x_{axis+1}, ..., x_N)`.
        - output_x (Tensor) - The minimum value of input tensor, with the same shape as index.

    Raises:
        TypeError: If `keep_dims` is not a bool.
        TypeError: If `axis` is not an int.

    Supported Platforms:
        ``Ascend`` ``GPU`` ``CPU``

    Examples:
        >>> input_x = Tensor(np.array([0.0, 0.4, 0.6, 0.7, 0.1]), mindspore.float32)
        >>> output = ops.ArgMinWithValue()(input_x)
        >>> print(output)
        (Tensor(shape=[], dtype=Int32, value= 0), Tensor(shape=[], dtype=Float32, value= 0))
        >>> output = ops.ArgMinWithValue(keep_dims=True)(input_x)
        >>> print(output)
        (Tensor(shape=[1], dtype=Int32, value= [0]), Tensor(shape=[1], dtype=Float32, value= [ 0.00000000e+00]))
    """

    @prim_attr_register
    def __init__(self, axis=0, keep_dims=False):
        """Initialize ArgMinWithValue"""
        self.axis = axis
        self.keep_dims = keep_dims
        validator.check_value_type('keep_dims', keep_dims, [bool], self.name)
        validator.check_value_type('axis', axis, [int], self.name)

    def infer_shape(self, x_shape):
        axis = self.axis
        x_rank = len(x_shape)
        validator.check_int_range(axis, -x_rank, x_rank, Rel.INC_LEFT, "axis", self.name)
        output_shape = _infer_shape_reduce(x_shape, self.axis, self.keep_dims, self.name)
        return output_shape, output_shape

    def infer_dtype(self, x_dtype):
        validator.check_subclass("input_x", x_dtype, mstype.tensor, self.name)
        return mstype.tensor_type(mstype.int32), x_dtype


class Tile(PrimitiveWithInfer):
    r"""
    Replicates a tensor a given number of times.

    Creates a new tensor by replicating `input_x` `multiples` times. The i'th dimension of the
    output tensor has `input_x.shape(i) * multiples[i]` elements, and the values of `input_x`
    are replicated `multiples[i]` times along the i'th dimension.

    Note:
        The length of `multiples` must be greater than or equal to the length of dimension in `input_x`.

    Inputs:
        - **input_x** (Tensor) - 1-D or higher Tensor. Set the shape of input tensor as
          :math:`(x_1, x_2, ..., x_S)`.
        - **multiples** (tuple[int]) - The input tuple is constructed by multiple
          integers, i.e., :math:`(y_1, y_2, ..., y_S)`. The length of `multiples`
          cannot be smaller than the length of the shape of `input_x`.
          Only constant value is allowed.

    Outputs:
        Tensor, has the same data type as the `input_x`.

        - If the length of `multiples` is the same as the length of shape of `input_x`,
          then the shape of their corresponding positions can be multiplied, and
          the shape of Outputs is :math:`(x_1*y_1, x_2*y_2, ..., x_S*y_S)`.
        - If the length of `multiples` is larger than the length of shape of `input_x`,
          fill in multiple 1 in the length of the shape of `input_x` until their lengths are consistent.
          Such as set the shape of `input_x` as :math:`(1, ..., x_1, x_2, ..., x_S)`,
          then the shape of their corresponding positions can be multiplied, and
          the shape of Outputs is :math:`(1*y_1, ..., x_S*y_R)`.

    Raises:
        TypeError: If `multiples` is not a tuple or its elements are not all int.
        ValueError: If the elements of `multiples` are not all greater than 0.
        ValueError: If the length of `multiples` is smaller than the length of dimension in `input_x`.

    Supported Platforms:
        ``Ascend`` ``GPU`` ``CPU``

    Examples:
        >>> tile = ops.Tile()
        >>> input_x = Tensor(np.array([[1, 2], [3, 4]]), mindspore.float32)
        >>> multiples = (2, 3)
        >>> output = tile(input_x, multiples)
        >>> print(output)
        [[1. 2. 1. 2. 1. 2.]
         [3. 4. 3. 4. 3. 4.]
         [1. 2. 1. 2. 1. 2.]
         [3. 4. 3. 4. 3. 4.]]
        >>> multiples = (2, 3, 2)
        >>> output = tile(input_x, multiples)
        >>> print(output)
        [[[1. 2. 1. 2.]
          [3. 4. 3. 4.]
          [1. 2. 1. 2.]
          [3. 4. 3. 4.]
          [1. 2. 1. 2.]
          [3. 4. 3. 4.]]
         [[1. 2. 1. 2.]
          [3. 4. 3. 4.]
          [1. 2. 1. 2.]
          [3. 4. 3. 4.]
          [1. 2. 1. 2.]
          [3. 4. 3. 4.]]]
    """

    @prim_attr_register
    def __init__(self):
        """Initialize Tile"""
        self.init_prim_io_names(inputs=['x', 'multiples'], outputs=['output'])

    def check_elim(self, base_tensor, multiplier):
        if not isinstance(base_tensor, Tensor):
            raise TypeError(f"For '{self.name}', the type of 'input_x' should be Tensor, "
                            f"but got {type(base_tensor).__name__}.")
        if all(v == 1 for v in multiplier):
            return (True, base_tensor)
        return (False, None)

    def __infer__(self, x, multiples):
        multiples_v = multiples['value']
        if multiples_v is None:
            if len(multiples['shape']) != 1:
                raise ValueError(f"For '{self.name}' the dim of multiples must be 1.")
            rank = max(len(x['shape']), multiples['shape'][0])
            out_shape = [-1] * rank
            # Tile can't infer min/max shape if multiples_v is None.
            return {'shape': out_shape,
                    'dtype': x['dtype'],
                    'value': None,
                    'min_shape': [1] * rank,
                    'max_shape': [1] * rank}
        x_shp = x['shape']
        validator.check_value_type("multiples", multiples_v, [tuple], self.name)
        for i, multiple in enumerate(multiples_v):
            validator.check_positive_int(multiple, "multiples[%d]" % i, self.name)
        validator.check_value_type("x['dtype']", x["dtype"], mstype.tensor_type, self.name)
        len_sub = len(multiples_v) - len(x_shp)
        multiples_w = None
        if len_sub == 0:
            multiples_w = multiples_v
        if len_sub > 0:
            for _ in range(0, len_sub):
                x_shp.insert(0, 1)
            multiples_w = multiples_v
        elif len_sub < 0:
            raise ValueError(f"For '{self.name}', the length of 'multiples' can not be smaller than "
                             f"the dimension of 'input_x', but got length of 'multiples': {len(multiples_v)} "
                             f"and dimension of 'input_x': {len(x_shp)}.")
        for i, a in enumerate(multiples_w):
            x_shp[i] *= a
        value = None
        if x['value'] is not None:
            value = Tensor(np.tile(x['value'].asnumpy(), multiples_w))
        return {'shape': x_shp,
                'dtype': x['dtype'],
                'value': value}
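# The shape rule in Tile's __infer__ above (left-pad the input shape with 1s until it
# matches len(multiples), then multiply element-wise) can be sketched standalone.
# `tile_output_shape` is a hypothetical helper for illustration only.

```python
import numpy as np

def tile_output_shape(x_shape, multiples):
    """Illustrative sketch of Tile's output-shape rule."""
    len_sub = len(multiples) - len(x_shape)
    if len_sub < 0:
        raise ValueError("'multiples' cannot be shorter than the shape of 'input_x'")
    # Left-pad the input shape with 1s so both sequences have the same length.
    padded = [1] * len_sub + list(x_shape)
    # The output dimension is the element-wise product.
    return [d * m for d, m in zip(padded, multiples)]

print(tile_output_shape((2, 2), (2, 3)))     # [4, 6]
print(tile_output_shape((2, 2), (2, 3, 2)))  # [2, 6, 4]
# The rule agrees with NumPy's tiling behaviour:
assert np.tile(np.ones((2, 2)), (2, 3, 2)).shape == (2, 6, 4)
```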


class UnsortedSegmentSum(PrimitiveWithInfer):
    r"""
    Computes the sum of a tensor along segments.

    Calculates a tensor such that :math:`\text{output}[i] = \sum_{segment\_ids[j] == i} \text{data}[j, \ldots]`, where
    :math:`j` is a tuple describing the index of an element in data. `segment_ids` selects which elements in data to
    sum up. Segment_ids does not need to be sorted, and it does not need to cover all values in the entire valid
    value range.

    The following figure shows the calculation process of UnsortedSegmentSum:

    .. image:: api_img/UnsortedSegmentSum.png

    Note:
        If the segment_id i is absent in the segment_ids, then output[i] will be filled with 0.
        If the segment for a given segment_id :math:`i` is empty, then :math:`\text{output}[i] = 0`. If a given
        segment id is negative, the value will be ignored. 'num_segments' must be equal to the number of
        different segment_ids.

    Inputs:
        - **input_x** (Tensor) - The shape is :math:`(x_1, x_2, ..., x_R)`.
        - **segment_ids** (Tensor) - Set the shape as :math:`(x_1, x_2, ..., x_N)`, where 0 < N <= R.
        - **num_segments** (int) - Set :math:`z` as num_segments.

    Outputs:
        Tensor, the shape is :math:`(z, x_{N+1}, ..., x_R)`.

    Raises:
        TypeError: If `num_segments` is not an int.
        ValueError: If length of shape of `segment_ids` is less than 1.

    Supported Platforms:
        ``Ascend`` ``GPU`` ``CPU``

    Examples:
        >>> input_x = Tensor([1, 2, 3, 4], mindspore.float32)
        >>> segment_ids = Tensor([0, 0, 1, 2], mindspore.int32)
        >>> num_segments = 4
        >>> output = ops.UnsortedSegmentSum()(input_x, segment_ids, num_segments)
        >>> print(output)
        [3. 3. 4. 0.]
        >>> input_x = Tensor([1, 2, 3, 4, 2, 5], mindspore.float32)
        >>> segment_ids = Tensor([0, 0, 1, 2, 3, 4], mindspore.int32)
        >>> num_segments = 6
        >>> output = ops.UnsortedSegmentSum()(input_x, segment_ids, num_segments)
        >>> print(output)
        [3. 3. 4. 2. 5. 0.]
    """

    @prim_attr_register
    def __init__(self):
        """Initialize UnsortedSegmentSum"""
        self.init_prim_io_names(inputs=['x', 'segment_ids', 'num_segments'], outputs=['y'])

    def __infer__(self, x, segment_ids, num_segments):
        x_type = x['dtype']
        x_shp = x['shape']
        validator.check_subclass("input_x", x_type, mstype.tensor, self.name)
        validator.check_value_type("x_shape", x_shp, [list], self.name)
        x_shp_len = len(x_shp)
        validator.check_positive_int(x_shp_len, "rank of input_x", self.name)
        segment_ids_shp = segment_ids['shape']
        segment_ids_type = segment_ids['dtype']
        validator.check_subclass("segment_ids", segment_ids_type, mstype.tensor, self.name)
        validator.check_value_type("segment_ids", segment_ids_shp, [list], self.name)
        segment_ids_shp_len = len(segment_ids_shp)
        validator.check_positive_int(segment_ids_shp_len, "rank of segment_ids", self.name)
        validator.check('rank of input_x', len(x_shp),
                        'rank of segments_id', len(segment_ids_shp), Rel.GE, self.name)
        if -1 not in x_shp and -1 not in segment_ids_shp:
            # Only validate when both shapes are fully known.
            for i, value in enumerate(segment_ids_shp):
                validator.check("ids[%d]" % i, value, 'input[%d]' % i, x_shp[i], Rel.EQ, self.name)
        num_segments_v = num_segments['value']
        num_segments_type = num_segments['dtype']
        validator.check_subclass("num_segments", num_segments_type, [mstype.tensor, mstype.number], self.name)
        if isinstance(num_segments_type, type(mstype.tensor)):
            validator.check_tensor_dtype_valid("num_segments", num_segments_type, [mstype.int32, mstype.int64],
                                               self.name)
            shp = [-1]
        else:
            validator.check_value_type('num_segments', num_segments_v, [int], self.name)
            validator.check_positive_int(num_segments_v, "num_segments", self.name)
            shp = [num_segments_v]
        shp += x_shp[segment_ids_shp_len:]
        if "max_value" in num_segments and "min_value" in num_segments:
            output_max_shape = list(num_segments['max_value'])
            output_min_shape = list(num_segments['min_value'])
        else:
            if isinstance(num_segments_type, type(mstype.tensor)):
                raise ValueError(f"For '{self.name}', the dtype of 'num_segments' only support int type "
                                 f"when it is not a dynamic value, but got type of 'num_segments': "
                                 f"{num_segments_type}.")
            output_max_shape = [num_segments_v]
            output_min_shape = [num_segments_v]
        if 'max_shape' in x and 'min_shape' in x:
            max_output_incoming = x['max_shape']
            min_output_incoming = x['min_shape']
        else:
            max_output_incoming = x_shp
            min_output_incoming = x_shp
        output_max_shape += max_output_incoming[segment_ids_shp_len:]
        output_min_shape += min_output_incoming[segment_ids_shp_len:]
        return {'shape': shp,
                'max_shape': output_max_shape,
                'min_shape': output_min_shape,
                'dtype': mstype.tensor_type(x_type.element_type()),
                'value': None}
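# The semantics documented for UnsortedSegmentSum above (absent segment ids produce 0,
# negative ids are ignored) can be mirrored with a small NumPy sketch.
# `unsorted_segment_sum` is a hypothetical reference helper, not part of this module.

```python
import numpy as np

def unsorted_segment_sum(data, segment_ids, num_segments):
    """Illustrative NumPy equivalent: output[i] = sum of data[j...] with segment_ids[j...] == i."""
    # Output starts as zeros, so absent segment ids stay 0.
    output = np.zeros((num_segments,) + data.shape[segment_ids.ndim:], data.dtype)
    for idx, sid in np.ndenumerate(segment_ids):
        # Negative or out-of-range segment ids are ignored.
        if 0 <= sid < num_segments:
            output[sid] += data[idx]
    return output

data = np.array([1, 2, 3, 4], np.float32)
ids = np.array([0, 0, 1, 2], np.int32)
print(unsorted_segment_sum(data, ids, 4))  # [3. 3. 4. 0.]
```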


class UnsortedSegmentMin(PrimitiveWithCheck):
    r"""
    Computes the minimum of a tensor along segments.

    The following figure shows the calculation process of UnsortedSegmentMin:

    .. image:: api_img/UnsortedSegmentMin.png

    .. math::

        \text{output}_i = \min_{j \ldots} \text{data}[j \ldots]

    where :math:`\min` is over tuples :math:`j...` such that :math:`segment\_ids[j...] == i`.

    Note:
        If the segment_id i is absent in the segment_ids, then output[i] will be filled with
        the maximum value of the input_x's type.
        The `segment_ids` must be a non-negative tensor.

    Inputs:
        - **input_x** (Tensor) - The shape is :math:`(x_1, x_2, ..., x_R)`.
          The data type must be float16, float32 or int32.
        - **segment_ids** (Tensor) - A `1-D` tensor whose shape is :math:`(x_1)`; the values must be non-negative.
          The data type must be int32.
        - **num_segments** (int) - The value specifies the number of distinct `segment_ids`.

    Outputs:
        Tensor, set the number of `num_segments` as `N`, the shape is :math:`(N, x_2, ..., x_R)`.

    Raises:
        TypeError: If `num_segments` is not an int.
        ValueError: If length of shape of `segment_ids` is not equal to 1.

    Supported Platforms:
        ``Ascend`` ``GPU``

    Examples:
        >>> input_x = Tensor(np.array([[1, 2, 3], [4, 5, 6], [4, 2, 1]]).astype(np.float32))
        >>> segment_ids = Tensor(np.array([0, 1, 1]).astype(np.int32))
        >>> num_segments = 2
        >>> unsorted_segment_min = ops.UnsortedSegmentMin()
        >>> output = unsorted_segment_min(input_x, segment_ids, num_segments)
        >>> print(output)
        [[1. 2. 3.]
         [4. 2. 1.]]
    """

    @prim_attr_register
    def __init__(self):
        """Initialize UnsortedSegmentMin"""
        self.init_prim_io_names(inputs=['x', 'segment_ids', 'num_segments'], outputs=['y'])

    def __check__(self, x, segment_ids, num_segments):
        x_shape = x['shape']
        segment_ids_shape = segment_ids['shape']
        valid_type = [mstype.float16, mstype.float32, mstype.int32]
        validator.check_tensor_dtype_valid("x", x['dtype'], valid_type, self.name)
        validator.check_tensor_dtype_valid("segment_ids", segment_ids['dtype'], [mstype.int32], self.name)
        validator.check_equal_int(len(segment_ids_shape), 1, "rank of segment_ids_shape", self.name)
        num_segments_type = num_segments['dtype']
        validator.check_subclass("num_segments", num_segments_type, [mstype.number], self.name)
        if -1 not in x_shape and -1 not in segment_ids_shape:
            # Only validate when both shapes are fully known.
            validator.check('first shape of input_x', x_shape[0],
                            'length of segments_id', segment_ids_shape[0], Rel.EQ, self.name)
        num_segments_v = num_segments['value']
        validator.check_value_type('num_segments', num_segments_v, [int], self.name)
        validator.check_positive_int(num_segments_v, "num_segments", self.name)


class UnsortedSegmentMax(PrimitiveWithCheck):
    r"""
    Computes the maximum along segments of a tensor.

    The following figure shows the calculation process of UnsortedSegmentMax:

    .. image:: api_img/UnsortedSegmentMax.png

    .. math::

        \text{output}_i = \max_{j \ldots} \text{data}[j \ldots]

    where :math:`\max` is over tuples :math:`j...` such that :math:`segment\_ids[j...] == i`.

    Note:
        If the segment_id i is absent in the segment_ids, then output[i] will be filled with
        the minimum value of the input_x's type.

    Inputs:
        - **input_x** (Tensor) - The shape is :math:`(x_1, x_2, ..., x_R)`.
          The data type must be float16, float32 or int32.
        - **segment_ids** (Tensor) - A `1-D` tensor whose shape is :math:`(x_1)`; the values must be non-negative.
          The data type must be int32.
        - **num_segments** (int) - The value specifies the number of distinct `segment_ids`.

    Outputs:
        Tensor, set the number of `num_segments` as `N`, the shape is :math:`(N, x_2, ..., x_R)`.

    Raises:
        TypeError: If `num_segments` is not an int.
        ValueError: If length of shape of `segment_ids` is not equal to 1.

    Supported Platforms:
        ``Ascend`` ``GPU``

    Examples:
        >>> # case 1: num_segments = 2 means there are two segment ids: 0 and 1.
        >>> # In segment_ids = [0, 1, 1], the first id '0' assigns input_x[0] to segment 0,
        >>> # and the ids '1' assign input_x[1] and input_x[2] to segment 1.
        >>> # Only rows with the same segment_id are compared with each other,
        >>> # so input_x[0] ([1, 2, 3]) is not compared with the other rows.
        >>> input_x = Tensor(np.array([[1, 2, 3], [4, 5, 6], [4, 2, 1]]).astype(np.float32))
        >>> segment_ids = Tensor(np.array([0, 1, 1]).astype(np.int32))
        >>> num_segments = 2
        >>> unsorted_segment_max = ops.UnsortedSegmentMax()
        >>> output = unsorted_segment_max(input_x, segment_ids, num_segments)
        >>> print(output)
        [[1. 2. 3.]
         [4. 5. 6.]]
        >>>
        >>> # case 2: segment_ids = [0, 0, 1, 1].
        >>> # [1, 2, 3] is compared with [4, 2, 0],
        >>> # and [4, 5, 6] is compared with [4, 2, 1].
        >>> input_x = Tensor(np.array([[1, 2, 3], [4, 2, 0], [4, 5, 6], [4, 2, 1]]).astype(np.float32))
        >>> segment_ids = Tensor(np.array([0, 0, 1, 1]).astype(np.int32))
        >>> num_segments = 2
        >>> unsorted_segment_max = ops.UnsortedSegmentMax()
        >>> output = unsorted_segment_max(input_x, segment_ids, num_segments)
        >>> print(input_x.shape)
        (4, 3)
        >>> print(output)
        [[4. 2. 3.]
         [4. 5. 6.]]
        >>> # case 3: input_x with three (or more) dimensions.
        >>> # The shape of input_x is (2, 4, 3), and the length of segment_ids must equal
        >>> # the first dimension of input_x.
        >>> # Because the segment_ids are different, input_x[0] is not compared with input_x[1].
        >>> input_x = Tensor(np.array([[[1, 2, 3], [4, 2, 0], [4, 5, 6], [4, 2, 1]],
        ...                            [[1, 2, 3], [4, 2, 0], [4, 5, 6], [4, 2, 1]]]).astype(np.float32))
        >>> segment_ids = Tensor(np.array([0, 1]).astype(np.int32))
        >>> num_segments = 2
        >>> unsorted_segment_max = ops.UnsortedSegmentMax()
        >>> output = unsorted_segment_max(input_x, segment_ids, num_segments)
        >>> print(input_x.shape)
        (2, 4, 3)
        >>> print(output)
        [[[1. 2. 3.]
          [4. 2. 0.]
          [4. 5. 6.]
          [4. 2. 1.]]
         [[1. 2. 3.]
          [4. 2. 0.]
          [4. 5. 6.]
          [4. 2. 1.]]]
        >>> # case 4: Same input as case 3, but num_segments = 2 while only segment_id 0 is used.
        >>> # Since segment_id 1 is absent from segment_ids, output[1] is filled with
        >>> # the smallest possible value of input_x's type.
        >>> segment_ids = Tensor(np.array([0, 0]).astype(np.int32))
        >>> output = unsorted_segment_max(input_x, segment_ids, num_segments)
        >>> print(output)
        [[[ 1.0000000e+00  2.0000000e+00  3.0000000e+00]
          [ 4.0000000e+00  2.0000000e+00  0.0000000e+00]
          [ 4.0000000e+00  5.0000000e+00  6.0000000e+00]
          [ 4.0000000e+00  2.0000000e+00  1.0000000e+00]]
         [[-3.4028235e+38 -3.4028235e+38 -3.4028235e+38]
          [-3.4028235e+38 -3.4028235e+38 -3.4028235e+38]
          [-3.4028235e+38 -3.4028235e+38 -3.4028235e+38]
          [-3.4028235e+38 -3.4028235e+38 -3.4028235e+38]]]
    """

    @prim_attr_register
    def __init__(self):
        """Initialize UnsortedSegmentMax"""
        self.init_prim_io_names(inputs=['x', 'segment_ids', 'num_segments'], outputs=['y'])

    def __check__(self, x, segment_ids, num_segments):
        x_shape = x['shape']
        segment_ids_shape = segment_ids['shape']
        valid_type = [mstype.float16, mstype.float32, mstype.int32]
        validator.check_tensor_dtype_valid("x", x['dtype'], valid_type, self.name)
        validator.check_tensors_dtypes_same_and_valid({"segment_ids": segment_ids['dtype']},
                                                      [mstype.int32, mstype.int64], self.name)
        validator.check_equal_int(len(segment_ids_shape), 1, "rank of segment_ids_shape", self.name)
        num_segments_type = num_segments['dtype']
        validator.check_subclass("num_segments", num_segments_type, [mstype.number], self.name)
        if -1 not in x_shape and -1 not in segment_ids_shape:
            # Only validate when both shapes are fully known.
            validator.check('first shape of input_x', x_shape[0],
                            'length of segments_id', segment_ids_shape[0], Rel.EQ, self.name)
        num_segments_v = num_segments['value']
        validator.check_value_type('num_segments', num_segments_v, [int], self.name)
        validator.check_positive_int(num_segments_v, "num_segments", self.name)


class UnsortedSegmentProd(PrimitiveWithInfer):
    """
    Computes the product of a tensor along segments.

    The following figure shows the calculation process of UnsortedSegmentProd:

    .. image:: api_img/UnsortedSegmentProd.png

    Inputs:
        - **input_x** (Tensor) - The shape is :math:`(x_1, x_2, ..., x_R)`.
          With float16, float32 or int32 data type.
        - **segment_ids** (Tensor) - A `1-D` tensor whose shape is :math:`(x_1)`; the values must be non-negative.
          Data type must be int32.
        - **num_segments** (int) - The value specifies the number of distinct `segment_ids`,
          must be greater than 0.

    Outputs:
        Tensor, set the number of `num_segments` as `N`, the shape is :math:`(N, x_2, ..., x_R)`.

    Raises:
        TypeError: If `num_segments` is not an int.
        ValueError: If length of shape of `segment_ids` is not equal to 1.

    Supported Platforms:
        ``Ascend``

    Examples:
        >>> input_x = Tensor(np.array([[1, 2, 3], [4, 5, 6], [4, 2, 1]]).astype(np.float32))
        >>> segment_ids = Tensor(np.array([0, 1, 0]).astype(np.int32))
        >>> num_segments = 2
        >>> unsorted_segment_prod = ops.UnsortedSegmentProd()
        >>> output = unsorted_segment_prod(input_x, segment_ids, num_segments)
        >>> print(output)
        [[4. 4. 3.]
         [4. 5. 6.]]
    """

    @prim_attr_register
    def __init__(self):
        """Initialize UnsortedSegmentProd"""
        self.init_prim_io_names(inputs=['x', 'segment_ids', 'num_segments'], outputs=['y'])

    def __infer__(self, x, segment_ids, num_segments):
        x_type = x['dtype']
        x_shape = x['shape']
        segment_ids_shape = segment_ids['shape']
        validator.check_subclass("input_x", x_type, mstype.tensor, self.name)
        validator.check_value_type("x_shape", x_shape, [list], self.name)
        valid_type = [mstype.float16, mstype.float32, mstype.int32]
        validator.check_tensor_dtype_valid("x", x['dtype'], valid_type, self.name)
        validator.check_tensor_dtype_valid("segment_ids", segment_ids['dtype'], [mstype.int32], self.name)
        validator.check_equal_int(len(segment_ids_shape), 1, "rank of segment_ids_shape", self.name)
        validator.check('first shape of input_x', x_shape[0],
                        'length of segments_id', segment_ids_shape[0], Rel.EQ, self.name)
        num_segments_v = num_segments['value']
        validator.check_value_type('num_segments', num_segments_v, [int], self.name)
        validator.check_positive_int(num_segments_v, "num_segments", self.name)
        segment_ids_shape_len = len(segment_ids_shape)
        out_shape = [num_segments_v]
        out_shape += x_shape[segment_ids_shape_len:]
        out = {'shape': out_shape,
               'dtype': mstype.tensor_type(x_type.element_type()),
               'value': None}
        return out


class Concat(PrimitiveWithInfer):
    r"""
    Connects input tensors along the given axis.

    The input data is a tuple of tensors. These tensors have the same rank `R`. Set the given axis as `m`, and
    :math:`0 \le m < R`. Set the number of input tensors as `N`. For the :math:`i`-th tensor :math:`t_i`, it has
    the shape of :math:`(x_1, x_2, ..., x_{mi}, ..., x_R)`. :math:`x_{mi}` is the :math:`m`-th dimension of the
    :math:`i`-th tensor. Then, the shape of the output tensor is

    .. math::

        (x_1, x_2, ..., \sum_{i=1}^Nx_{mi}, ..., x_R)

    .. warning::
        The value range of "axis" is [-dims, dims - 1]. "dims" is the dimension length of "input_x".

    Args:
        axis (int): The specified axis. Default: 0.

    Inputs:
        - **input_x** (tuple, list) - A tuple or a list of input tensors.
          Suppose there are two tensors in this tuple or list, namely x1 and x2.
          To perform `Concat` in the axis 0 direction, except for the 0th axis, all other axes should be equal,
          that is, :math:`x1.shape[1] == x2.shape[1], x1.shape[2] == x2.shape[2], ..., x1.shape[R] == x2.shape[R]`,
          where the :math:`R` indicates the last axis.

    Outputs:
        - Tensor, the shape is :math:`(x_1, x_2, ..., \sum_{i=1}^Nx_{mi}, ..., x_R)`.
          The data type is the same as `input_x`.

    Raises:
        TypeError: If `axis` is not an int.

    Supported Platforms:
        ``Ascend`` ``GPU`` ``CPU``

    Examples:
        >>> input_x1 = Tensor(np.array([[0, 1], [2, 1]]).astype(np.float32))
        >>> input_x2 = Tensor(np.array([[0, 1], [2, 1]]).astype(np.float32))
        >>> op = ops.Concat()
        >>> output = op((input_x1, input_x2))
        >>> print(output)
        [[0. 1.]
         [2. 1.]
         [0. 1.]
         [2. 1.]]
        >>> op = ops.Concat(1)
        >>> output = op((input_x1, input_x2))
        >>> print(output)
        [[0. 1. 0. 1.]
         [2. 1. 2. 1.]]
    """

    @prim_attr_register
    def __init__(self, axis=0):
        """Initialize Concat"""
        validator.check_value_type("axis", axis, [int], self.name)

    def __infer__(self, input_x):
        axis = self.axis
        x_shp = input_x['shape']
        x_type = input_x['dtype']
        _, all_shp, _ = get_concat_offset(x_shp, x_type, axis, self.name)
        self.add_prim_attr('inputNums', len(x_shp))
        ret_shp = x_shp[0].copy()
        value = None
        if input_x['value'] is not None:
            value = Tensor(np.concatenate([x.asnumpy() for x in input_x['value']], axis=axis))
        ret_shp[axis] = all_shp
        out = {'shape': ret_shp,
               'dtype': x_type[0],
               'value': value}
        if -1 in x_shp[0]:
            x_min_shp = input_x['min_shape']
            ret_min_shp = x_min_shp[0].copy()
            ret_min_shp[axis] = 0
            for all_min_shp in x_min_shp:
                ret_min_shp[axis] += all_min_shp[axis]
            out['min_shape'] = ret_min_shp
            x_max_shp = input_x['max_shape']
            ret_max_shp = x_max_shp[0].copy()
            ret_max_shp[axis] = 0
            for all_max_shp in x_max_shp:
                ret_max_shp[axis] += all_max_shp[axis]
            out['max_shape'] = ret_max_shp
        return out
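# Concat's shape rule above (all non-concat dimensions must match; the concat
# dimension is the sum over inputs) can be sketched standalone.
# `concat_output_shape` is a hypothetical helper for illustration only.

```python
def concat_output_shape(shapes, axis=0):
    """Illustrative sketch of Concat's output-shape rule."""
    base = list(shapes[0])
    for shp in shapes[1:]:
        for i, (a, b) in enumerate(zip(base, shp)):
            # Every dimension except `axis` must agree across the inputs.
            if i != axis and a != b:
                raise ValueError("non-concat dimensions must match")
        # The concat dimension accumulates across inputs.
        base[axis] += shp[axis]
    return base

print(concat_output_shape([[2, 2], [2, 2]], axis=0))  # [4, 2]
print(concat_output_shape([[2, 2], [2, 2]], axis=1))  # [2, 4]
```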


class ParallelConcat(PrimitiveWithInfer):
    r"""
    Concatenates input tensors along the first dimension.

    The difference between Concat and ParallelConcat is that Concat requires all of the inputs to be computed
    before the operation begins, but does not require the input shapes to be known during graph construction.
    ParallelConcat copies pieces of the input into the output as they become available; in some situations
    this can provide a performance benefit.

    Note:
        The input tensors are all required to have size 1 in the first dimension.

    Inputs:
        - **values** (tuple, list) - A tuple or a list of input tensors. The data type and shape of these
          tensors must be the same. The data type is Number except float64.

    Outputs:
        Tensor, data type is the same as `values`.

    Raises:
        ValueError: If length of shape of `values` is less than 1.
        ValueError: If the data type and shape of these tensors are not the same.

    Supported Platforms:
        ``Ascend``

    Examples:
        >>> data1 = Tensor(np.array([[0, 1]]).astype(np.int32))
        >>> data2 = Tensor(np.array([[2, 1]]).astype(np.int32))
        >>> op = ops.ParallelConcat()
        >>> output = op((data1, data2))
        >>> print(output)
        [[0 1]
         [2 1]]
    """

    @prim_attr_register
    def __init__(self):
        """Initialize ParallelConcat"""

    def __infer__(self, values):
        x_shp = values['shape']
        x_type = values['dtype']
        validator.check_int(len(x_shp), 1, Rel.GE, f'x_shp length', self.name)
        args = {f"x_type[{i}]": elem for i, elem in enumerate(x_type)}
        validator.check_tensors_dtypes_same_and_valid(args, mstype.number_type + (mstype.bool_,), self.name)
        first_elem = x_shp[0]
        for i, elem in enumerate(x_shp[1:]):
            j = i + 1
            validator.check_equal_int(elem[0], 1, f'x_shp[{j}][0]', self.name)
            validator.check(f"x_shp[0] shape", first_elem, f"x_shp[{j}] shape", elem, Rel.EQ, self.name)
        ret_shp = x_shp[0].copy()
        ret_shp[0] = len(x_shp)
        self.add_prim_attr('shape', ret_shp)
        self.add_prim_attr('N', len(x_shp))
        out = {'shape': ret_shp,
               'dtype': x_type[0],
               'value': None}
        return out
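ParallelConcat's shape rule (every input has size 1 in dim 0; the output's dim 0 equals the number of inputs) can be sketched with plain NumPy, reusing the values from the doctest above:

```python
import numpy as np

# Each input must have size 1 in the first dimension.
data1 = np.array([[0, 1]], dtype=np.int32)
data2 = np.array([[2, 1]], dtype=np.int32)

out = np.concatenate([data1, data2], axis=0)
assert out.shape == (2, 2)  # dim 0 == number of inputs
assert (out == np.array([[0, 1], [2, 1]])).all()
```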


def _get_stack_shape(x_shape, x_type, axis, prim_name):
    """Computes the output shape for Stack."""
    validator.check_value_type("shape", x_shape, [tuple, list], prim_name)
    validator.check_int(len(x_shape), 1, Rel.GE, "len of input_x", prim_name)
    validator.check_subclass("input_x[0]", x_type[0], mstype.tensor, prim_name)
    rank_base = len(x_shape[0])
    n = len(x_shape)
    out_shape = x_shape[0]
    validator.check_int_range(axis, -rank_base - 1, rank_base, Rel.INC_BOTH, 'axis', prim_name)
    if axis < 0:
        axis = axis + rank_base + 1
    for i in range(1, n):
        validator.check('x_type[%d]' % i, x_type[i], 'base', x_type[0], Rel.EQ, prim_name, TypeError)
        if x_shape[i] != x_shape[0]:
            raise ValueError(f"For '{prim_name}', the shape of element {i} does not match the shape of the "
                             f"first element, so the inputs cannot be stacked.")
    out_shape.insert(axis, n)
    return out_shape
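The shape rule above, stripped of validation, is just an insert into the base shape: stacking `n` tensors of rank `R` along `axis` inserts `n` at position `axis` (with negative axes wrapping over `R + 1` positions). A minimal standalone sketch:

```python
# Pure-Python sketch of the stack shape rule (validation omitted).
def stack_shape(base_shape, n, axis):
    rank = len(base_shape)
    if axis < 0:
        # Negative axes wrap over the R+1 possible insert positions.
        axis += rank + 1
    out = list(base_shape)
    out.insert(axis, n)
    return out

assert stack_shape([2, 3], n=4, axis=0) == [4, 2, 3]
assert stack_shape([2, 3], n=4, axis=-1) == [2, 3, 4]
```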


class Pack(PrimitiveWithInfer):
    """
    Same as operator Stack. Pack will be deprecated in the future.
    Please use Stack instead.
    """

    @deprecated("1.1", "Stack", True)
    @prim_attr_register
    def __init__(self, axis=0):
        """Initialize Pack"""
        validator.check_value_type("axis", axis, [int], self.name)
        self.axis = axis

    def __infer__(self, value):
        x_shape = value['shape']
        x_type = value['dtype']
        self.add_prim_attr('num', len(x_shape))
        all_shape = _get_stack_shape(x_shape, x_type, self.axis, self.name)
        out = {'shape': all_shape,
               'dtype': x_type[0],
               'value': None}
        return out


class Stack(PrimitiveWithInfer):
    r"""
    Stacks a list of tensors along the specified axis.

    Stacks the list of input tensors with the same rank `R`, output is a tensor of rank `(R+1)`.

    Given input tensors of shape :math:`(x_1, x_2, ..., x_R)` and the number of input tensors `N`,
    if :math:`0 \le axis`, the shape of the output tensor is
    :math:`(x_1, x_2, ..., x_{axis}, N, x_{axis+1}, ..., x_R)`.

    Args:
        axis (int): Dimension to stack along. Default: 0.
            Negative values wrap around. The range is [-(R+1), R+1).

    Inputs:
        - **input_x** (Union[tuple, list]) - A Tuple or list of Tensor objects with the same shape and type.

    Outputs:
        Tensor. A stacked Tensor with the same type as `input_x`.

    Raises:
        TypeError: If the data types of elements in `input_x` are not the same.
        ValueError: If the length of `input_x` is less than 1;
            or if `axis` is out of the range [-(R+1), R+1);
            or if the shapes of elements in `input_x` are not the same.

    Supported Platforms:
        ``Ascend`` ``GPU`` ``CPU``

    Examples:
        >>> data1 = Tensor(np.array([0, 1]).astype(np.float32))
        >>> data2 = Tensor(np.array([2, 3]).astype(np.float32))
        >>> stack = ops.Stack()
        >>> output = stack([data1, data2])
        >>> print(output)
        [[0. 1.]
         [2. 3.]]
    """

    @prim_attr_register
    def __init__(self, axis=0):
        """Initialize Stack"""
        validator.check_value_type("axis", axis, [int], self.name)
        self.axis = axis

    def __infer__(self, value):
        x_shape = value['shape']
        x_type = value['dtype']
        self.add_prim_attr('num', len(x_shape))
        all_shape = _get_stack_shape(x_shape, x_type, self.axis, self.name)
        tuple_value = value['value']
        input_array = []
        infered_value = None
        if tuple_value is not None:
            for item in tuple_value:
                npy_item = item.asnumpy()
                input_array.append(npy_item)
            infered_value = Tensor(np.stack(input_array, axis=self.axis))
        out = {'shape': all_shape,
               'dtype': x_type[0],
               'value': infered_value}
        return out
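The constant-folding branch above stacks the inputs' numpy values with `np.stack`. The same call, standalone, with the doctest's data:

```python
import numpy as np

data1 = np.array([0, 1], dtype=np.float32)
data2 = np.array([2, 3], dtype=np.float32)

# Mirrors the constant-folding branch: np.stack over the inputs' values.
out = np.stack([data1, data2], axis=0)
assert out.shape == (2, 2)
assert (out == np.array([[0., 1.], [2., 3.]])).all()
```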


class Unpack(PrimitiveWithInfer):
    """
    Same as operator Unstack. Unpack will be deprecated in the future.
    Please use Unstack instead.
    """

    @deprecated("1.1", "Unstack", True)
    @prim_attr_register
    def __init__(self, axis=0):
        """Initialize Unpack"""
        validator.check_value_type("axis", axis, [int], self.name)
        self.axis = axis

    def __infer__(self, x):
        validator.check_subclass("x", x['dtype'], mstype.tensor, self.name)
        x_shape = list(x['shape'])
        dim = len(x_shape)
        validator.check_int_range(self.axis, -dim, dim, Rel.INC_LEFT, 'axis value', self.name)
        if self.axis < 0:
            self.axis = self.axis + dim
        output_num = x_shape[self.axis]
        validator.check_value_type("num", output_num, [int], self.name)
        validator.check_positive_int(output_num, "output_num", self.name)
        self.add_prim_attr('num', output_num)
        output_valid_check = x_shape[self.axis] - output_num
        validator.check_int(output_valid_check, 0, Rel.EQ,
                            "The dimension which to unstack divides output_num", self.name)
        out_shapes = []
        out_dtypes = []
        out_shape = x_shape[:self.axis] + x_shape[self.axis + 1:]
        for _ in range(output_num):
            out_shapes.append(tuple(out_shape))
            out_dtypes.append(x['dtype'])
        out_shapes = tuple(out_shapes)
        out_dtypes = tuple(out_dtypes)
        out = {'shape': out_shapes,
               'dtype': out_dtypes,
               'value': None}
        return out


class Unstack(PrimitiveWithInfer):
    r"""
    Unstacks a tensor along the specified axis.

    Unstacks a tensor of rank `R` along the axis dimension, output tensors will have rank `(R-1)`.

    Given a tensor of shape :math:`(x_1, x_2, ..., x_R)`, if :math:`0 \le axis`,
    the shape of each tensor in the output is :math:`(x_1, x_2, ..., x_{axis}, x_{axis+2}, ..., x_R)`.

    This is the opposite of Stack.

    Args:
        axis (int): Dimension along which to unstack. Default: 0.
            Negative values wrap around. The range is [-R, R).

    Inputs:
        - **input_x** (Tensor) - The shape is :math:`(x_1, x_2, ..., x_R)`.
          A tensor to be unstacked and the rank of the tensor must be greater than 0.

    Outputs:
        A tuple of tensors, the shape of each tensor is the same.

    Raises:
        ValueError: If axis is out of the range [-len(input_x.shape), len(input_x.shape)).

    Supported Platforms:
        ``Ascend`` ``GPU`` ``CPU``

    Examples:
        >>> unstack = ops.Unstack()
        >>> input_x = Tensor(np.array([[1, 1, 1, 1], [2, 2, 2, 2]]))
        >>> output = unstack(input_x)
        >>> print(output)
        (Tensor(shape=[4], dtype=Int64, value= [1, 1, 1, 1]), Tensor(shape=[4], dtype=Int64, value= [2, 2, 2, 2]))
    """

    @prim_attr_register
    def __init__(self, axis=0):
        """Initialize Unstack"""
        validator.check_value_type("axis", axis, [int], self.name)
        self.axis = axis

    def __infer__(self, x):
        validator.check_subclass("x", x['dtype'], mstype.tensor, self.name)
        x_shape = list(x['shape'])
        dim = len(x_shape)
        validator.check_int_range(self.axis, -dim, dim, Rel.INC_LEFT, 'axis value', self.name)
        if self.axis < 0:
            self.axis = self.axis + dim
        output_num = x_shape[self.axis]
        validator.check_value_type("num", output_num, [int], self.name)
        validator.check_positive_int(output_num, "output_num", self.name)
        self.add_prim_attr('num', output_num)
        output_valid_check = x_shape[self.axis] - output_num
        validator.check_int(output_valid_check, 0, Rel.EQ,
                            "The dimension which to unstack divides output_num", self.name)
        out_shapes = []
        out_dtypes = []
        out_shape = x_shape[:self.axis] + x_shape[self.axis + 1:]
        for _ in range(output_num):
            out_shapes.append(tuple(out_shape))
            out_dtypes.append(x['dtype'])
        out_shapes = tuple(out_shapes)
        out_dtypes = tuple(out_dtypes)
        out = {'shape': out_shapes,
               'dtype': out_dtypes,
               'value': None}
        return out
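The shape inference above produces `x.shape[axis]` outputs, each with `axis` removed. A NumPy sketch of that semantics, using the doctest's data (indexing the moved axis plays the role of unstacking):

```python
import numpy as np

x = np.array([[1, 1, 1, 1], [2, 2, 2, 2]])
axis = 0

# There are x.shape[axis] outputs; each one drops dimension `axis`.
outputs = tuple(np.moveaxis(x, axis, 0)[i] for i in range(x.shape[axis]))
assert len(outputs) == x.shape[axis]
assert all(o.shape == (4,) for o in outputs)
assert (outputs[0] == np.array([1, 1, 1, 1])).all()
```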


class Slice(PrimitiveWithInfer):
    """
    Slices a tensor in the specified shape.

    Slices the tensor `input_x` to the shape given by `size`, starting at the location specified by `begin`.
    `begin` represents the offset in each dimension of `input_x`, and `size` represents the size of
    the output tensor.

    Note that `begin` is zero-based and `size` is one-based.

    If `size[i]` is -1, all remaining elements in dimension i are included in the slice.
    This is equivalent to setting :math:`size[i] = input_x.shape(i) - begin[i]`.

    Inputs:
        - **input_x** (Tensor): The target tensor.
          The shape is :math:`(N,*)` where :math:`*` means, any number of additional dimensions.
        - **begin** (Union[tuple, list]): The beginning of the slice. Only constant value(>=0) is allowed.
        - **size** (Union[tuple, list]): The size of the slice. Only constant value is allowed.

    Outputs:
        Tensor, the shape is the input `size`, the data type is the same as `input_x`.

    Raises:
        TypeError: If `begin` or `size` is neither tuple nor list.

    Supported Platforms:
        ``Ascend`` ``GPU`` ``CPU``

    Examples:
        >>> data = Tensor(np.array([[[1, 1, 1], [2, 2, 2]],
        ...                         [[3, 3, 3], [4, 4, 4]],
        ...                         [[5, 5, 5], [6, 6, 6]]]).astype(np.int32))
        >>> slice_op = ops.Slice()
        >>> output = slice_op(data, (1, 0, 0), (1, 1, 3))
        >>> print(output)
        [[[3 3 3]]]
        >>> output = slice_op(data, (1, 0, 0), (1, 1, 2))
        >>> print(output)
        [[[3 3]]]
        >>> output = slice_op(data, (1, 0, 0), (1, 1, 1))
        >>> print(output)
        [[[3]]]
        >>> output = slice_op(data, (1, 1, 0), (1, 1, 3))
        >>> print(output)
        [[[4 4 4]]]
        >>> output = slice_op(data, (1, 0, 1), (1, 1, 2))
        >>> print(output)
        [[[3 3]]]
    """

    @prim_attr_register
    def __init__(self):
        """Initialize Slice"""
        self.init_prim_io_names(inputs=['x', 'begin', 'size'], outputs=['output'])

    def __infer__(self, x, begin, size):
        x_shape = x['shape']
        x_shp_len = len(x_shape)
        begin_v, size_v = begin['value'], size['value']
        if begin_v is None or size_v is None:
            # If size_v is not None and begin_v is None, the output shape should also be dynamic.
            if size_v is None:
                if size['shape'][0] < 0:
                    raise ValueError(f"For '{self.name}', dynamic shape of 'size' is not supported yet.")
                out_shape = [-1] * size['shape'][0]
            else:
                out_shape = [-1] * len(size_v)
            if 'max_shape' in x:
                max_shape = x['max_shape']
                min_shape = x['min_shape']
            else:
                min_shape = x['shape']
                max_shape = x['shape']
            return {'shape': out_shape,
                    'dtype': x['dtype'],
                    'value': None,
                    'min_shape': min_shape,
                    'max_shape': max_shape}
        validator.check_valid_input('begin', begin['value'], self.name)
        validator.check_valid_input('size', size['value'], self.name)
        validator.check_value_type("input begin", begin_v, [tuple, list], self.name)
        validator.check_value_type("input size", size_v, [tuple, list], self.name)
        for key, value in zip(('begin', 'size'), (begin_v, size_v)):
            validator.check(f'len of {key}', len(value),
                            'len x\'s dim', x_shp_len)
        size_v = list(size_v)
        if -1 not in x_shape:
            for i in range(x_shp_len):
                if size_v[i] == -1:
                    size_v[i] = x_shape[i] - begin_v[i]
                validator.check_positive_int(size_v[i], f'input size[{i}]')
                validator.check_non_negative_int(begin_v[i], f'input begin[{i}]')
                if x_shape[i] < begin_v[i] + size_v[i]:
                    y = begin_v[i] + size_v[i]
                    raise ValueError(f"For '{self.name}', the sliced shape cannot be greater than the origin "
                                     f"shape, but got sliced shape: {y}, and origin shape: {x_shape}.")
        return {'shape': size_v,
                'dtype': x['dtype'],
                'value': None}
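The begin/size semantics above (including `size[i] == -1` meaning "to the end of dimension i") map directly onto basic NumPy slicing. A standalone sketch using the doctest's data; `slice_like` is a hypothetical helper for illustration, not part of this module:

```python
import numpy as np

data = np.array([[[1, 1, 1], [2, 2, 2]],
                 [[3, 3, 3], [4, 4, 4]],
                 [[5, 5, 5], [6, 6, 6]]], dtype=np.int32)

def slice_like(x, begin, size):
    # size[i] == -1 means "all remaining elements of dimension i".
    idx = tuple(slice(b, x.shape[i] if s == -1 else b + s)
                for i, (b, s) in enumerate(zip(begin, size)))
    return x[idx]

out = slice_like(data, (1, 0, 0), (1, 1, 3))
assert (out == np.array([[[3, 3, 3]]])).all()
assert slice_like(data, (1, 0, 0), (1, 1, -1)).shape == (1, 1, 3)
```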


class ReverseV2(PrimitiveWithInfer):
    """
    Reverses specific dimensions of a tensor.

    .. warning::
        The value range of "axis" is [-dims, dims - 1]. "dims" is the dimension length of "input_x".

    Args:
        axis (Union[tuple(int), list(int)]): The indices of the dimensions to reverse.

    Inputs:
        - **input_x** (Tensor) - The target tensor. The data type is Number except float64.
          The shape is :math:`(N,*)` where :math:`*` means, any number of additional dimensions.

    Outputs:
        Tensor, has the same shape and type as `input_x`.

    Raises:
        TypeError: If `axis` is neither list nor tuple.
        TypeError: If element of `axis` is not an int.

    Supported Platforms:
        ``Ascend`` ``GPU``

    Examples:
        >>> input_x = Tensor(np.array([[1, 2, 3, 4], [5, 6, 7, 8]]), mindspore.int32)
        >>> op = ops.ReverseV2(axis=[1])
        >>> output = op(input_x)
        >>> print(output)
        [[4 3 2 1]
         [8 7 6 5]]
        >>> op = ops.ReverseV2(axis=[1, 0])
        >>> output = op(input_x)
        >>> print(output)
        [[8 7 6 5]
         [4 3 2 1]]
    """

    @prim_attr_register
    def __init__(self, axis):
        """Initialize ReverseV2."""
        validator.check_value_type('axis', axis, [list, tuple], self.name)
        for i, each in enumerate(axis):
            validator.check_value_type(f'axis[{i}]', each, [int], self.name)
        self.axis = axis
        self.init_prim_io_names(inputs=['x'], outputs=['output'])

    def infer_shape(self, x_shape):
        dim = len(x_shape)
        for i, each in enumerate(self.axis):
            validator.check_int_range(each, -dim, dim, Rel.INC_LEFT, f'axis[{i}]', self.name)
        normalized_axis = []
        for i, v in enumerate(self.axis):
            if v < 0:
                normalized_axis.append(v + dim)
            else:
                normalized_axis.append(v)
        if len(normalized_axis) != len(set(normalized_axis)):
            duplicated = [item for item, count in Counter(normalized_axis).items() if count > 1]
            raise ValueError(f"For '{self.name}', the 'axis' cannot contain duplicate dimensions,"
                             f" but got duplicated elements {duplicated}.")
        return x_shape

    def infer_dtype(self, x_dtype):
        validator.check_tensor_dtype_valid('x', x_dtype, (mstype.bool_,) + mstype.number_type, self.name)
        return x_dtype
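The reversal semantics match `np.flip` along the given axes. A standalone sketch reproducing the two doctest cases above:

```python
import numpy as np

x = np.array([[1, 2, 3, 4], [5, 6, 7, 8]], dtype=np.int32)

# Reverse along axis 1, then along both axes, as in the examples above.
assert (np.flip(x, axis=1) == np.array([[4, 3, 2, 1],
                                        [8, 7, 6, 5]])).all()
assert (np.flip(x, axis=(1, 0)) == np.array([[8, 7, 6, 5],
                                             [4, 3, 2, 1]])).all()
```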


class Rint(PrimitiveWithInfer):
    """
    Returns the integer that is closest to the input element-wise.

    Inputs:
        - **input_x** (Tensor) - The target tensor, which must be one of the following types:
          float16, float32, float64. The shape is :math:`(N,*)` where :math:`*` means,
          any number of additional dimensions.

    Outputs:
        Tensor, has the same shape and type as `input_x`.

    Raises:
        TypeError: If dtype of `input_x` is not in [float16, float32, float64].

    Supported Platforms:
        ``Ascend`` ``GPU`` ``CPU``

    Examples:
        >>> input_x = Tensor(np.array([-1.6, -0.1, 1.5, 2.0]), mindspore.float32)
        >>> op = ops.Rint()
        >>> output = op(input_x)
        >>> print(output)
        [-2. 0. 2. 2.]
        >>> input_x = Tensor(np.array([[-2.0, -1.9, -1.8, -1.7, -1.6],
        ...                            [-2.0, -1.9, -1.8, -1.7, -1.6]]), mindspore.float32)
        >>> output = op(input_x)
        >>> print(output)
        [[-2. -2. -2. -2. -2.]
         [-2. -2. -2. -2. -2.]]
    """

    @prim_attr_register
    def __init__(self):
        """Initialize Rint."""
        self.init_prim_io_names(inputs=['x'], outputs=['output'])

    def infer_shape(self, x_shape):
        return x_shape

    def infer_dtype(self, x_dtype):
        validator.check_tensor_dtype_valid('x', x_dtype, [mstype.float16, mstype.float32, mstype.float64], self.name)
        return x_dtype


class Select(Primitive):
    r"""
    Returns the selected elements, either from input :math:`x` or input :math:`y`, depending on the `condition`.

    The `condition` tensor acts as a mask that determines, element by element, whether the corresponding
    element in the output is taken from :math:`x` (if the condition element is true) or from :math:`y`
    (if it is false).

    It can be defined as:

    .. math::

        out_i = \begin{cases}
        x_i, & \text{if } condition_i \\
        y_i, & \text{otherwise}
        \end{cases}

    Inputs:
        - **input_cond** (Tensor[bool]) - The shape is :math:`(x_1, x_2, ..., x_N, ..., x_R)`.
          The condition tensor, decides which element is chosen.
        - **input_x** (Tensor) - The shape is :math:`(x_1, x_2, ..., x_N, ..., x_R)`.
          The first input tensor.
        - **input_y** (Tensor) - The shape is :math:`(x_1, x_2, ..., x_N, ..., x_R)`.
          The second input tensor.

    Outputs:
        Tensor, has the same shape as `input_x`. The shape is :math:`(x_1, x_2, ..., x_N, ..., x_R)`.

    Raises:
        TypeError: If `input_x` or `input_y` is not a Tensor.
        ValueError: If shape of `input_x` is not equal to shape of `input_y` or shape of `input_cond`.

    Supported Platforms:
        ``Ascend`` ``GPU`` ``CPU``

    Examples:
        >>> select = ops.Select()
        >>> input_cond = Tensor([True, False])
        >>> input_x = Tensor([2, 3], mindspore.float32)
        >>> input_y = Tensor([1, 2], mindspore.float32)
        >>> output = select(input_cond, input_x, input_y)
        >>> print(output)
        [2. 2.]
    """

    @prim_attr_register
    def __init__(self):
        """Initialize Select."""
        self.init_prim_io_names(inputs=['condition', 'x', 'y'], outputs=['output'])
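The element-wise selection rule above is the same one `np.where` implements; a standalone sketch with the doctest's values:

```python
import numpy as np

cond = np.array([True, False])
x = np.array([2, 3], dtype=np.float32)
y = np.array([1, 2], dtype=np.float32)

# out[i] = x[i] if cond[i] else y[i]
out = np.where(cond, x, y)
assert (out == np.array([2., 2.], dtype=np.float32)).all()
```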


def _compute_slicing_length(begin, end, stride, x_shape, i):
    """Computes the length of the slicing."""
    if i >= len(x_shape):
        raise ValueError(f"For 'StridedSlice', the index must be less than "
                         f"the dimension of 'input_x', but got the dimension of 'input_x': {len(x_shape)} "
                         f"and the index: {i}.")
    x_dim = x_shape[i]
    if stride > 0:
        # When slicing forward, convert begin and end to positive numbers.
        if begin >= x_dim or end < -x_dim:
            # When slicing forward, if begin >= x_dim or end < -x_dim, the length of the slicing is 0.
            slicing_length = 0
        else:
            if -x_dim <= begin < 0:
                begin += x_dim
            if begin < -x_dim:
                # When slicing forward, if begin < -x_dim, set begin = 0, which means start from the 0th element.
                begin = 0
            if -x_dim <= end < 0:
                end += x_dim
            if end > x_dim:
                # When slicing forward, if end > x_dim, set end = x_dim, which means slice to the last element.
                end = x_dim
            if begin >= end:
                # When slicing forward, if begin >= end, the length of the slicing is 0.
                slicing_length = 0
            else:
                slicing_length = 1 + (end - 1 - begin) // stride
    else:
        # When slicing backward, convert begin and end to negative numbers.
        if begin < -x_dim or end >= x_dim:
            # When slicing backward, if begin < -x_dim or end >= x_dim, the length of the slicing is 0.
            slicing_length = 0
        else:
            if 0 <= begin < x_dim:
                begin += -x_dim
            if begin >= x_dim:
                begin = -1
            if 0 <= end < x_dim:
                end += -x_dim
            if end < -x_dim - 1:
                # Slicing to the 0th element.
                end = -x_dim - 1
            if begin <= end:
                slicing_length = 0
            else:
                slicing_length = 1 + (end + 1 - begin) // stride
    return slicing_length
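The forward/backward arithmetic above is meant to agree with ordinary Python slicing. A standalone cross-check that uses Python's own slicing as the reference behavior (this is an independent sketch, not a call into the helper above):

```python
# Reference behavior: the slice length is what Python list slicing yields
# after its own clamping of out-of-range begin/end.
def slicing_length(begin, end, stride, x_dim):
    return len(list(range(x_dim))[slice(begin, end, stride)])

assert slicing_length(0, 5, 1, 5) == 5
assert slicing_length(-3, 5, 2, 5) == 2   # elements at index 2 and 4
assert slicing_length(4, -6, -1, 5) == 5  # reverse slice over all elements
assert slicing_length(3, 3, 1, 5) == 0    # empty slice
```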


class StridedSlice(PrimitiveWithInfer):
    r"""
    Extracts a strided slice of a tensor.

    This operation extracts a fragment of size (end-begin)/stride from the given 'input_tensor'.
    Starting from the beginning position, the fragment continues adding stride to the index until
    all dimensions are not less than the ending position.

    Given `input_x[m1, m2, ..., mn]`, `begin`, `end` and `strides` will be vectors of length n.

    In each mask field (`begin_mask`, `end_mask`, `ellipsis_mask`, `new_axis_mask`, `shrink_axis_mask`)
    the ith bit will correspond to the ith m.

    If the ith bit of `begin_mask` is set, `begin[i]` is ignored and the fullest possible range in that dimension
    is used instead. `end_mask` is analogous, except with the end range.
    For a 5*6*7 tensor, `x[2:,:3,:]` is equivalent to `x[2:5,0:3,0:7]`.

    If the ith bit of `ellipsis_mask` is set, as many unspecified dimensions as needed will be inserted between
    other dimensions. Only one non-zero bit is allowed in `ellipsis_mask`.
    For a 5*6*7*8 tensor, `x[2:,...,:6]` is equivalent to `x[2:5,:,:,0:6]`.
    `x[2:,...]` is equivalent to `x[2:5,:,:,:]`.

    If the ith bit of `new_axis_mask` is set, `begin`, `end` and `strides` are ignored and a new dimension of
    length 1 is added at the corresponding position in the output tensor.
    For a 5*6*7 tensor, `x[:2, newaxis, :6]` will produce a tensor with shape (2, 1, 7).

    If the ith bit of `shrink_axis_mask` is set, the ith dimension is shrunk away, taking on the value
    at index `begin[i]`; `end[i]` and `strides[i]` are ignored.
    For a 5*6*7 tensor, `x[:, 5, :]` corresponds to `shrink_axis_mask` equal to 4.

    Note:
        The stride may be a negative value, which causes reverse slicing.
        The shapes of `begin`, `end` and `strides` must be the same.
        `begin` and `end` are zero-indexed. The elements of `strides` must be non-zero.

    Args:
        begin_mask (int): Starting index of the slice. Default: 0.
        end_mask (int): Ending index of the slice. Default: 0.
        ellipsis_mask (int): An int mask. Default: 0.
        new_axis_mask (int): An int mask. Default: 0.
        shrink_axis_mask (int): An int mask. Default: 0.

    Inputs:
        - **input_x** (Tensor) - The input Tensor.
        - **begin** (tuple[int]) - A tuple which represents the location where to start. Only
          constant value is allowed.
        - **end** (tuple[int]) - A tuple which represents the maximum location where to end.
          Only constant value is allowed.
        - **strides** (tuple[int]) - A tuple which represents the stride that is continuously added
          before reaching the maximum location. Only constant value is allowed.

    Outputs:
        Tensor, the output is explained by the following example.

        In the 0th dimension, begin is 1, end is 2, and strides is 1,
        because :math:`1+1=2\geq2`, the interval is :math:`[1,2)`.
        Thus, return the element with :math:`index = 1` in 0th dimension, i.e., [[3, 3, 3], [4, 4, 4]].

        In the 1st dimension, similarly, the interval is :math:`[0,1)`.
        Based on the return value of the 0th dimension, return the element with :math:`index = 0`,
        i.e., [3, 3, 3].

        In the 2nd dimension, similarly, the interval is :math:`[0,3)`.
        Based on the return value of the 1st dimension, return the element with :math:`index = 0,1,2`,
        i.e., [3, 3, 3].

        Finally, the output is [3, 3, 3].

    Raises:
        TypeError: If `begin_mask`, `end_mask`, `ellipsis_mask`, `new_axis_mask` or `shrink_axis_mask` is not an int.
        TypeError: If `begin`, `end` or `strides` is not a tuple.
        ValueError: If `begin_mask`, `end_mask`, `ellipsis_mask`, `new_axis_mask` or `shrink_axis_mask` is less than 0.

    Supported Platforms:
        ``Ascend`` ``GPU`` ``CPU``

    Examples:
        >>> input_x = Tensor([[[1, 1, 1], [2, 2, 2]], [[3, 3, 3], [4, 4, 4]],
        ...                   [[5, 5, 5], [6, 6, 6]]], mindspore.float32)
        >>> #  [[[1. 1. 1.]
        >>> #    [2. 2. 2.]]
        >>> #
        >>> #   [[3. 3. 3.]
        >>> #    [4. 4. 4.]]
        >>> #
        >>> #   [[5. 5. 5.]
        >>> #    [6. 6. 6.]]]
        >>> # In order to visually view the multi-dimensional array, write the above as follows:
        >>> #  [
        >>> #   [
        >>> #    [1,1,1]
        >>> #    [2,2,2]
        >>> #   ]
        >>> #   [
        >>> #    [3,3,3]
        >>> #    [4,4,4]
        >>> #   ]
        >>> #   [
        >>> #    [5,5,5]
        >>> #    [6,6,6]
        >>> #   ]
        >>> #  ]
        >>> strided_slice = ops.StridedSlice()
        >>> output = strided_slice(input_x, (1, 0, 2), (3, 1, 3), (1, 1, 1))
        >>> # Take " output = strided_slice(input_x, (1, 0, 2), (3, 1, 3), (1, 1, 1)) " as an example:
        >>> # start = [1, 0, 2], end = [3, 1, 3], stride = [1, 1, 1]. Find the segment (start, end);
        >>> # note that end is an open interval.
        >>> # To facilitate understanding, this operator can be divided into three steps:
        >>> # Step 1: Calculation of the first dimension:
        >>> # start = 1, end = 3, stride = 1, so rows 1 and 2 can be taken, and the output of this step is:
        >>> # output_1st =
        >>> #  [
        >>> #   [
        >>> #    [3,3,3]
        >>> #    [4,4,4]
        >>> #   ]
        >>> #   [
        >>> #    [5,5,5]
        >>> #    [6,6,6]
        >>> #   ]
        >>> #  ]
        >>> # Step 2: Calculation of the second dimension:
        >>> # start = 0, end = 1, stride = 1, so only row 0 can be taken, and the output of this step is:
        >>> # output_2nd =
        >>> #  [
        >>> #   [
        >>> #    [3,3,3]
        >>> #   ]
        >>> #   [
        >>> #    [5,5,5]
        >>> #   ]
        >>> #  ]
        >>> # Step 3: Calculation of the third dimension:
        >>> # start = 2, end = 3, stride = 1, so only column 2 can be taken, and the output of this step is:
        >>> # output_3rd =
        >>> #  [
        >>> #   [
        >>> #    [3]
        >>> #   ]
        >>> #   [
        >>> #    [5]
        >>> #   ]
        >>> #  ]
        >>> # The final output is:
        >>> print(output)
        [[[3.]]
         [[5.]]]
        >>> # Another example:
        >>> output = strided_slice(input_x, (1, 0, 0), (2, 1, 3), (1, 1, 1))
        >>> print(output)
        [[[3. 3. 3.]]]
    """

    @prim_attr_register
    def __init__(self,
                 begin_mask=0,
                 end_mask=0,
                 ellipsis_mask=0,
                 new_axis_mask=0,
                 shrink_axis_mask=0):
        """Initialize StridedSlice"""
        self.init_prim_io_names(inputs=['x', 'begin', 'end', 'strides'], outputs=['output'])
        validator.check_non_negative_int(begin_mask, 'begin_mask', self.name)
        validator.check_non_negative_int(end_mask, 'end_mask', self.name)
        validator.check_non_negative_int(ellipsis_mask, 'ellipsis_mask', self.name)
        if len(tuple(filter(lambda x: x == '1', bin(ellipsis_mask)[-1:1:-1]))) > 1:
            raise ValueError(f"For '{self.name}', only support one ellipsis in the index, "
                             f"but got ellipsis_mask: {ellipsis_mask}.")
        validator.check_non_negative_int(new_axis_mask, 'new_axis_mask', self.name)
        validator.check_non_negative_int(shrink_axis_mask, 'shrink_axis_mask', self.name)
  2736. def _check_and_get_value(self, slice_input, name):
  2737. """Check begin, end, strides. Get its length and value."""
  2738. slice_value = slice_input['value']
  2739. if slice_value is None:
  2740. validator.check_tensor_dtype_valid(name, slice_input['dtype'], [mstype.int64], self.name)
  2741. slice_shape = slice_input['shape']
  2742. if len(slice_shape) != 1:
  2743. raise ValueError(f"For '{self.name}', both the 'begins', 'ends', and 'strides' must be 1-D, "
  2744. f"but got '{name}' shape: {slice_shape}.")
  2745. # not support scalar
  2746. return slice_value, slice_shape[0]
  2747. if isinstance(slice_value, Tensor_):
  2748. validator.check_tensor_dtype_valid(name, slice_input['dtype'], [mstype.int64], self.name)
  2749. slice_value = slice_value.asnumpy().tolist()
  2750. elif not isinstance(slice_value, tuple):
  2751. raise TypeError(f"For '{self.name}', both the 'begin', 'end', and 'strides' must be a tuple or Tensor, "
  2752. f"but got '{name}': {slice_value}.")
  2753. if tuple(filter(lambda x: not isinstance(x, int), slice_value)):
  2754. raise TypeError(f"For '{self.name}', the elements of 'begin', 'end', and 'strides' must be int, "
  2755. f"but got {name}: {slice_value}.")
  2756. return slice_value, len(slice_value)
  2757. def __infer__(self, x, begin, end, strides):
  2758. x_shape = x['shape']
  2759. if -1 in x_shape:
  2760. raise ValueError(f"For '{self.name}', input x is currently not support dynamic shape.")
  2761. begin_v, begin_len = self._check_and_get_value(begin, 'begin')
  2762. end_v, end_len = self._check_and_get_value(end, 'end')
  2763. strides_v, strides_len = self._check_and_get_value(strides, 'strides')
  2764. if strides_v is not None and tuple(filter(lambda x: x == 0, strides_v)):
  2765. raise ValueError(f"For '{self.name}', the 'strides' cannot contain 0, but got 'strides': {strides_v}.")
  2766. if begin_len != strides_len or end_len != strides_len:
  2767. raise ValueError(f"For '{self.name}', 'begin', 'end' and 'strides' must be the same length, but got "
  2768. f"'begin' length: {begin_len}, 'end' length: {end_len}, 'strides' length: {strides_len}.")
  2769. if None in (strides_v, begin_v, end_v):
  2770. ret_shape = self._compute_dynamic_slicing_shape(x_shape, begin_len)
  2771. ret_min_shape = [1] * len(x_shape)
  2772. ret_max_shape = x_shape
  2773. for i, val in enumerate(ret_shape):
  2774. if val > 0:
  2775. ret_min_shape[i] = val
  2776. ret_max_shape[i] = val
  2777. return {'shape': ret_shape,
  2778. 'dtype': x['dtype'],
  2779. 'value': None,
  2780. 'max_shape': ret_max_shape,
  2781. 'min_shape': ret_min_shape}
  2782. ret_shape = self._compute_slicing_shape(x_shape, begin_v, end_v, strides_v)
  2783. if all(ret_shape):
  2784. value = None
  2785. else:
  2786. init_func = Zero()
  2787. init_func.__enable_zero_dim__ = True
  2788. value = Tensor(dtype=x['dtype'].element_type(), shape=ret_shape, init=init_func)
  2789. if "max_value" in x and "min_value" in x:
  2790. validator.check_value_type("min_value", x["min_value"], [tuple, list], self.name)
  2791. validator.check_value_type("max_value", x["max_value"], [tuple, list], self.name)
  2792. max_value_np = np.array(x["max_value"])
  2793. min_value_np = np.array(x["min_value"])
  2794. slice_index = []
  2795. for begin_i, end_i, strides_i in zip(begin_v, end_v, strides_v):
  2796. s = slice(begin_i, end_i, strides_i)
  2797. slice_index.append(s)
  2798. slice_index = tuple(slice_index)
  2799. max_value_slice = max_value_np[slice_index]
  2800. min_value_slice = min_value_np[slice_index]
  2801. max_value_slice = tuple(max_value_slice.tolist())
  2802. min_value_slice = tuple(min_value_slice.tolist())
  2803. return {'shape': ret_shape,
  2804. 'dtype': x['dtype'],
  2805. 'value': value,
  2806. 'max_value': max_value_slice,
  2807. 'min_value': min_value_slice}
  2808. return {'shape': ret_shape,
  2809. 'dtype': x['dtype'],
  2810. 'value': value}
  2811. def _compute_slicing_shape(self, x_shape, begin_v, end_v, strides_v):
  2812. """Computes the shape of the slicing."""
  2813. x_rank = len(x_shape)
  2814. slice_len = len(begin_v)
2815. # bin() returns a string prefixed with '0b'; slicing with [-1:1:-1] reverses the bits and drops the prefix.
  2816. begin_pos = bin(self.begin_mask)[-1:1:-1]
  2817. end_pos = bin(self.end_mask)[-1:1:-1]
  2818. ellipsis_pos = bin(self.ellipsis_mask)[-1:1:-1]
  2819. new_axis_pos = bin(self.new_axis_mask)[-1:1:-1]
  2820. shrink_axis_pos = bin(self.shrink_axis_mask)[-1:1:-1]
  2821. ret_shape = []
  2822. i, j = 0, 0
  2823. has_ellipsis = False
  2824. while i < x_rank or j < slice_len:
  2825. if j < slice_len:
  2826. begin, end, stride = begin_v[j], end_v[j], strides_v[j]
  2827. if j < len(ellipsis_pos) and ellipsis_pos[j] == '1':
2828. # When there is an ellipsis, the dimensions after the ellipsis are processed separately.
  2829. has_ellipsis = True
  2830. break
  2831. if j < len(begin_pos) and begin_pos[j] == '1':
  2832. begin = -1 if strides_v[j] < 0 else 0
  2833. if j < len(end_pos) and end_pos[j] == '1':
  2834. end = -(x_shape[i] + 1) if strides_v[j] < 0 else x_shape[i]
  2835. if j < len(new_axis_pos) and new_axis_pos[j] == '1':
  2836. ret_shape.append(1)
  2837. j += 1
  2838. continue
  2839. if j < len(shrink_axis_pos) and shrink_axis_pos[j] == '1':
  2840. if (not -x_shape[i] <= begin < x_shape[i]) or stride < 0:
2841. raise IndexError(f"For '{self.name}', the 'strides[{i}]' cannot be a negative number and "
  2842. f"'begin[{i}]' should be in [-{x_shape[i]}, {x_shape[i]}) "
  2843. f"when 'shrink_axis_mask' is greater than 0, "
  2844. f"but got 'shrink_axis_mask': {self.shrink_axis_mask}, "
  2845. f"'strides[{i}]': {stride}, 'begin[{i}]': {begin}.")
  2846. j += 1
  2847. i += 1
  2848. continue
  2849. else:
  2850. begin, end, stride = 0, x_shape[i], 1
  2851. slicing_length = _compute_slicing_length(begin, end, stride, x_shape, i)
  2852. ret_shape.append(slicing_length)
  2853. i += 1
  2854. j += 1
  2855. if has_ellipsis:
2856. # When there is an ellipsis, handle the dimensions after the ellipsis separately.
  2857. ellipsis_occupied_dims = x_rank - i - (slice_len - (j + 1)) + \
  2858. len(tuple(filter(lambda x: x == '1', new_axis_pos[j + 1:slice_len])))
  2859. ret_shape.extend(x_shape[i:i + ellipsis_occupied_dims])
  2860. j += 1
  2861. i += ellipsis_occupied_dims
  2862. while i < x_rank or j < slice_len:
  2863. begin, end, stride = begin_v[j], end_v[j], strides_v[j]
  2864. if j < len(begin_pos) and begin_pos[j] == '1':
  2865. begin = -1 if strides_v[j] < 0 else 0
  2866. if j < len(end_pos) and end_pos[j] == '1':
  2867. end = -(x_shape[i] + 1) if strides_v[j] < 0 else x_shape[i]
  2868. if j < len(new_axis_pos) and new_axis_pos[j] == '1':
  2869. ret_shape.append(1)
  2870. j += 1
  2871. continue
  2872. if j < len(shrink_axis_pos) and shrink_axis_pos[j] == '1':
  2873. if (not -x_shape[i] <= begin < x_shape[i]) or stride < 0:
2874. raise IndexError(f"For '{self.name}', the 'strides[{i}]' cannot be a negative number and "
  2875. f"'begin[{i}]' should be in [-{x_shape[i]}, {x_shape[i]}) "
  2876. f"when 'shrink_axis_mask' is greater than 0, "
  2877. f"but got 'shrink_axis_mask': {self.shrink_axis_mask}, "
  2878. f"'strides[{i}]': {stride}, 'begin[{i}]': {begin}.")
  2879. j += 1
  2880. i += 1
  2881. continue
  2882. slicing_length = _compute_slicing_length(begin, end, stride, x_shape, i)
  2883. ret_shape.append(slicing_length)
  2884. i += 1
  2885. j += 1
  2886. return ret_shape
  2887. def _compute_dynamic_slicing_shape(self, x_shape, slice_len):
  2888. """Computes the shape of the slicing for dynamic shape, mask is currently not supported."""
  2889. x_rank = len(x_shape)
  2890. if self.begin_mask != 0 or self.end_mask != 0 or self.ellipsis_mask or self.new_axis_mask != 0 \
  2891. or self.shrink_axis_mask != 0:
  2892. raise ValueError("Mask is currently not supported if 'begin', 'end' or 'strides' is not a constant.")
  2893. ret_shape = []
  2894. i, j = 0, 0
  2895. while i < x_rank or j < slice_len:
  2896. slicing_length = -1
  2897. if j >= slice_len:
  2898. if i >= len(x_shape):
  2899. raise ValueError(f"For 'StridedSlice', the index must be less than or equal to "
  2900. f"the dimension of 'input_x', but got the dimension of 'input_x': {len(x_shape)} "
  2901. f"and the index: {i}.")
  2902. begin, end, stride = 0, x_shape[i], 1
  2903. if end > 0:
  2904. slicing_length = _compute_slicing_length(begin, end, stride, x_shape, i)
  2905. ret_shape.append(slicing_length)
  2906. i += 1
  2907. j += 1
  2908. return ret_shape
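The per-axis length used by the shape inference above follows Python slice semantics. As a rough cross-check (a NumPy-only sketch for illustration, not the actual `StridedSlice` kernel or the `_compute_slicing_length` helper itself), the same length can be derived from Python's built-in `slice.indices`:

```python
import numpy as np

def slicing_length(begin, end, stride, dim):
    # slice.indices normalizes begin/end/stride against the axis length
    # (clamping and resolving negative values), which is the same
    # normalization the shape inference above performs by hand.
    start, stop, step = slice(begin, end, stride).indices(dim)
    # Ceiling division toward the stride's direction; empty slices clamp to 0.
    return max(0, (stop - start + step + (-1 if step > 0 else 1)) // step)

x = np.arange(7)
assert slicing_length(1, 6, 2, 7) == len(x[1:6:2])        # 3
assert slicing_length(-1, -8, -1, 7) == len(x[-1:-8:-1])  # 7
assert slicing_length(5, 2, 1, 7) == len(x[5:2:1])        # 0
```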
  2909. class Diag(PrimitiveWithInfer):
  2910. r"""
2911. Constructs a diagonal tensor with given diagonal values.
  2912. Assume `input_x` has dimensions :math:`[D_1,... D_k]`, the output is a tensor of
  2913. rank 2k with dimensions :math:`[D_1,..., D_k, D_1,..., D_k]` where:
  2914. :math:`output[i_1,..., i_k, i_1,..., i_k] = input_x[i_1,..., i_k]` and 0 everywhere else.
  2915. Inputs:
2916. - **input_x** (Tensor) - The input tensor. The rank of the input tensor must be less than 5.
  2917. Outputs:
  2918. Tensor, has the same dtype as the `input_x`.
  2919. Raises:
  2920. TypeError: If `input_x` is not a Tensor.
  2921. ValueError: If rank of `input_x` is less than 1.
  2922. Supported Platforms:
  2923. ``Ascend``
  2924. Examples:
  2925. >>> input_x = Tensor([1, 2, 3, 4])
  2926. >>> diag = ops.Diag()
  2927. >>> output = diag(input_x)
  2928. >>> print(output)
  2929. [[1, 0, 0, 0],
  2930. [0, 2, 0, 0],
  2931. [0, 0, 3, 0],
  2932. [0, 0, 0, 4]]
  2933. """
  2934. @prim_attr_register
  2935. def __init__(self):
  2936. """Initialize Diag"""
  2937. def infer_dtype(self, x_type):
  2938. validator.check_subclass('input_x', x_type, mstype.tensor, self.name)
  2939. return x_type
  2940. def infer_shape(self, x_shape):
  2941. validator.check("x rank", len(x_shape), "", 1, Rel.GE)
  2942. ret_shape = copy.deepcopy(x_shape)
  2943. ret_shape = ret_shape + ret_shape
  2944. return ret_shape
  2945. def infer_value(self, x):
  2946. if x is None:
  2947. return None
  2948. # do constant-folding only when x rank is 1
  2949. if len(x.shape) != 1:
  2950. return None
  2951. ret = np.diag(x.asnumpy())
  2952. return Tensor(ret)
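The rank-2k construction described in the docstring above can be sketched in plain NumPy (an illustration only; `np.diag` covers just the rank-1 case that `infer_value` constant-folds):

```python
import numpy as np

def diag_general(x):
    # Output shape [D1, ..., Dk, D1, ..., Dk]; x lands on the
    # generalized diagonal, everything else stays zero.
    out = np.zeros(x.shape + x.shape, dtype=x.dtype)
    for idx in np.ndindex(*x.shape):
        out[idx + idx] = x[idx]
    return out

v = np.array([1, 2, 3, 4])
assert (diag_general(v) == np.diag(v)).all()
m = np.arange(6).reshape(2, 3)
assert diag_general(m).shape == (2, 3, 2, 3)
```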
  2953. class DiagPart(PrimitiveWithInfer):
  2954. r"""
  2955. Extracts the diagonal part from given tensor.
  2956. Assume input has dimensions :math:`[D_1,..., D_k, D_1,..., D_k]`, the output is a tensor
  2957. of rank k with dimensions :math:`[D_1,..., D_k]` where:
  2958. :math:`output[i_1,..., i_k] = input[i_1,..., i_k, i_1,..., i_k]`.
  2959. Inputs:
2960. - **input_x** (Tensor) - The input tensor of rank 2k, where k is not zero.
  2961. Outputs:
  2962. Tensor, the extracted diagonal has the same dtype as the `input_x`.
  2963. Raises:
  2964. TypeError: If `input_x` is not a Tensor.
  2965. ValueError: If rank of `input_x` is not even or zero.
  2966. ValueError: If input_shape[i] is not equal to input_shape[i + len(input_shape)/2].
  2967. Supported Platforms:
  2968. ``Ascend``
2969. Examples:
  2970. >>> input_x = Tensor([[1, 0, 0, 0],
  2971. ... [0, 2, 0, 0],
  2972. ... [0, 0, 3, 0],
  2973. ... [0, 0, 0, 4]])
  2974. >>> diag_part = ops.DiagPart()
  2975. >>> output = diag_part(input_x)
  2976. >>> print(output)
  2977. [1 2 3 4]
  2978. """
  2979. @prim_attr_register
  2980. def __init__(self):
  2981. """Initialize DiagPart"""
  2982. def infer_dtype(self, x_type):
  2983. validator.check_subclass('input_x', x_type, mstype.tensor, self.name)
  2984. return x_type
  2985. def infer_shape(self, x_shape):
  2986. if len(x_shape) % 2 != 0 or \
  2987. not x_shape:
  2988. raise ValueError(f"For \'{self.name}\', the dimension of 'input_x' must be non-zero and even, "
  2989. f"but got dimension {len(x_shape)}, with shapes {x_shape}.")
  2990. length = len(x_shape) // 2
  2991. for i in range(length):
  2992. validator.check('input_shape[i + len(input_shape)/2]', x_shape[i + length],
  2993. 'input_shape[i]', x_shape[i], Rel.EQ, self.name)
  2994. ret_shape = x_shape[0:length]
  2995. return ret_shape
  2996. def infer_value(self, x):
  2997. if x is None:
  2998. return None
  2999. # do constant-folding only when x rank is 2
  3000. if len(x.shape) != 2:
  3001. return None
  3002. ret = np.diag(x.asnumpy())
  3003. return Tensor(ret)
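DiagPart inverts the construction performed by Diag; a NumPy sketch of the general rank-2k extraction (illustrative only, not the device kernel):

```python
import numpy as np

def diag_part_general(x):
    # Input shape [D1, ..., Dk, D1, ..., Dk]; read the generalized
    # diagonal back into shape [D1, ..., Dk].
    k = x.ndim // 2
    out = np.empty(x.shape[:k], dtype=x.dtype)
    for idx in np.ndindex(*x.shape[:k]):
        out[idx] = x[idx + idx]
    return out

assert (diag_part_general(np.diag([1, 2, 3, 4])) == [1, 2, 3, 4]).all()
```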
  3004. class Eye(PrimitiveWithInfer):
  3005. """
  3006. Creates a tensor with ones on the diagonal and zeros in the rest.
  3007. Inputs:
  3008. - **n** (int) - The number of rows of returned tensor. Constant value only.
  3009. - **m** (int) - The number of columns of returned tensor. Constant value only.
3010. - **t** (mindspore.dtype) - MindSpore's dtype, the data type of the returned tensor.
  3011. The data type can be Number.
  3012. Outputs:
3013. Tensor, a tensor with ones on the diagonal and zeros elsewhere. The shape of `output` depends on
3014. the inputs `n` and `m`, and the data type depends on the input `t`.
  3015. Raises:
  3016. TypeError: If `m` or `n` is not an int.
  3017. ValueError: If `m` or `n` is less than 1.
  3018. Supported Platforms:
  3019. ``Ascend`` ``GPU`` ``CPU``
  3020. Examples:
  3021. >>> eye = ops.Eye()
  3022. >>> output = eye(2, 2, mindspore.int32)
  3023. >>> print(output)
  3024. [[1 0]
  3025. [0 1]]
  3026. >>> print(output.dtype)
  3027. Int32
  3028. >>> output = eye(1, 2, mindspore.float64)
  3029. >>> print(output)
  3030. [[1. 0.]]
  3031. >>> print(output.dtype)
  3032. Float64
3033. >>> # to construct an anti-diagonal matrix, reverse the columns of an identity matrix:
  3034. >>> anti_diagonal_input = eye(2, 2, mindspore.int32)
  3035. >>> # Note that ReverseV2 only supports "Ascend" at this time
  3036. >>> reverse = ops.ReverseV2([1])
  3037. >>> anti_diagonal_output = reverse(anti_diagonal_input)
  3038. >>> print(anti_diagonal_output)
  3039. [[0 1]
  3040. [1 0]]
  3041. """
  3042. @prim_attr_register
  3043. def __init__(self):
  3044. """Initialize Eye"""
  3045. def infer_value(self, n, m, t):
  3046. validator.check_positive_int(n, "n", self.name)
  3047. validator.check_positive_int(m, "m", self.name)
  3048. args = {"dtype": t}
  3049. validator.check_types_same_and_valid(args, mstype.number_type + (mstype.bool_,), self.name)
  3050. np_type = mstype.dtype_to_nptype(t)
  3051. ret = np.eye(n, m, dtype=np_type)
  3052. return Tensor(ret)
  3053. class ScatterNd(PrimitiveWithInfer):
  3054. r"""
  3055. Scatters a tensor into a new tensor depending on the specified indices.
  3056. Creates an empty tensor with the given `shape`, and set values by scattering the update tensor
  3057. depending on indices.
  3058. The empty tensor has rank P and `indices` has rank Q where `Q >= 2`.
  3059. `indices` has shape :math:`(i_0, i_1, ..., i_{Q-2}, N)` where `N <= P`.
  3060. The last dimension of `indices` (with length `N` ) indicates slices along the `N` th dimension of the empty tensor.
  3061. `updates` is a tensor of rank `Q-1+P-N`. Its shape is: :math:`(i_0, i_1, ..., i_{Q-2}, shape_N, ..., shape_{P-1})`.
3062. The following figure shows the calculation process of inserting two slices into the first dimension of a
3063. rank-3 tensor with two matrices of new values:
  3064. .. image:: api_img/ScatterNd.png
  3065. Inputs:
  3066. - **indices** (Tensor) - The index of scattering in the new tensor with int32 or int64 data type.
  3067. The rank of indices must be at least 2 and `indices_shape[-1] <= len(shape)`.
  3068. - **updates** (Tensor) - The source Tensor to be scattered.
  3069. It has shape `indices_shape[:-1] + shape[indices_shape[-1]:]`.
3070. - **shape** (tuple[int]) - Defines the shape of the output tensor, has the same data type as indices.
3071. The shape of `shape` is :math:`(x_1, x_2, ..., x_R)`, and the length of 'shape' is greater than or equal to 2.
3072. In other words, the shape of `shape` is at least :math:`(x_1, x_2)`.
3073. And the value of any element in `shape` must be greater than or equal to 1.
3074. In other words, :math:`x_1` >= 1, :math:`x_2` >= 1.
  3075. Outputs:
3076. Tensor, the new tensor, has the same type as `updates` and the same shape as `shape`.
  3077. Raises:
  3078. TypeError: If `shape` is not a tuple.
  3079. ValueError: If any element of `shape` is less than 1.
  3080. Supported Platforms:
  3081. ``Ascend`` ``GPU`` ``CPU``
  3082. Examples:
  3083. >>> op = ops.ScatterNd()
  3084. >>> indices = Tensor(np.array([[0], [2]]), mindspore.int32)
  3085. >>> updates = Tensor(np.array([[[1, 1, 1, 1], [2, 2, 2, 2],
  3086. ... [3, 3, 3, 3], [4, 4, 4, 4]],
  3087. ... [[1, 1, 1, 1], [2, 2, 2, 2],
  3088. ... [3, 3, 3, 3], [4, 4, 4, 4]]]), mindspore.float32)
  3089. >>> shape = (4, 4, 4)
  3090. >>> output = op(indices, updates, shape)
  3091. >>> print(output)
  3092. [[[1. 1. 1. 1.]
  3093. [2. 2. 2. 2.]
  3094. [3. 3. 3. 3.]
  3095. [4. 4. 4. 4.]]
  3096. [[0. 0. 0. 0.]
  3097. [0. 0. 0. 0.]
  3098. [0. 0. 0. 0.]
  3099. [0. 0. 0. 0.]]
  3100. [[1. 1. 1. 1.]
  3101. [2. 2. 2. 2.]
  3102. [3. 3. 3. 3.]
  3103. [4. 4. 4. 4.]]
  3104. [[0. 0. 0. 0.]
  3105. [0. 0. 0. 0.]
  3106. [0. 0. 0. 0.]
  3107. [0. 0. 0. 0.]]]
  3108. >>> indices = Tensor(np.array([[0, 1], [1, 1]]), mindspore.int32)
  3109. >>> updates = Tensor(np.array([3.2, 1.1]), mindspore.float32)
  3110. >>> shape = (3, 3)
  3111. >>> output = op(indices, updates, shape)
3112. >>> # To facilitate understanding, the operator's computation is explained step by step:
3113. >>> # Step 1: Generate an empty Tensor with the specified shape
  3114. >>> # [
  3115. >>> # [0. 0. 0.]
  3116. >>> # [0. 0. 0.]
  3117. >>> # [0. 0. 0.]
  3118. >>> # ]
3119. >>> # Step 2: Modify the data at the specified positions according to the indices
3120. >>> # The 0th row of indices is [0, 1], and the 0th row of updates is 3.2,
3121. >>> # which means the element of the empty tensor at row 0, column 1 is set to 3.2
  3122. >>> # [
  3123. >>> # [0. 3.2. 0.]
  3124. >>> # [0. 0. 0.]
  3125. >>> # [0. 0. 0.]
  3126. >>> # ]
3127. >>> # The 1st row of indices is [1, 1], and the 1st row of updates is 1.1,
3128. >>> # which means the element of the empty tensor at row 1, column 1 is set to 1.1
  3129. >>> # [
  3130. >>> # [0. 3.2. 0.]
  3131. >>> # [0. 1.1 0.]
  3132. >>> # [0. 0. 0.]
  3133. >>> # ]
  3134. >>> # The final result is as follows:
  3135. >>> print(output)
  3136. [[0. 3.2 0.]
  3137. [0. 1.1 0.]
  3138. [0. 0. 0.]]
  3139. """
  3140. @prim_attr_register
  3141. def __init__(self):
  3142. """Initialize ScatterNd"""
  3143. self.init_prim_io_names(inputs=['indices', 'update', 'shape'], outputs=['output'])
  3144. def __infer__(self, indices, update, shape):
  3145. shp = shape['value']
  3146. validator.check_subclass("update_dtype", update['dtype'], mstype.tensor, self.name)
  3147. validator.check_tensor_dtype_valid("indices", indices['dtype'], [mstype.int32, mstype.int64], self.name)
  3148. validator.check_value_type("shape", shp, [tuple], self.name)
  3149. for i, x in enumerate(shp):
  3150. validator.check_positive_int(x, f'shape[{i}]', self.name)
  3151. indices_shape, update_shape = indices["shape"], update["shape"]
  3152. if indices_shape[0] != update_shape[0]:
  3153. raise ValueError(f"For '{self.name}', the first shape of 'indices' must be the same as the first shape of "
  3154. f"'updates', but got the first shape of 'indices': {indices_shape[0]}, "
  3155. f"the first shape of 'updates': {update_shape[0]}.")
  3156. return {'shape': shp,
  3157. 'dtype': update['dtype'],
  3158. 'value': None}
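A NumPy sketch of the scatter described above (not the device kernel; `np.add.at` accumulates updates aimed at the same position, which mirrors TensorFlow's `scatter_nd` convention and is an assumption here, since the docstring examples use distinct indices):

```python
import numpy as np

def scatter_nd_np(indices, updates, shape):
    # Start from zeros, then write each update (scalar or slice) at the
    # position named by the matching index vector along the last axis
    # of `indices`.
    out = np.zeros(shape, dtype=updates.dtype)
    np.add.at(out, tuple(np.moveaxis(indices, -1, 0)), updates)
    return out

indices = np.array([[0, 1], [1, 1]])
updates = np.array([3.2, 1.1], dtype=np.float32)
out = scatter_nd_np(indices, updates, (3, 3))  # matches the second docstring example
```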
  3159. class ResizeNearestNeighbor(PrimitiveWithInfer):
  3160. r"""
  3161. Resizes the input tensor by using the nearest neighbor algorithm.
  3162. Resizes the input tensor to a given size by using the nearest neighbor algorithm. The nearest
  3163. neighbor algorithm selects the value of the nearest point and does not consider the
  3164. values of neighboring points at all, yielding a piecewise-constant interpolant.
  3165. Args:
  3166. size (Union[tuple, list]): The target size. The dimension of size must be 2.
  3167. align_corners (bool): Whether the centers of the 4 corner pixels of the input
  3168. and output tensors are aligned. Default: False.
  3169. Inputs:
  3170. - **input_x** (Tensor) - The input tensor. The shape of the tensor is :math:`(N, C, H, W)`.
  3171. Outputs:
  3172. Tensor, the shape of the output tensor is :math:`(N, C, NEW\_H, NEW\_W)`.
  3173. The data type is the same as the `input_x`.
  3174. Raises:
  3175. TypeError: If `size` is neither tuple nor list.
  3176. TypeError: If `align_corners` is not a bool.
  3177. ValueError: If length of `size` is not equal to 2.
  3178. Supported Platforms:
  3179. ``Ascend`` ``GPU`` ``CPU``
  3180. Examples:
  3181. >>> input_tensor = Tensor(np.array([[[[-0.1, 0.3, 3.6], [0.4, 0.5, -3.2]]]]), mindspore.float32)
  3182. >>> resize = ops.ResizeNearestNeighbor((2, 2))
  3183. >>> output = resize(input_tensor)
  3184. >>> print(output)
  3185. [[[[-0.1 0.3]
  3186. [ 0.4 0.5]]]]
  3187. """
  3188. @prim_attr_register
  3189. def __init__(self, size, align_corners=False):
  3190. """Initialize ResizeNearestNeighbor"""
  3191. validator.check_value_type("size", size, [tuple, list], self.name)
  3192. validator.check_value_type("align_corners", align_corners, [bool], self.name)
  3193. validator.check_equal_int(len(size), 2, "length of size", self.name)
  3194. for i, value in enumerate(size):
  3195. validator.check_non_negative_int(value, f'{i}th value of size', self.name)
  3196. self.init_prim_io_names(inputs=['image_in'], outputs=['image_out'])
  3197. def infer_shape(self, x_shape):
  3198. validator.check('the dimension of input_x', len(x_shape), '', 4, Rel.EQ, self.name)
  3199. return tuple(x_shape)[:-2] + tuple(super().get_attr_dict()['size'])
  3200. def infer_dtype(self, x_dtype):
  3201. validator.check_tensor_dtype_valid("x", x_dtype, mstype.number_type, self.name)
  3202. return x_dtype
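The index mapping behind nearest-neighbor resizing can be sketched with NumPy (for illustration only; the exact rounding rule of the device kernel, especially for `align_corners=False`, is an assumption here):

```python
import numpy as np

def resize_nearest(x, size, align_corners=False):
    # x has layout (N, C, H, W); pick one source row/column per output
    # row/column, with no interpolation between neighbors.
    h, w = x.shape[-2:]
    new_h, new_w = size
    if align_corners and new_h > 1 and new_w > 1:
        # Corner pixels of input and output coincide.
        rows = np.round(np.arange(new_h) * (h - 1) / (new_h - 1)).astype(int)
        cols = np.round(np.arange(new_w) * (w - 1) / (new_w - 1)).astype(int)
    else:
        rows = np.floor(np.arange(new_h) * h / new_h).astype(int)
        cols = np.floor(np.arange(new_w) * w / new_w).astype(int)
    return x[..., rows[:, None], cols[None, :]]

x = np.array([[[[-0.1, 0.3, 3.6], [0.4, 0.5, -3.2]]]], dtype=np.float32)
out = resize_nearest(x, (2, 2))  # matches the docstring example above
```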
  3203. class GatherNd(PrimitiveWithInfer):
  3204. r"""
  3205. Gathers slices from a tensor by indices.
  3206. Using given indices to gather slices from a tensor with a specified shape.
3207. `indices` is a K-dimensional integer tensor. Treat it as a (K-1)-dimensional tensor, each element of which
  3208. defines a slice of `input_x`:
  3209. .. math::
  3210. output[(i_0, ..., i_{K-2})] = input\_x[indices[(i_0, ..., i_{K-2})]]
3211. The last dimension of `indices` cannot exceed the rank of `input_x`:
  3212. :math:`indices.shape[-1] <= input\_x.rank`.
  3213. Inputs:
  3214. - **input_x** (Tensor) - The target tensor to gather values.
3215. The shape is :math:`(N,*)` where :math:`*` means any number of additional dimensions.
  3216. - **indices** (Tensor) - The index tensor, with int32 or int64 data type.
3217. The last dimension of `indices` must be less than or equal to the rank of `input_x`.
  3218. Outputs:
  3219. Tensor, has the same type as `input_x` and the shape is indices_shape[:-1] + x_shape[indices_shape[-1]:].
  3220. Raises:
  3221. ValueError: If length of shape of `input_x` is less than the last dimension of `indices`.
  3222. Supported Platforms:
  3223. ``Ascend`` ``GPU`` ``CPU``
  3224. Examples:
  3225. >>> op = ops.GatherNd()
  3226. >>> input_x = Tensor(np.array([[-0.1, 0.3, 3.6], [0.4, 0.5, -3.2]]), mindspore.float32)
  3227. >>> indices = Tensor(np.array([[0, 0], [1, 1]]), mindspore.int32)
  3228. >>> output = op(input_x, indices)
  3229. >>> print(output)
  3230. [-0.1 0.5]
  3231. """
  3232. @prim_attr_register
  3233. def __init__(self):
  3234. """Initialize GatherNd"""
  3235. self.init_prim_io_names(inputs=['input_x', 'indices'], outputs=['y'])
  3236. def infer_shape(self, x_shape, indices_shape):
  3237. validator.check('the dimension of x', len(x_shape),
  3238. 'the dimension of indices', indices_shape[-1], Rel.GE, self.name)
  3239. return indices_shape[:-1] + x_shape[indices_shape[-1]:]
  3240. def infer_dtype(self, x_dtype, indices_dtype):
  3241. validator.check_tensor_dtype_valid("indices", indices_dtype, mstype.int_type, self.name)
  3242. return x_dtype
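The output-shape rule `indices_shape[:-1] + x_shape[indices_shape[-1]:]` used by `infer_shape` can be sketched directly in NumPy (illustrative, not the kernel):

```python
import numpy as np

def gather_nd_np(x, indices):
    # Each index vector along the last axis of `indices` selects a
    # (possibly 0-d) slice of x.
    out_shape = indices.shape[:-1] + x.shape[indices.shape[-1]:]
    flat = indices.reshape(-1, indices.shape[-1])
    gathered = [x[tuple(idx)] for idx in flat]
    return np.array(gathered).reshape(out_shape)

x = np.array([[-0.1, 0.3, 3.6], [0.4, 0.5, -3.2]], dtype=np.float32)
idx = np.array([[0, 0], [1, 1]])
assert np.allclose(gather_nd_np(x, idx), [-0.1, 0.5])  # docstring example
```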
  3243. class TensorScatterUpdate(PrimitiveWithInfer):
  3244. """
  3245. Creates a new tensor by updating the positions in `input_x` indicated by
  3246. `indices`, with values from `update`. This operation is almost equivalent to using
  3247. ScatterNd, except that the updates are applied on `input_x` instead of a zero tensor.
  3248. `indices` must have rank at least 2, the last axis is the depth of each index
  3249. vectors. For each index vector, there must be a corresponding value in `update`. If
  3250. the depth of each index tensor matches the rank of `input_x`, then each index
  3251. vector corresponds to a scalar in `input_x` and each update updates a scalar. If
  3252. the depth of each index tensor is less than the rank of `input_x`, then each index
  3253. vector corresponds to a slice in `input_x`, and each update updates a slice.
  3254. The order in which updates are applied is nondeterministic, meaning that if there
  3255. are multiple index vectors in `indices` that correspond to the same position, the
  3256. value of that position in the output will be nondeterministic.
  3257. Inputs:
  3258. - **input_x** (Tensor) - The target tensor. The dimension of input_x must be no less than indices.shape[-1].
3259. The shape is :math:`(N,*)` where :math:`*` means any number of additional dimensions.
  3260. The data type is Number.
  3261. - **indices** (Tensor) - The index of input tensor whose data type is int32 or int64.
  3262. The rank must be at least 2.
  3263. - **update** (Tensor) - The tensor to update the input tensor, has the same type as input,
  3264. and update.shape = indices.shape[:-1] + input_x.shape[indices.shape[-1]:].
  3265. Outputs:
  3266. Tensor, has the same shape and type as `input_x`.
  3267. Raises:
  3268. TypeError: If dtype of `indices` is neither int32 nor int64.
  3269. ValueError: If length of shape of `input_x` is less than the last dimension of shape of `indices`.
3270. ValueError: If the shape of `update` is not equal to indices_shape[:-1] + input_x_shape[indices_shape[-1]:].
  3271. Supported Platforms:
  3272. ``Ascend`` ``GPU`` ``CPU``
  3273. Examples:
  3274. >>> input_x = Tensor(np.array([[-0.1, 0.3, 3.6], [0.4, 0.5, -3.2]]), mindspore.float32)
  3275. >>> indices = Tensor(np.array([[0, 0], [1, 1]]), mindspore.int32)
  3276. >>> update = Tensor(np.array([1.0, 2.2]), mindspore.float32)
  3277. >>> op = ops.TensorScatterUpdate()
  3278. >>> output = op(input_x, indices, update)
  3279. >>> print(output)
  3280. [[ 1. 0.3 3.6]
  3281. [ 0.4 2.2 -3.2]]
  3282. """
  3283. @prim_attr_register
  3284. def __init__(self):
  3285. self.init_prim_io_names(inputs=['input_x', 'indices', 'updates'], outputs=['y'])
  3286. def infer_shape(self, input_x_shape, indices_shape, updates_shape):
  3287. if len(indices_shape) < 2:
  3288. raise ValueError(f"For '{self.name}', the dimension of 'indices' cannot be less than 2,"
  3289. f" but got {len(indices_shape)}.")
  3290. if indices_shape[-1] > len(input_x_shape):
  3291. raise ValueError(f"For '{self.name}', the last dimension of 'indices' must be less than or equal to "
  3292. f"the dimension of 'input_x', but got the "
  3293. f"last dimension of 'indices': {indices_shape[-1]} and the dimension of 'input_x': "
  3294. f"{len(input_x_shape)}.")
  3295. updates_shape_check = indices_shape[:-1] + input_x_shape[indices_shape[-1]:]
  3296. if updates_shape_check != updates_shape:
  3297. raise ValueError(f"For '{self.name}', the shape of 'update' must be equal to updates_shape_check, "
  3298. f"where updates_shape_check = indices_shape[:-1] + input_x_shape[indices_shape[-1]:] "
  3299. f"but got the shape of 'update': {updates_shape}, "
  3300. f"updates_shape_check: {updates_shape_check}, indices_shape: {indices_shape} and "
  3301. f"input_x_shape: {input_x_shape}. Please check input_x_shape and indices_shape.")
  3302. return input_x_shape
  3303. def infer_dtype(self, input_x_dtype, indices_dtype, updates_dtype):
  3304. validator.check_tensor_dtype_valid('indices', indices_dtype, [mstype.int32, mstype.int64], self.name)
  3305. args = {"input_x": input_x_dtype, "updates": updates_dtype}
  3306. validator.check_tensors_dtypes_same_and_valid(args, (mstype.bool_,) + mstype.number_type, self.name)
  3307. return input_x_dtype
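For the common rank-2 `indices` case, the out-of-place update can be sketched with a NumPy stand-in (when index vectors repeat, the real op's result is nondeterministic, whereas this loop is last-writer-wins):

```python
import numpy as np

def tensor_scatter_update_np(x, indices, updates):
    # Copy first: unlike the Scatter* ops on Parameter, the input
    # tensor itself is left untouched.
    out = x.copy()
    for idx, upd in zip(indices, updates):
        out[tuple(idx)] = upd
    return out

x = np.array([[-0.1, 0.3, 3.6], [0.4, 0.5, -3.2]], dtype=np.float32)
indices = np.array([[0, 0], [1, 1]])
updates = np.array([1.0, 2.2], dtype=np.float32)
out = tensor_scatter_update_np(x, indices, updates)  # docstring example
```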
  3308. class TensorScatterAdd(PrimitiveWithInfer):
  3309. """
  3310. Creates a new tensor by adding the values from the positions in `input_x` indicated by
  3311. `indices`, with values from `updates`. When multiple values are given for the same
  3312. index, the updated result will be the sum of all values. This operation is almost
  3313. equivalent to using ScatterNdAdd, except that the updates are applied on `Tensor`
  3314. instead of `Parameter`.
  3315. The last axis of `indices` is the depth of each index vectors. For each index vector,
  3316. there must be a corresponding value in `updates`. The shape of `updates` should be
  3317. equal to the shape of `input_x[indices]`. For more details, see use cases.
  3318. Note:
  3319. If some values of the `indices` are out of bound, instead of raising an index error,
  3320. the corresponding `updates` will not be updated to `input_x`.
  3321. Inputs:
  3322. - **input_x** (Tensor) - The target tensor. The dimension of input_x must be no less than indices.shape[-1].
  3323. - **indices** (Tensor) - The index of input tensor whose data type is int32 or int64.
  3324. The rank must be at least 2.
  3325. - **updates** (Tensor) - The tensor to update the input tensor, has the same type as input,
  3326. and updates.shape should be equal to indices.shape[:-1] + input_x.shape[indices.shape[-1]:].
  3327. Outputs:
  3328. Tensor, has the same shape and type as `input_x`.
  3329. Raises:
  3330. TypeError: If dtype of `indices` is neither int32 nor int64.
  3331. ValueError: If length of shape of `input_x` is less than the last dimension of shape of `indices`.
  3332. Supported Platforms:
  3333. ``GPU``
  3334. Examples:
  3335. >>> input_x = Tensor(np.array([[-0.1, 0.3, 3.6], [0.4, 0.5, -3.2]]), mindspore.float32)
  3336. >>> indices = Tensor(np.array([[0, 0], [0, 0]]), mindspore.int32)
  3337. >>> updates = Tensor(np.array([1.0, 2.2]), mindspore.float32)
  3338. >>> # Next, demonstrate the approximate operation process of this operator:
  3339. >>> # 1, indices[0] = [0, 0], indices[1] = [0, 0]
  3340. >>> # 2, And input_x[0, 0] = -0.1
  3341. >>> # 3, So input_x[indices] = [-0.1, -0.1]
  3342. >>> # 4, Satisfy the above formula: input_x[indices].shape=(2) == updates.shape=(2)
  3343. >>> op = ops.TensorScatterAdd()
  3344. >>> # 5, Perform the addition operation for the first time:
  3345. >>> # first_input_x = input_x[0][0] + updates[0] = [[0.9, 0.3, 3.6], [0.4, 0.5, -3.2]]
  3346. >>> # 6, Perform the addition operation for the second time:
  3347. >>> # second_input_x = input_x[0][0] + updates[1] = [[3.1, 0.3, 3.6], [0.4, 0.5, -3.2]]
  3348. >>> output = op(input_x, indices, updates)
  3349. >>> print(output)
  3350. [[ 3.1 0.3 3.6]
  3351. [ 0.4 0.5 -3.2]]
  3352. """
  3353. @prim_attr_register
  3354. def __init__(self):
  3355. self.init_prim_io_names(inputs=['input_x', 'indices', 'updates'], outputs=['y'])
  3356. def infer_shape(self, input_x_shape, indices_shape, updates_shape):
  3357. if len(indices_shape) < 2:
  3358. raise ValueError(f"For '{self.name}', the dimension of 'indices' cannot be less than 2,"
  3359. f" but got {len(indices_shape)}.")
  3360. if indices_shape[-1] > len(input_x_shape):
  3361. raise ValueError(f"For '{self.name}', the last dimension of 'indices' must be less than or equal to "
  3362. f"the dimension of 'input_x', but got the "
  3363. f"last dimension of 'indices': {indices_shape[-1]} and the dimension of 'input_x': "
  3364. f"{len(input_x_shape)}.")
  3365. updates_shape_check = indices_shape[:-1] + input_x_shape[indices_shape[-1]:]
  3366. if updates_shape_check != updates_shape:
  3367. raise ValueError(f"For '{self.name}', the shape of 'update' must be equal to updates_shape_check, "
  3368. f"where updates_shape_check = indices_shape[:-1] + input_x_shape[indices_shape[-1]:] "
  3369. f"but got the shape of 'update': {updates_shape}, "
  3370. f"updates_shape_check: {updates_shape_check}, indices_shape: {indices_shape} and "
  3371. f"input_x_shape: {input_x_shape}. Please check input_x_shape and indices_shape.")
  3372. return input_x_shape
  3373. def infer_dtype(self, input_x_dtype, indices_dtype, updates_dtype):
  3374. validator.check_tensor_dtype_valid('indices', indices_dtype, [mstype.int32, mstype.int64], self.name)
  3375. args = {"input_x": input_x_dtype, "updates": updates_dtype}
  3376. validator.check_tensors_dtypes_same_and_valid(args, (mstype.bool_,) + mstype.number_type, self.name)
  3377. return input_x_dtype
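`np.add.at` reproduces the duplicate-summing rule described above, so an out-of-place NumPy sketch (not the GPU kernel) looks like:

```python
import numpy as np

def tensor_scatter_add_np(x, indices, updates):
    out = x.copy()
    # np.add.at is an unbuffered += : every update aimed at the same
    # position is accumulated, matching the "sum of all values" rule.
    np.add.at(out, tuple(np.moveaxis(indices, -1, 0)), updates)
    return out

x = np.array([[-0.1, 0.3, 3.6], [0.4, 0.5, -3.2]], dtype=np.float32)
indices = np.array([[0, 0], [0, 0]])
updates = np.array([1.0, 2.2], dtype=np.float32)
out = tensor_scatter_add_np(x, indices, updates)  # out[0, 0] == -0.1 + 1.0 + 2.2
```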
  3378. class ScatterUpdate(_ScatterOpDynamic):
  3379. r"""
  3380. Updates tensor values by using input indices and value.
  3381. Using given values to update tensor value, along with the input indices.
  3382. for each `i, ..., j` in `indices.shape`:
  3383. .. math::
  3384. \text{input_x}[\text{indices}[i, ..., j], :] = \text{updates}[i, ..., j, :]
  3385. Inputs of `input_x` and `updates` comply with the implicit type conversion rules to make the data types consistent.
  3386. If they have different data types, lower priority data type will be converted to
  3387. the relatively highest priority data type.
  3388. Args:
3389. use_locking (bool): Whether to protect the assignment by a lock. Default: True.
  3390. Inputs:
  3391. - **input_x** (Parameter) - The target tensor, with data type of Parameter.
3392. The shape is :math:`(N,*)` where :math:`*` means any number of additional dimensions.
  3393. - **indices** (Tensor) - The index of input tensor. With int32 data type.
  3394. If there are duplicates in indices, the order for updating is undefined.
  3395. - **updates** (Tensor) - The tensor to update the input tensor, has the same type as input,
  3396. and updates.shape = indices.shape + input_x.shape[1:].
  3397. Outputs:
  3398. Tensor, has the same shape and type as `input_x`.
  3399. Raises:
  3400. TypeError: If `use_locking` is not a bool.
3401. TypeError: If dtype of `indices` is not int32.
  3402. RuntimeError: If the data type of `input_x` and `updates` conversion of Parameter
  3403. is required when data type conversion of Parameter is not supported.
  3404. Supported Platforms:
  3405. ``Ascend`` ``GPU`` ``CPU``
  3406. Examples:
  3407. >>> np_x = np.array([[-0.1, 0.3, 3.6], [0.4, 0.5, -3.2]])
  3408. >>> input_x = mindspore.Parameter(Tensor(np_x, mindspore.float32), name="x")
  3409. >>> indices = Tensor(np.array([0, 1]), mindspore.int32)
  3410. >>> np_updates = np.array([[2.0, 1.2, 1.0], [3.0, 1.2, 1.0]])
  3411. >>> updates = Tensor(np_updates, mindspore.float32)
  3412. >>> op = ops.ScatterUpdate()
  3413. >>> output = op(input_x, indices, updates)
  3414. >>> print(output)
  3415. [[2. 1.2 1.]
  3416. [3. 1.2 1.]]
  3417. """
  3418. @prim_attr_register
  3419. def __init__(self, use_locking=True):
  3420. """Initialize ScatterUpdate"""
  3421. validator.check_value_type('use_locking', use_locking, [bool], self.name)
  3422. self.init_prim_io_names(inputs=['x', 'indices', 'updates'], outputs=['y'])
  3423. self.add_prim_attr('side_effect_mem', True)
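

# A minimal NumPy sketch of the ScatterUpdate semantics documented above
# (a reference illustration only, not the MindSpore implementation; the
# helper name `_scatter_update_ref` is hypothetical). Slices of `x` selected
# by `indices` are overwritten by the corresponding slices of `updates`;
# with duplicate indices the last write wins, matching the "order for
# updating is undefined" caveat.
import numpy as np


def _scatter_update_ref(x, indices, updates):
    out = np.array(x, dtype=float, copy=True)
    idx = np.asarray(indices)
    upd = np.asarray(updates, dtype=float)
    for pos in np.ndindex(idx.shape):
        out[idx[pos]] = upd[pos]  # overwrite the selected row/slice
    return out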


class ScatterNdUpdate(_ScatterNdOp):
    r"""
    Updates tensor values by using input indices and value.

    Using given values to update tensor value, along with the input indices.

    `input_x` has rank P and `indices` has rank Q, where `Q >= 2`.

    `indices` has shape :math:`(i_0, i_1, ..., i_{Q-2}, N)` where `N <= P`.

    The last dimension of `indices` (with length `N`) indicates slices along the `N` th dimension of `input_x`.

    `updates` is a tensor of rank `Q-1+P-N`. Its shape is:
    :math:`(i_0, i_1, ..., i_{Q-2}, x\_shape_N, ..., x\_shape_{P-1})`.

    Inputs of `input_x` and `updates` comply with the implicit type conversion rules to make the data types
    consistent. If they have different data types, the lower priority data type will be converted to the
    relatively highest priority data type.

    Args:
        use_locking (bool): Whether to protect the assignment by a lock. Default: True.

    Inputs:
        - **input_x** (Parameter) - The target tensor, with data type of Parameter.
          The shape is :math:`(N, *)` where :math:`*` means any number of additional dimensions.
        - **indices** (Tensor) - The index of the input tensor, with int32 data type.
        - **updates** (Tensor) - The tensor to be updated to the input tensor, has the same type as `input_x`.
          The shape is `indices_shape[:-1] + x_shape[indices_shape[-1]:]`.

    Outputs:
        Tensor, has the same shape and type as `input_x`.

    Raises:
        TypeError: If `use_locking` is not a bool.
        TypeError: If `indices` is not an int32.
        RuntimeError: If a data type conversion between `input_x` and `updates` is required
            but data type conversion of Parameter is not supported.

    Supported Platforms:
        ``Ascend`` ``GPU`` ``CPU``

    Examples:
        >>> np_x = np.array([[-0.1, 0.3, 3.6], [0.4, 0.5, -3.2]])
        >>> input_x = mindspore.Parameter(Tensor(np_x, mindspore.float32), name="x")
        >>> indices = Tensor(np.array([[0, 0], [1, 1]]), mindspore.int32)
        >>> updates = Tensor(np.array([1.0, 2.2]), mindspore.float32)
        >>> op = ops.ScatterNdUpdate()
        >>> output = op(input_x, indices, updates)
        >>> print(output)
        [[1. 0.3 3.6]
         [0.4 2.2 -3.2]]
    """

    @prim_attr_register
    def __init__(self, use_locking=True):
        """Initialize ScatterNdUpdate"""
        validator.check_value_type('use_locking', use_locking, [bool], self.name)
        self.init_prim_io_names(inputs=['x', 'indices', 'value'], outputs=['y'])
        self.add_prim_attr('side_effect_mem', True)

    def infer_dtype(self, x_dtype, indices_dtype, value_dtype):
        validator.check_tensor_dtype_valid('indices', indices_dtype, [mstype.int32], self.name)
        args = {"x": x_dtype, "value": value_dtype}
        validator.check_tensors_dtypes_same_and_valid(args, (mstype.bool_,) + mstype.number_type, self.name)
        return x_dtype
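

# A NumPy sketch of the ScatterNdUpdate indexing rule documented above
# (a reference illustration only, not the MindSpore implementation; the
# helper name `_scatter_nd_update_ref` is hypothetical). The last axis of
# `indices` addresses an element or slice of `x`, and the matching leading
# axes of `updates` supply the new values.
import numpy as np


def _scatter_nd_update_ref(x, indices, updates):
    out = np.array(x, dtype=float, copy=True)
    idx = np.asarray(indices)
    n = idx.shape[-1]                # length N of each index tuple, N <= rank of x
    flat_idx = idx.reshape(-1, n)
    flat_upd = np.asarray(updates, dtype=float).reshape((flat_idx.shape[0],) + out.shape[n:])
    for i, ind in enumerate(flat_idx):
        out[tuple(ind)] = flat_upd[i]  # write the element/slice addressed by `ind`
    return out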


class ScatterMax(_ScatterOp):
    r"""
    Updates the value of the input tensor through the maximum operation.

    Using given values to update tensor value through the max operation, along with the input indices.
    This operation outputs the `input_x` after the update is done, which makes it convenient to use the
    updated value.

    for each `i, ..., j` in `indices.shape`:

    .. math::

        \text{input_x}[\text{indices}[i, ..., j], :]
        = max(\text{input_x}[\text{indices}[i, ..., j], :], \text{updates}[i, ..., j, :])

    Inputs of `input_x` and `updates` comply with the implicit type conversion rules to make the data types
    consistent. If they have different data types, the lower priority data type will be converted to the
    relatively highest priority data type.

    Args:
        use_locking (bool): Whether to protect the assignment by a lock. Default: True.

    Inputs:
        - **input_x** (Parameter) - The target tensor, with data type of Parameter.
          The shape is :math:`(N, *)` where :math:`*` means any number of additional dimensions.
        - **indices** (Tensor) - The index to do max operation whose data type must be mindspore.int32.
        - **updates** (Tensor) - The tensor that performs the maximum operation with `input_x`,
          the data type is the same as `input_x`, the shape is `indices_shape + x_shape[1:]`.

    Outputs:
        Tensor, the updated `input_x`, has the same shape and type as `input_x`.

    Raises:
        TypeError: If `use_locking` is not a bool.
        TypeError: If `indices` is not an int32.
        ValueError: If the shape of `updates` is not equal to `indices_shape + x_shape[1:]`.
        RuntimeError: If a data type conversion between `input_x` and `updates` is required
            but data type conversion of Parameter is not supported.

    Supported Platforms:
        ``Ascend`` ``CPU``

    Examples:
        >>> input_x = Parameter(Tensor(np.array([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]), mindspore.float32),
        ...                     name="input_x")
        >>> indices = Tensor(np.array([[0, 0], [1, 1]]), mindspore.int32)
        >>> updates = Tensor(np.ones([2, 2, 3]) * 88, mindspore.float32)
        >>> scatter_max = ops.ScatterMax()
        >>> output = scatter_max(input_x, indices, updates)
        >>> print(output)
        [[88. 88. 88.]
         [88. 88. 88.]]
    """
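

# NumPy's unbuffered `np.maximum.at` gives a compact reference for the
# ScatterMax rule above (a sketch only, not the MindSpore implementation;
# the helper name `_scatter_max_ref` is hypothetical). Because `ufunc.at`
# is unbuffered, duplicate indices are each applied in turn.
import numpy as np


def _scatter_max_ref(x, indices, updates):
    out = np.array(x, dtype=float, copy=True)
    idx = np.asarray(indices).reshape(-1)
    upd = np.asarray(updates, dtype=float).reshape((-1,) + out.shape[1:])
    np.maximum.at(out, idx, upd)  # in-place elementwise maximum per index
    return out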


class ScatterMin(_ScatterOp):
    r"""
    Updates the value of the input tensor through the minimum operation.

    Using given values to update tensor value through the min operation, along with the input indices.
    This operation outputs the `input_x` after the update is done, which makes it convenient to use the
    updated value.

    for each `i, ..., j` in `indices.shape`:

    .. math::

        \text{input_x}[\text{indices}[i, ..., j], :]
        = min(\text{input_x}[\text{indices}[i, ..., j], :], \text{updates}[i, ..., j, :])

    Inputs of `input_x` and `updates` comply with the implicit type conversion rules to make the data types
    consistent. If they have different data types, the lower priority data type will be converted to the
    relatively highest priority data type.

    Args:
        use_locking (bool): Whether to protect the assignment by a lock. Default: False.

    Inputs:
        - **input_x** (Parameter) - The target tensor, with data type of Parameter.
          The shape is :math:`(N, *)` where :math:`*` means any number of additional dimensions.
        - **indices** (Tensor) - The index to do min operation whose data type must be mindspore.int32.
        - **updates** (Tensor) - The tensor that performs the minimum operation with `input_x`,
          the data type is the same as `input_x`, the shape is `indices_shape + x_shape[1:]`.

    Outputs:
        Tensor, the updated `input_x`, has the same shape and type as `input_x`.

    Raises:
        TypeError: If `use_locking` is not a bool.
        TypeError: If `indices` is not an int32.
        ValueError: If the shape of `updates` is not equal to `indices_shape + x_shape[1:]`.
        RuntimeError: If a data type conversion between `input_x` and `updates` is required
            but data type conversion of Parameter is not supported.

    Supported Platforms:
        ``Ascend`` ``CPU``

    Examples:
        >>> input_x = Parameter(Tensor(np.array([[0.0, 1.0, 2.0], [0.0, 0.0, 0.0]]), mindspore.float32),
        ...                     name="input_x")
        >>> indices = Tensor(np.array([[0, 0], [1, 1]]), mindspore.int32)
        >>> update = Tensor(np.ones([2, 2, 3]), mindspore.float32)
        >>> scatter_min = ops.ScatterMin()
        >>> output = scatter_min(input_x, indices, update)
        >>> print(output)
        [[0. 1. 1.]
         [0. 0. 0.]]
    """


class ScatterAdd(_ScatterOpDynamic):
    r"""
    Updates the value of the input tensor through the addition operation.

    Using given values to update tensor value through the add operation, along with the input indices.
    This operation outputs the `input_x` after the update is done, which makes it convenient to use the
    updated value.

    for each `i, ..., j` in `indices.shape`:

    .. math::

        \text{input_x}[\text{indices}[i, ..., j], :] \mathrel{+}= \text{updates}[i, ..., j, :]

    Inputs of `input_x` and `updates` comply with the implicit type conversion rules to make the data types
    consistent. If they have different data types, the lower priority data type will be converted to the
    relatively highest priority data type.

    Note:
        This is an in-place update operator. Therefore, the `input_x` will be updated after the operation
        is completed.

    Args:
        use_locking (bool): Whether to protect the assignment by a lock. Default: False.

    Inputs:
        - **input_x** (Parameter) - The target tensor, with data type of Parameter.
          The shape is :math:`(N, *)` where :math:`*` means any number of additional dimensions.
        - **indices** (Tensor) - The index to do add operation whose data type must be mindspore.int32.
        - **updates** (Tensor) - The tensor that performs the addition with `input_x`,
          the data type is the same as `input_x`, the shape is `indices_shape + x_shape[1:]`.

    Outputs:
        Tensor, the updated `input_x`, has the same shape and type as `input_x`.

    Raises:
        TypeError: If `use_locking` is not a bool.
        TypeError: If `indices` is not an int32.
        ValueError: If the shape of `updates` is not equal to `indices_shape + x_shape[1:]`.
        RuntimeError: If a data type conversion between `input_x` and `updates` is required
            but data type conversion of Parameter is not supported.

    Supported Platforms:
        ``Ascend`` ``GPU`` ``CPU``

    Examples:
        >>> input_x = Parameter(Tensor(np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]), mindspore.float32), name="x")
        >>> indices = Tensor(np.array([[0, 1], [1, 1]]), mindspore.int32)
        >>> updates = Tensor(np.ones([2, 2, 3]), mindspore.float32)
        >>> scatter_add = ops.ScatterAdd()
        >>> output = scatter_add(input_x, indices, updates)
        >>> print(output)
        [[1. 1. 1.]
         [3. 3. 3.]]
        >>> # input_x is updated in place after the operation is completed, so it needs to be re-initialized.
        >>> input_x = Parameter(Tensor(np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]), mindspore.float32), name="x")
        >>> # for indices = [[0, 1], [1, 1]]
        >>> # step 1: [0, 1]
        >>> # input_x[0] = [0.0, 0.0, 0.0] + [1.0, 1.0, 1.0] = [1.0, 1.0, 1.0]
        >>> # input_x[1] = [0.0, 0.0, 0.0] + [3.0, 3.0, 3.0] = [3.0, 3.0, 3.0]
        >>> # step 2: [1, 1]
        >>> # input_x[1] = [3.0, 3.0, 3.0] + [7.0, 7.0, 7.0] = [10.0, 10.0, 10.0]
        >>> # input_x[1] = [10.0, 10.0, 10.0] + [9.0, 9.0, 9.0] = [19.0, 19.0, 19.0]
        >>> indices = Tensor(np.array([[0, 1], [1, 1]]), mindspore.int32)
        >>> updates = Tensor(np.array([[[1.0, 1.0, 1.0], [3.0, 3.0, 3.0]],
        ...                            [[7.0, 7.0, 7.0], [9.0, 9.0, 9.0]]]), mindspore.float32)
        >>> scatter_add = ops.ScatterAdd()
        >>> output = scatter_add(input_x, indices, updates)
        >>> print(output)
        [[ 1.  1.  1.]
         [19. 19. 19.]]
        >>> # input_x is updated in place after the operation is completed, so it needs to be re-initialized.
        >>> input_x = Parameter(Tensor(np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]), mindspore.float32), name="x")
        >>> # for indices = [[1, 0], [1, 1]]
        >>> # step 1: [1, 0]
        >>> # input_x[0] = [0.0, 0.0, 0.0] + [3.0, 3.0, 3.0] = [3.0, 3.0, 3.0]
        >>> # input_x[1] = [0.0, 0.0, 0.0] + [1.0, 1.0, 1.0] = [1.0, 1.0, 1.0]
        >>> # step 2: [1, 1]
        >>> # input_x[1] = [1.0, 1.0, 1.0] + [7.0, 7.0, 7.0] = [8.0, 8.0, 8.0]
        >>> # input_x[1] = [8.0, 8.0, 8.0] + [9.0, 9.0, 9.0] = [17.0, 17.0, 17.0]
        >>> indices = Tensor(np.array([[1, 0], [1, 1]]), mindspore.int32)
        >>> updates = Tensor(np.array([[[1.0, 1.0, 1.0], [3.0, 3.0, 3.0]],
        ...                            [[7.0, 7.0, 7.0], [9.0, 9.0, 9.0]]]), mindspore.float32)
        >>> scatter_add = ops.ScatterAdd()
        >>> output = scatter_add(input_x, indices, updates)
        >>> print(output)
        [[ 3.  3.  3.]
         [17. 17. 17.]]
        >>> # input_x is updated in place after the operation is completed, so it needs to be re-initialized.
        >>> input_x = Parameter(Tensor(np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]), mindspore.float32), name="x")
        >>> # for indices = [[0, 1], [0, 1]]
        >>> # step 1: [0, 1]
        >>> # input_x[0] = [0.0, 0.0, 0.0] + [1.0, 1.0, 1.0] = [1.0, 1.0, 1.0]
        >>> # input_x[1] = [0.0, 0.0, 0.0] + [3.0, 3.0, 3.0] = [3.0, 3.0, 3.0]
        >>> # step 2: [0, 1]
        >>> # input_x[0] = [1.0, 1.0, 1.0] + [7.0, 7.0, 7.0] = [8.0, 8.0, 8.0]
        >>> # input_x[1] = [3.0, 3.0, 3.0] + [9.0, 9.0, 9.0] = [12.0, 12.0, 12.0]
        >>> indices = Tensor(np.array([[0, 1], [0, 1]]), mindspore.int32)
        >>> updates = Tensor(np.array([[[1.0, 1.0, 1.0], [3.0, 3.0, 3.0]],
        ...                            [[7.0, 7.0, 7.0], [9.0, 9.0, 9.0]]]), mindspore.float32)
        >>> scatter_add = ops.ScatterAdd()
        >>> output = scatter_add(input_x, indices, updates)
        >>> print(output)
        [[ 8.  8.  8.]
         [12. 12. 12.]]
    """

    @prim_attr_register
    def __init__(self, use_locking=False):
        """Initialize ScatterAdd"""
        validator.check_value_type('use_locking', use_locking, [bool], self.name)
        self.init_prim_io_names(inputs=['x', 'indices', 'updates'], outputs=['y'])
        self.add_prim_attr('side_effect_mem', True)
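

# The accumulation walk-throughs above can be checked against NumPy's
# unbuffered `np.add.at` (a reference sketch only, not the MindSpore
# implementation; the helper name `_scatter_add_ref` is hypothetical).
# Unlike plain fancy-indexed `+=`, `np.add.at` applies every update, so
# duplicate indices accumulate exactly as the step-by-step examples show.
import numpy as np


def _scatter_add_ref(x, indices, updates):
    out = np.array(x, dtype=float, copy=True)
    idx = np.asarray(indices).reshape(-1)
    upd = np.asarray(updates, dtype=float).reshape((-1,) + out.shape[1:])
    np.add.at(out, idx, upd)  # unbuffered in-place addition per index
    return out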


class ScatterSub(_ScatterOpDynamic):
    r"""
    Updates the value of the input tensor through the subtraction operation.

    Using given values to update tensor value through the subtraction operation, along with the input indices.
    This operation outputs the `input_x` after the update is done, which makes it convenient to use the
    updated value.

    for each `i, ..., j` in `indices.shape`:

    .. math::

        \text{input_x}[\text{indices}[i, ..., j], :] \mathrel{-}= \text{updates}[i, ..., j, :]

    Inputs of `input_x` and `updates` comply with the implicit type conversion rules to make the data types
    consistent. If they have different data types, the lower priority data type will be converted to the
    relatively highest priority data type.

    Args:
        use_locking (bool): Whether to protect the assignment by a lock. Default: False.

    Inputs:
        - **input_x** (Parameter) - The target tensor, with data type of Parameter.
          The shape is :math:`(N, *)` where :math:`*` means any number of additional dimensions.
        - **indices** (Tensor) - The index to do subtraction operation whose data type must be mindspore.int32.
        - **updates** (Tensor) - The tensor that performs the subtraction with `input_x`,
          the data type is the same as `input_x`, the shape is `indices_shape + x_shape[1:]`.

    Outputs:
        Tensor, the updated `input_x`, has the same shape and type as `input_x`.

    Raises:
        TypeError: If `use_locking` is not a bool.
        TypeError: If `indices` is not an int32.
        ValueError: If the shape of `updates` is not equal to `indices_shape + x_shape[1:]`.
        RuntimeError: If a data type conversion between `input_x` and `updates` is required
            but data type conversion of Parameter is not supported.

    Supported Platforms:
        ``Ascend`` ``CPU`` ``GPU``

    Examples:
        >>> input_x = Parameter(Tensor(np.array([[0.0, 0.0, 0.0], [1.0, 1.0, 1.0]]), mindspore.float32), name="x")
        >>> indices = Tensor(np.array([[0, 1]]), mindspore.int32)
        >>> updates = Tensor(np.array([[[1.0, 1.0, 1.0], [2.0, 2.0, 2.0]]]), mindspore.float32)
        >>> scatter_sub = ops.ScatterSub()
        >>> output = scatter_sub(input_x, indices, updates)
        >>> print(output)
        [[-1. -1. -1.]
         [-1. -1. -1.]]
        >>> # input_x is updated in place after the operation is completed, so it needs to be re-initialized.
        >>> input_x = Parameter(Tensor(np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]), mindspore.float32), name="x")
        >>> # for indices = [[0, 1], [1, 1]]
        >>> # step 1: [0, 1]
        >>> # input_x[0] = [0.0, 0.0, 0.0] - [1.0, 1.0, 1.0] = [-1.0, -1.0, -1.0]
        >>> # input_x[1] = [0.0, 0.0, 0.0] - [3.0, 3.0, 3.0] = [-3.0, -3.0, -3.0]
        >>> # step 2: [1, 1]
        >>> # input_x[1] = [-3.0, -3.0, -3.0] - [7.0, 7.0, 7.0] = [-10.0, -10.0, -10.0]
        >>> # input_x[1] = [-10.0, -10.0, -10.0] - [9.0, 9.0, 9.0] = [-19.0, -19.0, -19.0]
        >>> indices = Tensor(np.array([[0, 1], [1, 1]]), mindspore.int32)
        >>> updates = Tensor(np.array([[[1.0, 1.0, 1.0], [3.0, 3.0, 3.0]],
        ...                            [[7.0, 7.0, 7.0], [9.0, 9.0, 9.0]]]), mindspore.float32)
        >>> scatter_sub = ops.ScatterSub()
        >>> output = scatter_sub(input_x, indices, updates)
        >>> print(output)
        [[ -1.  -1.  -1.]
         [-19. -19. -19.]]
        >>> # input_x is updated in place after the operation is completed, so it needs to be re-initialized.
        >>> input_x = Parameter(Tensor(np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]), mindspore.float32), name="x")
        >>> # for indices = [[1, 0], [1, 1]]
        >>> # step 1: [1, 0]
        >>> # input_x[0] = [0.0, 0.0, 0.0] - [3.0, 3.0, 3.0] = [-3.0, -3.0, -3.0]
        >>> # input_x[1] = [0.0, 0.0, 0.0] - [1.0, 1.0, 1.0] = [-1.0, -1.0, -1.0]
        >>> # step 2: [1, 1]
        >>> # input_x[1] = [-1.0, -1.0, -1.0] - [7.0, 7.0, 7.0] = [-8.0, -8.0, -8.0]
        >>> # input_x[1] = [-8.0, -8.0, -8.0] - [9.0, 9.0, 9.0] = [-17.0, -17.0, -17.0]
        >>> indices = Tensor(np.array([[1, 0], [1, 1]]), mindspore.int32)
        >>> updates = Tensor(np.array([[[1.0, 1.0, 1.0], [3.0, 3.0, 3.0]],
        ...                            [[7.0, 7.0, 7.0], [9.0, 9.0, 9.0]]]), mindspore.float32)
        >>> scatter_sub = ops.ScatterSub()
        >>> output = scatter_sub(input_x, indices, updates)
        >>> print(output)
        [[ -3.  -3.  -3.]
         [-17. -17. -17.]]
        >>> # input_x is updated in place after the operation is completed, so it needs to be re-initialized.
        >>> input_x = Parameter(Tensor(np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]), mindspore.float32), name="x")
        >>> # for indices = [[0, 1], [0, 1]]
        >>> # step 1: [0, 1]
        >>> # input_x[0] = [0.0, 0.0, 0.0] - [1.0, 1.0, 1.0] = [-1.0, -1.0, -1.0]
        >>> # input_x[1] = [0.0, 0.0, 0.0] - [3.0, 3.0, 3.0] = [-3.0, -3.0, -3.0]
        >>> # step 2: [0, 1]
        >>> # input_x[0] = [-1.0, -1.0, -1.0] - [7.0, 7.0, 7.0] = [-8.0, -8.0, -8.0]
        >>> # input_x[1] = [-3.0, -3.0, -3.0] - [9.0, 9.0, 9.0] = [-12.0, -12.0, -12.0]
        >>> indices = Tensor(np.array([[0, 1], [0, 1]]), mindspore.int32)
        >>> updates = Tensor(np.array([[[1.0, 1.0, 1.0], [3.0, 3.0, 3.0]],
        ...                            [[7.0, 7.0, 7.0], [9.0, 9.0, 9.0]]]), mindspore.float32)
        >>> scatter_sub = ops.ScatterSub()
        >>> output = scatter_sub(input_x, indices, updates)
        >>> print(output)
        [[ -8.  -8.  -8.]
         [-12. -12. -12.]]
    """

    @prim_attr_register
    def __init__(self, use_locking=False):
        """Initialize ScatterSub"""
        validator.check_value_type('use_locking', use_locking, [bool], self.name)
        self.init_prim_io_names(inputs=['x', 'indices', 'updates'], outputs=['y'])
        self.add_prim_attr('side_effect_mem', True)


class ScatterMul(_ScatterOp):
    r"""
    Updates the value of the input tensor through the multiply operation.

    Using given values to update tensor value through the mul operation, along with the input indices.
    This operation outputs the `input_x` after the update is done, which makes it convenient to use the
    updated value.

    for each `i, ..., j` in `indices.shape`:

    .. math::

        \text{input_x}[\text{indices}[i, ..., j], :] \mathrel{*}= \text{updates}[i, ..., j, :]

    Inputs of `input_x` and `updates` comply with the implicit type conversion rules to make the data types
    consistent. If they have different data types, the lower priority data type will be converted to the
    relatively highest priority data type.

    Args:
        use_locking (bool): Whether to protect the assignment by a lock. Default: False.

    Inputs:
        - **input_x** (Parameter) - The target tensor, with data type of Parameter.
          The shape is :math:`(N, *)` where :math:`*` means any number of additional dimensions.
        - **indices** (Tensor) - The index to do multiply operation whose data type must be mindspore.int32.
        - **updates** (Tensor) - The tensor that performs the multiply operation with `input_x`,
          the data type is the same as `input_x`, the shape is `indices_shape + x_shape[1:]`.

    Outputs:
        Tensor, the updated `input_x`, has the same shape and type as `input_x`.

    Raises:
        TypeError: If `use_locking` is not a bool.
        TypeError: If `indices` is not an int32.
        ValueError: If the shape of `updates` is not equal to `indices_shape + x_shape[1:]`.
        RuntimeError: If a data type conversion between `input_x` and `updates` is required
            but data type conversion of Parameter is not supported.

    Supported Platforms:
        ``Ascend`` ``CPU``

    Examples:
        >>> input_x = Parameter(Tensor(np.array([[1.0, 1.0, 1.0], [2.0, 2.0, 2.0]]), mindspore.float32), name="x")
        >>> indices = Tensor(np.array([0, 1]), mindspore.int32)
        >>> updates = Tensor(np.array([[2.0, 2.0, 2.0], [2.0, 2.0, 2.0]]), mindspore.float32)
        >>> scatter_mul = ops.ScatterMul()
        >>> output = scatter_mul(input_x, indices, updates)
        >>> print(output)
        [[2. 2. 2.]
         [4. 4. 4.]]
        >>> # input_x is updated in place after the operation is completed, so it needs to be re-initialized.
        >>> input_x = Parameter(Tensor(np.array([[1.0, 1.0, 1.0], [2.0, 2.0, 2.0]]), mindspore.float32), name="x")
        >>> # for indices = [[0, 1], [1, 1]]
        >>> # step 1: [0, 1]
        >>> # input_x[0] = [1.0, 1.0, 1.0] * [1.0, 1.0, 1.0] = [1.0, 1.0, 1.0]
        >>> # input_x[1] = [2.0, 2.0, 2.0] * [3.0, 3.0, 3.0] = [6.0, 6.0, 6.0]
        >>> # step 2: [1, 1]
        >>> # input_x[1] = [6.0, 6.0, 6.0] * [7.0, 7.0, 7.0] = [42.0, 42.0, 42.0]
        >>> # input_x[1] = [42.0, 42.0, 42.0] * [9.0, 9.0, 9.0] = [378.0, 378.0, 378.0]
        >>> indices = Tensor(np.array([[0, 1], [1, 1]]), mindspore.int32)
        >>> updates = Tensor(np.array([[[1.0, 1.0, 1.0], [3.0, 3.0, 3.0]],
        ...                            [[7.0, 7.0, 7.0], [9.0, 9.0, 9.0]]]), mindspore.float32)
        >>> scatter_mul = ops.ScatterMul()
        >>> output = scatter_mul(input_x, indices, updates)
        >>> print(output)
        [[  1.   1.   1.]
         [378. 378. 378.]]
        >>> # input_x is updated in place after the operation is completed, so it needs to be re-initialized.
        >>> input_x = Parameter(Tensor(np.array([[1.0, 1.0, 1.0], [2.0, 2.0, 2.0]]), mindspore.float32), name="x")
        >>> # for indices = [[1, 0], [1, 1]]
        >>> # step 1: [1, 0]
        >>> # input_x[0] = [1.0, 1.0, 1.0] * [3.0, 3.0, 3.0] = [3.0, 3.0, 3.0]
        >>> # input_x[1] = [2.0, 2.0, 2.0] * [1.0, 1.0, 1.0] = [2.0, 2.0, 2.0]
        >>> # step 2: [1, 1]
        >>> # input_x[1] = [2.0, 2.0, 2.0] * [7.0, 7.0, 7.0] = [14.0, 14.0, 14.0]
        >>> # input_x[1] = [14.0, 14.0, 14.0] * [9.0, 9.0, 9.0] = [126.0, 126.0, 126.0]
        >>> indices = Tensor(np.array([[1, 0], [1, 1]]), mindspore.int32)
        >>> updates = Tensor(np.array([[[1.0, 1.0, 1.0], [3.0, 3.0, 3.0]],
        ...                            [[7.0, 7.0, 7.0], [9.0, 9.0, 9.0]]]), mindspore.float32)
        >>> scatter_mul = ops.ScatterMul()
        >>> output = scatter_mul(input_x, indices, updates)
        >>> print(output)
        [[  3.   3.   3.]
         [126. 126. 126.]]
        >>> # input_x is updated in place after the operation is completed, so it needs to be re-initialized.
        >>> input_x = Parameter(Tensor(np.array([[1.0, 1.0, 1.0], [2.0, 2.0, 2.0]]), mindspore.float32), name="x")
        >>> # for indices = [[0, 1], [0, 1]]
        >>> # step 1: [0, 1]
        >>> # input_x[0] = [1.0, 1.0, 1.0] * [1.0, 1.0, 1.0] = [1.0, 1.0, 1.0]
        >>> # input_x[1] = [2.0, 2.0, 2.0] * [3.0, 3.0, 3.0] = [6.0, 6.0, 6.0]
        >>> # step 2: [0, 1]
        >>> # input_x[0] = [1.0, 1.0, 1.0] * [7.0, 7.0, 7.0] = [7.0, 7.0, 7.0]
        >>> # input_x[1] = [6.0, 6.0, 6.0] * [9.0, 9.0, 9.0] = [54.0, 54.0, 54.0]
        >>> indices = Tensor(np.array([[0, 1], [0, 1]]), mindspore.int32)
        >>> updates = Tensor(np.array([[[1.0, 1.0, 1.0], [3.0, 3.0, 3.0]],
        ...                            [[7.0, 7.0, 7.0], [9.0, 9.0, 9.0]]]), mindspore.float32)
        >>> scatter_mul = ops.ScatterMul()
        >>> output = scatter_mul(input_x, indices, updates)
        >>> print(output)
        [[ 7.  7.  7.]
         [54. 54. 54.]]
    """


class ScatterDiv(_ScatterOp):
    r"""
    Updates the value of the input tensor through the divide operation.

    Using given values to update tensor value through the div operation, along with the input indices.
    This operation outputs the `input_x` after the update is done, which makes it convenient to use the
    updated value.

    for each `i, ..., j` in `indices.shape`:

    .. math::

        \text{input_x}[\text{indices}[i, ..., j], :] \mathrel{/}= \text{updates}[i, ..., j, :]

    Inputs of `input_x` and `updates` comply with the implicit type conversion rules to make the data types
    consistent. If they have different data types, the lower priority data type will be converted to the
    relatively highest priority data type.

    Args:
        use_locking (bool): Whether to protect the assignment by a lock. Default: False.

    Inputs:
        - **input_x** (Parameter) - The target tensor, with data type of Parameter.
          The shape is :math:`(N, *)` where :math:`*` means any number of additional dimensions.
        - **indices** (Tensor) - The index to do divide operation whose data type must be mindspore.int32.
        - **updates** (Tensor) - The tensor that performs the divide operation with `input_x`,
          the data type is the same as `input_x`, the shape is `indices_shape + x_shape[1:]`.

    Outputs:
        Tensor, the updated `input_x`, has the same shape and type as `input_x`.

    Raises:
        TypeError: If `use_locking` is not a bool.
        TypeError: If `indices` is not an int32.
        ValueError: If the shape of `updates` is not equal to `indices_shape + x_shape[1:]`.
        RuntimeError: If a data type conversion between `input_x` and `updates` is required
            but data type conversion of Parameter is not supported.

    Supported Platforms:
        ``Ascend`` ``CPU``

    Examples:
        >>> input_x = Parameter(Tensor(np.array([[6.0, 6.0, 6.0], [2.0, 2.0, 2.0]]), mindspore.float32), name="x")
        >>> indices = Tensor(np.array([0, 1]), mindspore.int32)
        >>> updates = Tensor(np.array([[2.0, 2.0, 2.0], [2.0, 2.0, 2.0]]), mindspore.float32)
        >>> scatter_div = ops.ScatterDiv()
        >>> output = scatter_div(input_x, indices, updates)
        >>> print(output)
        [[3. 3. 3.]
         [1. 1. 1.]]
        >>> # input_x is updated in place after the operation is completed, so it needs to be re-initialized.
        >>> input_x = Parameter(Tensor(np.array([[105.0, 105.0, 105.0],
        ...                                      [315.0, 315.0, 315.0]]), mindspore.float32), name="x")
        >>> # for indices = [[0, 1], [1, 1]]
        >>> # step 1: [0, 1]
        >>> # input_x[0] = [105.0, 105.0, 105.0] / [1.0, 1.0, 1.0] = [105.0, 105.0, 105.0]
        >>> # input_x[1] = [315.0, 315.0, 315.0] / [3.0, 3.0, 3.0] = [105.0, 105.0, 105.0]
        >>> # step 2: [1, 1]
        >>> # input_x[1] = [105.0, 105.0, 105.0] / [5.0, 5.0, 5.0] = [21.0, 21.0, 21.0]
        >>> # input_x[1] = [21.0, 21.0, 21.0] / [7.0, 7.0, 7.0] = [3.0, 3.0, 3.0]
        >>> indices = Tensor(np.array([[0, 1], [1, 1]]), mindspore.int32)
        >>> updates = Tensor(np.array([[[1.0, 1.0, 1.0], [3.0, 3.0, 3.0]],
        ...                            [[5.0, 5.0, 5.0], [7.0, 7.0, 7.0]]]), mindspore.float32)
        >>> scatter_div = ops.ScatterDiv()
        >>> output = scatter_div(input_x, indices, updates)
        >>> print(output)
        [[105. 105. 105.]
         [  3.   3.   3.]]
        >>> # input_x is updated in place after the operation is completed, so it needs to be re-initialized.
        >>> input_x = Parameter(Tensor(np.array([[105.0, 105.0, 105.0],
        ...                                      [315.0, 315.0, 315.0]]), mindspore.float32), name="x")
        >>> # for indices = [[1, 0], [1, 1]]
        >>> # step 1: [1, 0]
        >>> # input_x[0] = [105.0, 105.0, 105.0] / [3.0, 3.0, 3.0] = [35.0, 35.0, 35.0]
        >>> # input_x[1] = [315.0, 315.0, 315.0] / [1.0, 1.0, 1.0] = [315.0, 315.0, 315.0]
        >>> # step 2: [1, 1]
        >>> # input_x[1] = [315.0, 315.0, 315.0] / [5.0, 5.0, 5.0] = [63.0, 63.0, 63.0]
        >>> # input_x[1] = [63.0, 63.0, 63.0] / [7.0, 7.0, 7.0] = [9.0, 9.0, 9.0]
        >>> indices = Tensor(np.array([[1, 0], [1, 1]]), mindspore.int32)
        >>> updates = Tensor(np.array([[[1.0, 1.0, 1.0], [3.0, 3.0, 3.0]],
        ...                            [[5.0, 5.0, 5.0], [7.0, 7.0, 7.0]]]), mindspore.float32)
        >>> scatter_div = ops.ScatterDiv()
        >>> output = scatter_div(input_x, indices, updates)
        >>> print(output)
        [[35. 35. 35.]
         [ 9.  9.  9.]]
        >>> # input_x is updated in place after the operation is completed, so it needs to be re-initialized.
        >>> input_x = Parameter(Tensor(np.array([[105.0, 105.0, 105.0],
        ...                                      [315.0, 315.0, 315.0]]), mindspore.float32), name="x")
        >>> # for indices = [[0, 1], [0, 1]]
        >>> # step 1: [0, 1]
        >>> # input_x[0] = [105.0, 105.0, 105.0] / [1.0, 1.0, 1.0] = [105.0, 105.0, 105.0]
        >>> # input_x[1] = [315.0, 315.0, 315.0] / [3.0, 3.0, 3.0] = [105.0, 105.0, 105.0]
        >>> # step 2: [0, 1]
        >>> # input_x[0] = [105.0, 105.0, 105.0] / [5.0, 5.0, 5.0] = [21.0, 21.0, 21.0]
        >>> # input_x[1] = [105.0, 105.0, 105.0] / [7.0, 7.0, 7.0] = [15.0, 15.0, 15.0]
        >>> indices = Tensor(np.array([[0, 1], [0, 1]]), mindspore.int32)
        >>> updates = Tensor(np.array([[[1.0, 1.0, 1.0], [3.0, 3.0, 3.0]],
        ...                            [[5.0, 5.0, 5.0], [7.0, 7.0, 7.0]]]), mindspore.float32)
        >>> scatter_div = ops.ScatterDiv()
        >>> output = scatter_div(input_x, indices, updates)
        >>> print(output)
        [[21. 21. 21.]
         [15. 15. 15.]]
    """
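

# The ScatterDiv walk-throughs above follow the same unbuffered pattern;
# NumPy's `np.divide.at` reproduces them (a reference sketch only, not the
# MindSpore implementation; the helper name `_scatter_div_ref` is
# hypothetical). Duplicate indices divide repeatedly, one update row at a
# time.
import numpy as np


def _scatter_div_ref(x, indices, updates):
    out = np.array(x, dtype=float, copy=True)
    idx = np.asarray(indices).reshape(-1)
    upd = np.asarray(updates, dtype=float).reshape((-1,) + out.shape[1:])
    np.divide.at(out, idx, upd)  # unbuffered in-place division per index
    return out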


class ScatterNdAdd(_ScatterNdOp):
    r"""
    Applies sparse addition to individual values or slices in a tensor.

    Using given values to update tensor value through the add operation, along with the input indices.
    This operation outputs the `input_x` after the update is done, which makes it convenient to use the
    updated value.

    `input_x` has rank P and `indices` has rank Q where `Q >= 2`.

    `indices` has shape :math:`(i_0, i_1, ..., i_{Q-2}, N)` where `N <= P`.

    The last dimension of `indices` (with length `N`) indicates slices along the `N` th dimension of `input_x`.

    `updates` is a tensor of rank `Q-1+P-N`. Its shape is:
    :math:`(i_0, i_1, ..., i_{Q-2}, x\_shape_N, ..., x\_shape_{P-1})`.

    Inputs of `input_x` and `updates` comply with the implicit type conversion rules to make the data types
    consistent. If they have different data types, the lower priority data type will be converted to the
    relatively highest priority data type.

    Args:
        use_locking (bool): Whether to protect the assignment with a lock. Default: False.

    Inputs:
        - **input_x** (Parameter) - The target tensor, with data type of Parameter.
          The shape is :math:`(N,*)` where :math:`*` means any number of additional dimensions.
        - **indices** (Tensor) - The index of the add operation whose data type must be mindspore.int32.
          The rank of `indices` must be at least 2 and `indices_shape[-1] <= len(shape)`.
        - **updates** (Tensor) - The tensor doing the add operation with `input_x`.
          The data type is the same as `input_x`, and the shape is
          `indices_shape[:-1] + x_shape[indices_shape[-1]:]`.

    Outputs:
        Tensor, the updated `input_x`, has the same shape and type as `input_x`.

    Raises:
        TypeError: If `use_locking` is not a bool.
        TypeError: If `indices` is not an int32.
        ValueError: If the shape of `updates` is not equal to `indices_shape[:-1] + x_shape[indices_shape[-1]:]`.
        RuntimeError: If a data type conversion between `input_x` and `updates` is required
            but data type conversion of Parameter is not supported.

    Supported Platforms:
        ``Ascend`` ``GPU``

    Examples:
        >>> input_x = Parameter(Tensor(np.array([1, 2, 3, 4, 5, 6, 7, 8]), mindspore.float32), name="x")
        >>> indices = Tensor(np.array([[2], [4], [1], [7]]), mindspore.int32)
        >>> updates = Tensor(np.array([6, 7, 8, 9]), mindspore.float32)
        >>> scatter_nd_add = ops.ScatterNdAdd()
        >>> output = scatter_nd_add(input_x, indices, updates)
        >>> print(output)
        [ 1. 10.  9.  4. 12.  6.  7. 17.]
        >>> input_x = Parameter(Tensor(np.zeros((4, 4, 4)), mindspore.int32))
        >>> indices = Tensor(np.array([[0], [2]]), mindspore.int32)
        >>> updates = Tensor(np.array([[[1, 1, 1, 1], [2, 2, 2, 2], [3, 3, 3, 3], [4, 4, 4, 4]],
        ...                            [[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]]]), mindspore.int32)
        >>> scatter_nd_add = ops.ScatterNdAdd()
        >>> output = scatter_nd_add(input_x, indices, updates)
        >>> print(output)
        [[[1 1 1 1]
          [2 2 2 2]
          [3 3 3 3]
          [4 4 4 4]]
         [[0 0 0 0]
          [0 0 0 0]
          [0 0 0 0]
          [0 0 0 0]]
         [[5 5 5 5]
          [6 6 6 6]
          [7 7 7 7]
          [8 8 8 8]]
         [[0 0 0 0]
          [0 0 0 0]
          [0 0 0 0]
          [0 0 0 0]]]
    """
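# The documented shape rule, `updates.shape == indices_shape[:-1] +
# x_shape[indices_shape[-1]:]`, can be checked with NumPy's `np.add.at`,
# whose unbuffered accumulation matches ScatterNdAdd's handling of repeated
# indices. An illustrative sketch, not MindSpore code:

```python
import numpy as np

# NumPy sketch of ScatterNdAdd for the first example above: each row of
# `indices` names a slice of x, and the matching row of `updates` is added.
x = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=np.float32)
indices = np.array([[2], [4], [1], [7]])
updates = np.array([6, 7, 8, 9], dtype=np.float32)
# documented shape constraint
assert updates.shape == indices.shape[:-1] + x.shape[indices.shape[-1]:]
np.add.at(x, tuple(indices.T), updates)  # unbuffered scatter-add
```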


class ScatterNdSub(_ScatterNdOp):
    r"""
    Applies sparse subtraction to individual values or slices in a tensor.

    Using given values to update tensor value through the subtraction operation, along with the input indices.
    This operation outputs the `input_x` after the update is done, which makes it convenient to use the
    updated value.

    `input_x` has rank P and `indices` has rank Q where `Q >= 2`.

    `indices` has shape :math:`(i_0, i_1, ..., i_{Q-2}, N)` where `N <= P`.

    The last dimension of `indices` (with length `N`) indicates slices along the `N` th dimension of `input_x`.

    `updates` is a tensor of rank `Q-1+P-N`. Its shape is:
    :math:`(i_0, i_1, ..., i_{Q-2}, x\_shape_N, ..., x\_shape_{P-1})`.

    Inputs of `input_x` and `updates` comply with the implicit type conversion rules to make the data types
    consistent. If they have different data types, the lower priority data type will be converted to the
    relatively highest priority data type.

    Args:
        use_locking (bool): Whether to protect the assignment with a lock. Default: False.

    Inputs:
        - **input_x** (Parameter) - The target tensor, with data type of Parameter.
          The shape is :math:`(N,*)` where :math:`*` means any number of additional dimensions.
        - **indices** (Tensor) - The index of the input tensor, with int32 data type.
          The rank of `indices` must be at least 2 and `indices_shape[-1] <= len(shape)`.
        - **updates** (Tensor) - The tensor to be updated to the input tensor, has the same type as the input.
          The shape is `indices_shape[:-1] + x_shape[indices_shape[-1]:]`.

    Outputs:
        Tensor, has the same shape and type as `input_x`.

    Raises:
        TypeError: If `use_locking` is not a bool.
        TypeError: If `indices` is not an int32.
        ValueError: If the shape of `updates` is not equal to `indices_shape[:-1] + x_shape[indices_shape[-1]:]`.
        RuntimeError: If a data type conversion between `input_x` and `updates` is required
            but data type conversion of Parameter is not supported.

    Supported Platforms:
        ``Ascend`` ``GPU``

    Examples:
        >>> input_x = Parameter(Tensor(np.array([1, 2, 3, 4, 5, 6, 7, 8]), mindspore.float32), name="x")
        >>> indices = Tensor(np.array([[2], [4], [1], [7]]), mindspore.int32)
        >>> updates = Tensor(np.array([6, 7, 8, 9]), mindspore.float32)
        >>> scatter_nd_sub = ops.ScatterNdSub()
        >>> output = scatter_nd_sub(input_x, indices, updates)
        >>> print(output)
        [ 1. -6. -3.  4. -2.  6.  7. -1.]
        >>> input_x = Parameter(Tensor(np.zeros((4, 4, 4)), mindspore.int32))
        >>> indices = Tensor(np.array([[0], [2]]), mindspore.int32)
        >>> updates = Tensor(np.array([[[1, 1, 1, 1], [2, 2, 2, 2], [3, 3, 3, 3], [4, 4, 4, 4]],
        ...                            [[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]]]), mindspore.int32)
        >>> scatter_nd_sub = ops.ScatterNdSub()
        >>> output = scatter_nd_sub(input_x, indices, updates)
        >>> print(output)
        [[[-1 -1 -1 -1]
          [-2 -2 -2 -2]
          [-3 -3 -3 -3]
          [-4 -4 -4 -4]]
         [[ 0  0  0  0]
          [ 0  0  0  0]
          [ 0  0  0  0]
          [ 0  0  0  0]]
         [[-5 -5 -5 -5]
          [-6 -6 -6 -6]
          [-7 -7 -7 -7]
          [-8 -8 -8 -8]]
         [[ 0  0  0  0]
          [ 0  0  0  0]
          [ 0  0  0  0]
          [ 0  0  0  0]]]
    """
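# The slice-wise behavior of the second example above (indices of shape
# (2, 1) selecting whole (4, 4) slices along axis 0) can be mirrored with
# NumPy's `np.subtract.at`. An illustrative sketch, not MindSpore code:

```python
import numpy as np

# NumPy sketch of ScatterNdSub: each row of `indices` selects a slice of x
# along axis 0, and the matching slice of `updates` is subtracted, unbuffered.
x = np.zeros((4, 4, 4), dtype=np.int32)
indices = np.array([[0], [2]])
# rows [1,1,1,1]..[8,8,8,8], reshaped into the two update slices
updates = np.repeat(np.arange(1, 9, dtype=np.int32), 4).reshape(2, 4, 4)
np.subtract.at(x, tuple(indices.T), updates)  # x[0] -= rows 1..4, x[2] -= rows 5..8
```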


class ScatterNonAliasingAdd(_ScatterNdOp):
    """
    Applies sparse addition to the input using individual values or slices.

    Using given values to update tensor value through the add operation, along with the input indices.
    This operation outputs the `input_x` after the update is done, which makes it convenient to use the
    updated value.

    Inputs of `input_x` and `updates` comply with the implicit type conversion rules to make the data types
    consistent. If they have different data types, the lower priority data type will be converted to the
    relatively highest priority data type.

    Inputs:
        - **input_x** (Parameter) - The target parameter. The data type must be float16, float32 or int32.
        - **indices** (Tensor) - The index to perform the addition operation whose data type must be
          mindspore.int32.
        - **updates** (Tensor) - The tensor that performs the addition operation with `input_x`.
          The data type is the same as `input_x`, and the shape is
          `indices_shape[:-1] + x_shape[indices_shape[-1]:]`.

    Outputs:
        Parameter, the updated `input_x`.

    Raises:
        TypeError: If dtype of `indices` is not int32.
        TypeError: If dtype of `input_x` is not one of float16, float32, int32.
        ValueError: If the shape of `updates` is not equal to `indices_shape[:-1] + x_shape[indices_shape[-1]:]`.
        RuntimeError: If a data type conversion between `input_x` and `updates` is required
            but data type conversion of Parameter is not supported.

    Supported Platforms:
        ``Ascend``

    Examples:
        >>> input_x = Parameter(Tensor(np.array([1, 2, 3, 4, 5, 6, 7, 8]), mindspore.float32), name="x")
        >>> indices = Tensor(np.array([[2], [4], [1], [7]]), mindspore.int32)
        >>> updates = Tensor(np.array([6, 7, 8, 9]), mindspore.float32)
        >>> scatter_non_aliasing_add = ops.ScatterNonAliasingAdd()
        >>> output = scatter_non_aliasing_add(input_x, indices, updates)
        >>> print(output)
        [ 1. 10.  9.  4. 12.  6.  7. 17.]
    """

    @prim_attr_register
    def __init__(self):
        """Initialize ScatterNonAliasingAdd"""
        self.init_prim_io_names(inputs=['x', 'indices', 'updates'], outputs=['y'])
        self.add_prim_attr('side_effect_mem', True)

    def infer_dtype(self, x_dtype, indices_dtype, updates_dtype):
        validator.check_tensor_dtype_valid('indices', indices_dtype, [mstype.int32], self.name)
        args = {"x": x_dtype, "updates": updates_dtype}
        validator.check_tensors_dtypes_same_and_valid(args, [mstype.float16, mstype.float32, mstype.int32],
                                                      self.name)
        return x_dtype


class SpaceToDepth(PrimitiveWithInfer):
    r"""
    Rearrange blocks of spatial data into depth.

    The output tensor's `height` dimension is :math:`height / block\_size`.

    The output tensor's `width` dimension is :math:`width / block\_size`.

    The depth of the output tensor is :math:`block\_size * block\_size * input\_depth`.

    The input tensor's height and width must be divisible by `block_size`.
    The data format is "NCHW".

    Args:
        block_size (int): The block size used to divide spatial data. It must be >= 2.

    Inputs:
        - **x** (Tensor) - The target tensor. The data type is Number. It must be a 4-D tensor.

    Outputs:
        Tensor, the same data type as `x`. It must be a 4-D tensor. Tensor of shape
        :math:`(N, (C_{in} * \text{block_size} ^ 2), H_{in} / \text{block_size}, W_{in} / \text{block_size})`.

    Raises:
        TypeError: If `block_size` is not an int.
        ValueError: If `block_size` is less than 2.
        ValueError: If length of shape of `x` is not equal to 4.

    Supported Platforms:
        ``Ascend`` ``GPU`` ``CPU``

    Examples:
        >>> x = Tensor(np.random.rand(1, 3, 2, 2), mindspore.float32)
        >>> block_size = 2
        >>> space_to_depth = ops.SpaceToDepth(block_size)
        >>> output = space_to_depth(x)
        >>> print(output.shape)
        (1, 12, 1, 1)
    """

    @prim_attr_register
    def __init__(self, block_size):
        """Initialize SpaceToDepth"""
        self.init_prim_io_names(inputs=['x'], outputs=['y'])
        validator.check_value_type('block_size', block_size, [int], self.name)
        validator.check('block_size', block_size, '', 2, Rel.GE, self.name)
        self.block_size = block_size
        self.add_prim_attr("data_format", "NCHW")

    def infer_shape(self, x_shape):
        validator.check('x dimension', len(x_shape), '', 4, Rel.EQ)
        out_shape = copy.deepcopy(x_shape)
        for i in range(2):
            if out_shape[i + 2] % self.block_size != 0:
                msg_prefix = "2nd" if i + 2 == 2 else "3rd"
                raise ValueError(f"For '{self.name}', the {msg_prefix} dimension of the input must be "
                                 f"exactly divisible by 'block_size', but got the {msg_prefix} dimension "
                                 f"of the input: {out_shape[i + 2]} and "
                                 f"'block_size': {self.block_size}.")
            out_shape[i + 2] //= self.block_size
        out_shape[1] *= self.block_size * self.block_size
        return out_shape

    def infer_dtype(self, x_dtype):
        validator.check_subclass("x_dtype", x_dtype, mstype.tensor, self.name)
        return x_dtype
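# The depth formula above (output channels = block_size^2 * input_depth) can
# be sketched with a reshape/transpose in NumPy. The exact channel ordering
# chosen here is an assumption and may differ from the kernel's; only the
# output shape is guaranteed to match the documented formula:

```python
import numpy as np

def space_to_depth_sketch(x, block_size):
    """Shape-level NumPy sketch of SpaceToDepth (NCHW)."""
    n, c, h, w = x.shape
    assert h % block_size == 0 and w % block_size == 0
    x = x.reshape(n, c, h // block_size, block_size, w // block_size, block_size)
    # move the two intra-block axes in front of the channel axis (assumed order)
    x = x.transpose(0, 3, 5, 1, 2, 4)
    return x.reshape(n, c * block_size * block_size, h // block_size, w // block_size)

out = space_to_depth_sketch(np.random.rand(1, 3, 2, 2), 2)  # shape (1, 12, 1, 1)
```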


class DepthToSpace(PrimitiveWithInfer):
    r"""
    Rearrange blocks of depth data into spatial dimensions.

    This is the reverse operation of SpaceToDepth.

    The depth of the output tensor is :math:`input\_depth / (block\_size * block\_size)`.

    The output tensor's `height` dimension is :math:`height * block\_size`.

    The output tensor's `width` dimension is :math:`width * block\_size`.

    The input tensor's depth must be divisible by `block_size * block_size`.
    The data format is "NCHW".

    Args:
        block_size (int): The block size used to divide depth data. It must be >= 2.

    Inputs:
        - **x** (Tensor) - The target tensor. It must be a 4-D tensor with shape :math:`(N, C_{in}, H_{in}, W_{in})`.
          The data type is Number.

    Outputs:
        Tensor of shape :math:`(N, C_{in} / \text{block_size} ^ 2, H_{in} * \text{block_size},
        W_{in} * \text{block_size})`.

    Raises:
        TypeError: If `block_size` is not an int.
        ValueError: If `block_size` is less than 2.
        ValueError: If length of shape of `x` is not equal to 4.

    Supported Platforms:
        ``Ascend`` ``GPU`` ``CPU``

    Examples:
        >>> x = Tensor(np.random.rand(1, 12, 1, 1), mindspore.float32)
        >>> block_size = 2
        >>> depth_to_space = ops.DepthToSpace(block_size)
        >>> output = depth_to_space(x)
        >>> print(output.shape)
        (1, 3, 2, 2)
    """

    @prim_attr_register
    def __init__(self, block_size):
        """Initialize DepthToSpace"""
        self.init_prim_io_names(inputs=['x'], outputs=['y'])
        validator.check_value_type('block_size', block_size, [int], self.name)
        validator.check('block_size', block_size, '', 2, Rel.GE, self.name)
        self.block_size = block_size
        self.add_prim_attr("data_format", "NCHW")

    def infer_shape(self, x_shape):
        validator.check('x dimension', len(x_shape), '', 4, Rel.EQ)
        out_shape = copy.deepcopy(x_shape)
        for i in range(2):
            out_shape[i + 2] *= self.block_size
        validator.check_int(x_shape[1] % (self.block_size * self.block_size),
                            0, Rel.EQ, 'x_shape[1] % (block_size*block_size)', self.name)
        out_shape[1] //= self.block_size * self.block_size
        return out_shape

    def infer_dtype(self, x_dtype):
        validator.check_subclass("x_dtype", x_dtype, mstype.tensor, self.name)
        return x_dtype
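# The inverse rearrangement can likewise be sketched in NumPy. As with the
# SpaceToDepth sketch, the channel ordering is an assumed convention; the
# output shape matches the documented formula:

```python
import numpy as np

def depth_to_space_sketch(x, block_size):
    """Shape-level NumPy sketch of DepthToSpace (NCHW)."""
    n, c, h, w = x.shape
    b = block_size
    assert c % (b * b) == 0
    x = x.reshape(n, b, b, c // (b * b), h, w)
    # interleave the two block axes back into height and width (assumed order)
    x = x.transpose(0, 3, 4, 1, 5, 2)
    return x.reshape(n, c // (b * b), h * b, w * b)

out = depth_to_space_sketch(np.random.rand(1, 12, 1, 1), 2)  # shape (1, 3, 2, 2)
```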


class SpaceToBatch(PrimitiveWithInfer):
    r"""
    Divides spatial dimensions into blocks and combines the block size with the original batch.

    This operation will divide spatial dimensions (H, W) into blocks with `block_size`. The output tensor's H and W
    dimensions are the corresponding number of blocks after division. The output tensor's batch dimension is the
    product of the original batch and the square of `block_size`. Before division, the spatial dimensions
    of the input are zero padded according to `paddings` if necessary.

    Args:
        block_size (int): The block size of dividing blocks, with value greater than or equal to 2.
        paddings (Union[tuple, list]): The padding values for H and W dimension, containing 2 lists.
            Each list contains 2 integer values. All values must be greater than or equal to 0.
            `paddings[i]` specifies the paddings for the spatial dimension i, which corresponds to the
            input dimension i + 2. It is required that input_shape[i + 2] + paddings[i][0] + paddings[i][1]
            is divisible by `block_size`.

    Inputs:
        - **input_x** (Tensor) - The input tensor. It must be a 4-D tensor. The data type is Number.

    Outputs:
        Tensor, the output tensor with the same data type as input. Assume input shape is :math:`(n, c, h, w)` with
        :math:`block\_size` and :math:`paddings`. The shape of the output tensor will be :math:`(n', c', h', w')`,
        where

        :math:`n' = n*(block\_size*block\_size)`

        :math:`c' = c`

        :math:`h' = (h+paddings[0][0]+paddings[0][1])//block\_size`

        :math:`w' = (w+paddings[1][0]+paddings[1][1])//block\_size`

    Raises:
        TypeError: If `block_size` is not an int.
        ValueError: If `block_size` is less than 2.

    Supported Platforms:
        ``Ascend`` ``GPU``

    Examples:
        >>> block_size = 2
        >>> paddings = [[0, 0], [0, 0]]
        >>> space_to_batch = ops.SpaceToBatch(block_size, paddings)
        >>> input_x = Tensor(np.array([[[[1, 2], [3, 4]]]]), mindspore.float32)
        >>> output = space_to_batch(input_x)
        >>> print(output)
        [[[[1.]]]
         [[[2.]]]
         [[[3.]]]
         [[[4.]]]]
    """

    @prim_attr_register
    def __init__(self, block_size, paddings):
        """Initialize SpaceToBatch"""
        logger.warning("WARN_DEPRECATED: The usage of SpaceToBatch is deprecated."
                       " Please use SpaceToBatchND.")
        validator.check_value_type('block_size', block_size, [int], self.name)
        validator.check('block_size', block_size, '', 2, Rel.GE, self.name)
        self.block_size = block_size
        validator.check('paddings shape', np.array(paddings).shape, '', (2, 2), Rel.EQ, self.name)
        for elem in itertools.chain(*paddings):
            validator.check_non_negative_int(elem, 'paddings element', self.name)
            validator.check_value_type('paddings element', elem, [int], self.name)
        self.paddings = paddings

    def infer_dtype(self, x_dtype):
        validator.check_tensor_dtype_valid('input_x', x_dtype, mstype.number_type, self.name)
        return x_dtype

    def infer_shape(self, x_shape):
        validator.check_equal_int(len(x_shape), 4, 'rank of input_x', self.name)
        out_shape = copy.deepcopy(x_shape)
        for i in range(2):
            padded = out_shape[i + 2] + self.paddings[i][0] + self.paddings[i][1]
            if padded % self.block_size != 0:
                msg_ndim = "2nd" if i + 2 == 2 else "3rd"
                raise ValueError(f"For '{self.name}', the padded {msg_ndim} dimension of the input must be "
                                 f"divisible by 'block_size', but got the padded {msg_ndim} dimension: "
                                 f"{padded} and 'block_size': {self.block_size}. Please check the official "
                                 f"api documents for more information about the output tensor.")
            out_shape[i + 2] = padded // self.block_size
        out_shape[0] *= self.block_size * self.block_size
        return out_shape


class BatchToSpace(PrimitiveWithInfer):
    r"""
    Divides batch dimension with blocks and interleaves these blocks back into spatial dimensions.

    This operation will divide the batch dimension N into blocks with `block_size`. The output tensor's N dimension
    is the corresponding number of blocks after division. The output tensor's H, W dimensions are the product of the
    original H, W dimensions and `block_size`, minus the given amount to crop from each dimension, respectively.

    Args:
        block_size (int): The block size of division, with value not less than 2.
        crops (Union[list(int), tuple(int)]): The crop values for H and W dimension, containing 2 lists.
            Each list contains 2 integers.
            All values must be not less than 0. crops[i] specifies the crop values for the spatial dimension i, which
            corresponds to the input dimension i + 2. It is required that
            input_shape[i + 2] * block_size >= crops[i][0] + crops[i][1].

    Inputs:
        - **input_x** (Tensor) - The input tensor. It must be a 4-D tensor, dimension 0 must be divisible by
          `block_size` * `block_size`. The data type is float16 or float32.

    Outputs:
        Tensor, the output tensor with the same type as input. Assume input shape is (n, c, h, w) with block_size
        and crops. The output shape will be (n', c', h', w'), where

        :math:`n' = n//(block\_size*block\_size)`

        :math:`c' = c`

        :math:`h' = h*block\_size-crops[0][0]-crops[0][1]`

        :math:`w' = w*block\_size-crops[1][0]-crops[1][1]`

    Raises:
        TypeError: If `block_size` or element of `crops` is not an int.
        TypeError: If `crops` is neither list nor tuple.
        ValueError: If `block_size` is less than 2.

    Supported Platforms:
        ``Ascend`` ``GPU``

    Examples:
        >>> block_size = 2
        >>> crops = [[0, 0], [0, 0]]
        >>> batch_to_space = ops.BatchToSpace(block_size, crops)
        >>> input_x = Tensor(np.array([[[[1]]], [[[2]]], [[[3]]], [[[4]]]]), mindspore.float32)
        >>> output = batch_to_space(input_x)
        >>> print(output)
        [[[[1. 2.]
           [3. 4.]]]]
    """

    @prim_attr_register
    def __init__(self, block_size, crops):
        """Initialize BatchToSpace"""
        logger.warning("WARN_DEPRECATED: The usage of BatchToSpace is deprecated."
                       " Please use BatchToSpaceND.")
        validator.check_value_type('block_size', block_size, [int], self.name)
        validator.check('block_size', block_size, '', 2, Rel.GE, self.name)
        self.block_size = block_size
        validator.check_value_type('crops type', crops, [list, tuple], self.name)
        validator.check('crops shape', np.array(crops).shape, '', (2, 2))
        for elem in itertools.chain(*crops):
            validator.check_non_negative_int(elem, 'crops element', self.name)
            validator.check_value_type('crops element', elem, [int], self.name)
        self.crops = crops

    def infer_dtype(self, x_dtype):
        validator.check_tensor_dtype_valid('input_x', x_dtype, mstype.number_type, self.name)
        return x_dtype

    def infer_shape(self, x_shape):
        validator.check('rank of input_x', len(x_shape), '', 4)
        out_shape = copy.deepcopy(x_shape)
        for i in range(2):
            x_block_prod = out_shape[i + 2] * self.block_size
            crops_sum = self.crops[i][0] + self.crops[i][1]
            validator.check("x block shape prod", x_block_prod, 'crops sum', crops_sum, Rel.GT, self.name)
            out_shape[i + 2] = x_block_prod - crops_sum
        block_size_prod = self.block_size * self.block_size
        if out_shape[0] % block_size_prod != 0:
            raise ValueError(f"For '{self.name}', the 0th dimension of the input must be exactly divisible "
                             f"by block_size_prod, but got the shape of output: {out_shape} and "
                             f"block_size_prod: {block_size_prod}.")
        out_shape[0] = out_shape[0] // block_size_prod
        return out_shape


class SpaceToBatchND(PrimitiveWithInfer):
    r"""
    Divides spatial dimensions into blocks and combines the block shape with the original batch.

    This operation will divide spatial dimensions (H, W) into blocks with `block_shape`. The output tensor's H and W
    dimensions are the corresponding number of blocks after division. The output tensor's batch dimension is the
    product of the original batch and the product of `block_shape`. Before division,
    the spatial dimensions of the input are zero padded according to `paddings` if necessary.

    Args:
        block_shape (Union[list(int), tuple(int), int]): The block shape of dividing blocks, with all values greater
            than 1. If `block_shape` is a tuple or list, its length M corresponds to the number of spatial
            dimensions. If `block_shape` is an int, the block size of all M dimensions is the same,
            equal to `block_shape`. M must be 2.
        paddings (Union[tuple, list]): The padding values for H and W dimension, containing 2 lists.
            Each list contains 2 integer values. All values must be greater than or equal to 0.
            `paddings[i]` specifies the paddings for the spatial dimension i,
            which corresponds to the input dimension i + 2.
            It is required that input_shape[i + 2] + paddings[i][0] + paddings[i][1] is divisible by block_shape[i].

    Inputs:
        - **input_x** (Tensor) - The input tensor. It must be a 4-D tensor.

    Outputs:
        Tensor, the output tensor with the same data type as the input. Assume the input shape is
        :math:`(n, c, h, w)` with :math:`block\_shape` and :math:`paddings`. The shape of the output tensor
        will be :math:`(n', c', h', w')`, where

        :math:`n' = n*(block\_shape[0]*block\_shape[1])`

        :math:`c' = c`

        :math:`h' = (h+paddings[0][0]+paddings[0][1])//block\_shape[0]`

        :math:`w' = (w+paddings[1][0]+paddings[1][1])//block\_shape[1]`

    Raises:
        TypeError: If `block_shape` is not one of list, tuple, int.
        TypeError: If `paddings` is neither list nor tuple.
        ValueError: If length of shape of `block_shape` is not equal to 1.
        ValueError: If length of `block_shape` or `paddings` is not equal to 2.

    Supported Platforms:
        ``Ascend``

    Examples:
        >>> block_shape = [2, 2]
        >>> paddings = [[0, 0], [0, 0]]
        >>> space_to_batch_nd = ops.SpaceToBatchND(block_shape, paddings)
        >>> input_x = Tensor(np.array([[[[1, 2], [3, 4]]]]), mindspore.float32)
        >>> output = space_to_batch_nd(input_x)
        >>> print(output)
        [[[[1.]]]
         [[[2.]]]
         [[[3.]]]
         [[[4.]]]]
    """

    @prim_attr_register
    def __init__(self, block_shape, paddings):
        """Initialize SpaceToBatchND"""
        if isinstance(block_shape, int):
            block_shape = (block_shape,) * 2
        self.add_prim_attr("block_shape", block_shape)
        validator.check_value_type('block_shape type', block_shape, [list, tuple], self.name)
        validator.check('block_shape shape', len(np.array(block_shape).shape), '', 1, Rel.EQ, self.name)
        block_rank = len(block_shape)
        validator.check('block_shape length', block_rank, '', 2, Rel.EQ, self.name)
        for elem in block_shape:
            validator.check('block_shape element', elem, '', 1, Rel.GE, self.name)
            validator.check_value_type('block_shape element', elem, [int], self.name)
        self.block_shape = block_shape
        validator.check_value_type('paddings type', paddings, [list, tuple], self.name)
        validator.check('paddings length', len(paddings), '', 2, Rel.EQ, self.name)
        validator.check('paddings shape', np.array(paddings).shape, '', (block_rank, 2), Rel.EQ, self.name)
        for elem in itertools.chain(*paddings):
            validator.check_non_negative_int(elem, 'paddings element', self.name)
            validator.check_value_type('paddings element', elem, [int], self.name)
        self.paddings = paddings

    def infer_dtype(self, x_dtype):
        validator.check_tensor_dtype_valid('input_x', x_dtype, mstype.number_type, self.name)
        return x_dtype

    def infer_shape(self, x_shape):
        x_rank = len(x_shape)
        validator.check_equal_int(x_rank, 4, 'x_shape rank', self.name)
        out_shape = copy.deepcopy(x_shape)
        block_shape_prod = 1
        offset = 2
        for i in range(len(self.block_shape)):
            padded = out_shape[i + offset] + self.paddings[i][0] + self.paddings[i][1]
            if padded % self.block_shape[i] != 0:
                raise ValueError(f"For '{self.name}', the padded size must be divisible by 'block_shape[{i}]', "
                                 f"where padded = input_x_shape[i + 2] + paddings[i][0] + paddings[i][1], "
                                 f"but got input_x_shape[{i + 2}]: {out_shape[i + offset]}, "
                                 f"paddings[{i}][0]: {self.paddings[i][0]} and paddings[{i}][1]: "
                                 f"{self.paddings[i][1]}. Please check the official api documents for "
                                 f"more information about the output tensor.")
            out_shape[i + offset] = padded // self.block_shape[i]
            block_shape_prod = block_shape_prod * self.block_shape[i]
        out_shape[0] *= block_shape_prod
        return out_shape
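# The pad/reshape/transpose steps that `infer_shape` accounts for can be
# sketched in NumPy. The ordering of block entries in the output batch is an
# assumption, checked here only against the doctest above:

```python
import numpy as np

def space_to_batch_nd_sketch(x, block_shape, paddings):
    """NumPy sketch of the documented SpaceToBatchND shape rule for a
    4-D NCHW input with M = 2 spatial dimensions."""
    n, c, _, _ = x.shape
    b0, b1 = block_shape
    x = np.pad(x, ((0, 0), (0, 0), tuple(paddings[0]), tuple(paddings[1])))
    h, w = x.shape[2], x.shape[3]
    x = x.reshape(n, c, h // b0, b0, w // b1, b1)
    # blocks become the leading batch entries (assumed ordering: b0, then b1)
    x = x.transpose(3, 5, 0, 1, 2, 4)
    return x.reshape(b0 * b1 * n, c, h // b0, w // b1)

out = space_to_batch_nd_sketch(np.array([[[[1.0, 2.0], [3.0, 4.0]]]]), [2, 2], [[0, 0], [0, 0]])
```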
  4440. class BatchToSpaceND(PrimitiveWithInfer):
  4441. r"""
  4442. Divides batch dimension with blocks and interleaves these blocks back into spatial dimensions.
  4443. This operation will divide batch dimension N into blocks with block_shape, the output tensor's N dimension
  4444. is the corresponding number of blocks after division. The output tensor's H, W dimension is the product of
  4445. original H, W dimension and block_shape with given amount to crop from dimension, respectively.
  4446. Args:
  4447. block_shape (Union[list(int), tuple(int), int]): The block shape of dividing block with all value greater
  4448. than 1. If `block_shape` is a tuple or list, the length of `block_shape` is M corresponding to the
  4449. number of spatial dimensions. If `block_shape` is an int, the block size of M dimensions are the same,
  4450. equal to `block_shape`. M must be 2.
  4451. crops (Union[list(int), tuple(int)]): The crop value for H and W dimension, containing 2 subtraction list,
  4452. each containing 2 int value.
  4453. All values must be >= 0. crops[i] specifies the crop values for spatial dimension i, which corresponds to
  4454. input dimension i+2. It is required that input_shape[i+2]*block_shape[i] > crops[i][0]+crops[i][1].
  4455. Inputs:
  4456. - **input_x** (Tensor) - The input tensor. It must be a 4-D tensor, dimension 0 must be divisible by
  4457. product of `block_shape`. The data type is float16 or float32.
  4458. Outputs:
  4459. Tensor, the output tensor with the same type as input. Assume input shape is (n, c, h, w) with block_shape
  4460. and crops. The output shape will be (n', c', h', w'), where
  4461. :math:`n' = n//(block\_shape[0]*block\_shape[1])`
  4462. :math:`c' = c`
  4463. :math:`h' = h*block\_shape[0]-crops[0][0]-crops[0][1]`
  4464. :math:`w' = w*block\_shape[1]-crops[1][0]-crops[1][1]`
  4465. Raises:
  4466. TypeError: If `block_shape` is not one of list, tuple, int.
  4467. TypeError: If `crops` is neither list nor tuple.
  4468. ValueError: If length of `block_shape` or `crops` is not equal to 2.
  4469. Supported Platforms:
  4470. ``Ascend``
  4471. Examples:
  4472. >>> block_shape = [2, 2]
  4473. >>> crops = [[0, 0], [0, 0]]
  4474. >>> batch_to_space_nd = ops.BatchToSpaceND(block_shape, crops)
        >>> input_x = Tensor(np.array([[[[1]]], [[[2]]], [[[3]]], [[[4]]]]), mindspore.float32)
        >>> output = batch_to_space_nd(input_x)
        >>> print(output)
        [[[[1. 2.]
           [3. 4.]]]]
    """

    @prim_attr_register
    def __init__(self, block_shape, crops):
        """Initialize BatchToSpaceND"""
        if isinstance(block_shape, int):
            block_shape = (block_shape,) * 2
        self.add_prim_attr("block_shape", block_shape)
        validator.check_value_type('block_shape type', block_shape, [list, tuple], self.name)
        validator.check('block_shape shape', len(np.array(block_shape).shape), '', 1, Rel.EQ, self.name)
        block_rank = len(block_shape)
        validator.check('block_shape length', block_rank, '', 2, Rel.EQ, self.name)
        for elem in block_shape:
            validator.check('block_shape element', elem, '', 1, Rel.GE, self.name)
            validator.check_value_type('block_shape element', elem, [int], self.name)
        self.block_shape = block_shape

        validator.check_value_type('crops type', crops, [list, tuple], self.name)
        validator.check('crops length', len(crops), '', 2, Rel.EQ, self.name)
        validator.check('crops shape', np.array(crops).shape, '', (block_rank, 2), Rel.EQ, self.name)
        for elem in itertools.chain(*crops):
            validator.check_non_negative_int(elem, 'crops element', self.name)
            validator.check_value_type('crops element', elem, [int], self.name)
        self.crops = crops

    def infer_dtype(self, x_dtype):
        validator.check_tensor_dtype_valid('input_x', x_dtype, mstype.number_type, self.name)
        return x_dtype

    def infer_shape(self, x_shape):
        x_rank = len(x_shape)
        validator.check_int(x_rank, 4, Rel.EQ, 'x_shape rank', self.name)
        out_shape = copy.deepcopy(x_shape)

        block_shape_prod = 1
        offset = 2
        for i in range(len(self.block_shape)):
            block_shape_prod = block_shape_prod * self.block_shape[i]
            x_block_prod = out_shape[i + offset] * self.block_shape[i]
            crops_sum = self.crops[i][0] + self.crops[i][1]
            validator.check("x block shape prod", x_block_prod, 'crops sum', crops_sum, Rel.GT, self.name)
            out_shape[i + offset] = x_block_prod - crops_sum

        if out_shape[0] % block_shape_prod != 0:
            raise ValueError(f"For '{self.name}', the 0th dimension of the 'input_x' should be "
                             f"divisible by block_shape_prod, where block_shape_prod = "
                             f"'block_shape[0]' * 'block_shape[1]', "
                             f"but got 0th dimension of the 'input_x': "
                             f"{out_shape[0]} and the block_shape_prod: {block_shape_prod}.")
        out_shape[0] = out_shape[0] // block_shape_prod
        return out_shape
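The shape arithmetic above corresponds to a plain block-to-spatial rearrangement. A minimal NumPy sketch for the 4-D, 2-D-block case (the helper name `batch_to_space_nd_np` and the NCHW restriction are assumptions for illustration, not the primitive's implementation):

```python
import numpy as np

def batch_to_space_nd_np(x, block_shape, crops):
    """NumPy sketch of BatchToSpaceND for 4-D NCHW input with a 2-D block_shape."""
    n, c, h, w = x.shape
    bh, bw = block_shape
    # Split the batch dimension into the block grid, then interleave
    # the block factors into the spatial dimensions.
    y = x.reshape(bh, bw, n // (bh * bw), c, h, w)
    y = y.transpose(2, 3, 4, 0, 5, 1)              # (n', c, h, bh, w, bw)
    y = y.reshape(n // (bh * bw), c, h * bh, w * bw)
    # Crop the spatial dimensions according to crops = [[top, bottom], [left, right]].
    (ct, cb), (cl, cr) = crops
    return y[:, :, ct:h * bh - cb, cl:w * bw - cr]
```

With the docstring's example input of shape (4, 1, 1, 1) and `block_shape=(2, 2)`, this reproduces the (1, 1, 2, 2) output shown above.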
class BroadcastTo(Primitive):
    """
    Broadcasts input tensor to a given shape.

    Input shape can be broadcast to target shape if for each dimension pair they are either equal,
    the input dimension is one, or the target dimension is -1. In case of -1 in target shape, it will
    be replaced by the input shape's value in that dimension.

    When input shape is broadcast to target shape, it starts with the trailing
    dimensions. If there is a -1 in the target shape, the -1 cannot be in a leading,
    non-existing dimension.

    Args:
        shape (tuple): The target shape to broadcast. Can be fully specified, or have -1 in one position
            where it will be substituted by the input tensor's shape in that position, see example.

    Inputs:
        - **input_x** (Tensor) - The input tensor. The data type should be one of the following types:
          float16, float32, int32, int8, uint8.
          The shape is :math:`(N, *)` where :math:`*` means any number of additional dimensions.

    Outputs:
        Tensor, with the given `shape` and the same data type as `input_x`.

    Raises:
        TypeError: If `shape` is not a tuple.
        ValueError: If the target and input shapes are incompatible, or if a -1 in the target shape
            is in an invalid location.

    Supported Platforms:
        ``Ascend`` ``GPU`` ``CPU``

    Examples:
        >>> shape = (2, 3)
        >>> input_x = Tensor(np.array([1, 2, 3]).astype(np.float32))
        >>> broadcast_to = ops.BroadcastTo(shape)
        >>> output = broadcast_to(input_x)
        >>> print(output)
        [[1. 2. 3.]
         [1. 2. 3.]]
        >>> shape = (-1, 2)
        >>> input_x = Tensor(np.array([[1], [2]]).astype(np.float32))
        >>> broadcast_to = ops.BroadcastTo(shape)
        >>> output = broadcast_to(input_x)
        >>> print(output)
        [[1. 1.]
         [2. 2.]]
    """

    @prim_attr_register
    def __init__(self, shape):
        """Initialize BroadcastTo"""
        validator.check_value_type("shape", shape, (tuple), self.name)
        validator.check("shape length", len(shape), "", 0, Rel.GT, self.name)
        for ix, i in enumerate(shape):
            validator.check_value_type('target shape index -> ' + str(ix), i, [int], self.name)
            validator.check("shape element", i, "shape element min limit", -1, Rel.GE, self.name)
        self.shape = shape
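The -1 substitution described above can be sketched on top of NumPy broadcasting. This is an illustration only (the helper name `broadcast_to_np` is assumed, and it handles -1 only in positions aligned with an existing input dimension, mirroring the "no leading -1" restriction):

```python
import numpy as np

def broadcast_to_np(x, shape):
    """Sketch of BroadcastTo semantics: resolve -1 entries, then broadcast."""
    # A -1 entry is replaced by the input's size in the aligned (trailing) dimension.
    offset = len(shape) - x.ndim
    resolved = tuple(x.shape[i - offset] if s == -1 else s
                     for i, s in enumerate(shape))
    return np.broadcast_to(x, resolved)
```

For `shape=(-1, 2)` and a (2, 1) input this resolves the target to (2, 2), matching the second docstring example.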
class Meshgrid(PrimitiveWithInfer):
    """
    Generates coordinate matrices from given coordinate tensors.

    Given N one-dimensional coordinate tensors, returns a tuple outputs of N N-D
    coordinate tensors for evaluating expressions on an N-D grid.

    Args:
        indexing ('xy', 'ij', optional): Cartesian ('xy', default) or
            matrix ('ij') indexing of output. In the 2-D case with
            inputs of length `M` and `N`, the outputs are of shape `(N, M)`
            for 'xy' indexing and `(M, N)` for 'ij' indexing. In the 3-D
            case with inputs of length `M`, `N` and `P`, outputs are of shape
            `(N, M, P)` for 'xy' indexing and `(M, N, P)` for 'ij' indexing.

    Inputs:
        - **input** (Union[tuple]) - A tuple of N 1-D Tensor objects.
          The length of input should be greater than 1. The data type is Number.

    Outputs:
        Tensors, a tuple of N N-D Tensor objects. The data type is the same as the inputs.

    Raises:
        TypeError: If `indexing` is not a str or `input` is not a tuple.
        ValueError: If `indexing` is neither 'xy' nor 'ij'.

    Supported Platforms:
        ``Ascend`` ``GPU``

    Examples:
        >>> x = Tensor(np.array([1, 2, 3, 4]).astype(np.int32))
        >>> y = Tensor(np.array([5, 6, 7]).astype(np.int32))
        >>> z = Tensor(np.array([8, 9, 0, 1, 2]).astype(np.int32))
        >>> inputs = (x, y, z)
        >>> meshgrid = ops.Meshgrid(indexing="xy")
        >>> output = meshgrid(inputs)
        >>> print(output)
        (Tensor(shape=[3, 4, 5], dtype=Int32, value=
         [[[1, 1, 1, 1, 1],
           [2, 2, 2, 2, 2],
           [3, 3, 3, 3, 3],
           [4, 4, 4, 4, 4]],
          [[1, 1, 1, 1, 1],
           [2, 2, 2, 2, 2],
           [3, 3, 3, 3, 3],
           [4, 4, 4, 4, 4]],
          [[1, 1, 1, 1, 1],
           [2, 2, 2, 2, 2],
           [3, 3, 3, 3, 3],
           [4, 4, 4, 4, 4]]]),
         Tensor(shape=[3, 4, 5], dtype=Int32, value=
         [[[5, 5, 5, 5, 5],
           [5, 5, 5, 5, 5],
           [5, 5, 5, 5, 5],
           [5, 5, 5, 5, 5]],
          [[6, 6, 6, 6, 6],
           [6, 6, 6, 6, 6],
           [6, 6, 6, 6, 6],
           [6, 6, 6, 6, 6]],
          [[7, 7, 7, 7, 7],
           [7, 7, 7, 7, 7],
           [7, 7, 7, 7, 7],
           [7, 7, 7, 7, 7]]]),
         Tensor(shape=[3, 4, 5], dtype=Int32, value=
         [[[8, 9, 0, 1, 2],
           [8, 9, 0, 1, 2],
           [8, 9, 0, 1, 2],
           [8, 9, 0, 1, 2]],
          [[8, 9, 0, 1, 2],
           [8, 9, 0, 1, 2],
           [8, 9, 0, 1, 2],
           [8, 9, 0, 1, 2]],
          [[8, 9, 0, 1, 2],
           [8, 9, 0, 1, 2],
           [8, 9, 0, 1, 2],
           [8, 9, 0, 1, 2]]]))
    """

    @prim_attr_register
    def __init__(self, indexing="xy"):
        """Initialize Meshgrid."""
        validator.check_value_type("indexing", indexing, (str), self.name)
        validator.check_string(indexing.lower(), ["xy", "ij"], "indexing", self.name)
        self.indexing = indexing

    def infer_shape(self, x_shape):
        validator.check_value_type("shape", x_shape, [tuple], self.name)
        validator.check_int(len(x_shape), 2, Rel.GE, "len of input", self.name)
        n = len(x_shape)
        shape_0 = []
        for s in x_shape:
            validator.check_int(len(s), 1, Rel.EQ, 'each input rank', self.name)
            shape_0.append(s[0])
        if self.indexing == "xy":
            shape_0[0], shape_0[1] = shape_0[1], shape_0[0]
        out_shape = tuple(tuple(shape_0) for _ in range(n))
        return out_shape

    def infer_dtype(self, x_type):
        validator.check_subclass("input[0]", x_type[0], mstype.tensor, self.name)
        n = len(x_type)
        for i in range(1, n):
            validator.check('x_type[%d]' % i, x_type[i], 'base', x_type[0], Rel.EQ, self.name, TypeError)
        return x_type
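The 'xy' shape-swap in `infer_shape` is the same convention NumPy uses; a quick equivalence sketch with `np.meshgrid` (NumPy is used here only to illustrate the expected shapes, under the assumption that the two conventions agree):

```python
import numpy as np

# Three 1-D coordinate arrays, as in the docstring example.
x = np.array([1, 2, 3, 4], dtype=np.int32)
y = np.array([5, 6, 7], dtype=np.int32)
z = np.array([8, 9, 0, 1, 2], dtype=np.int32)

# 'xy' indexing swaps the first two output dimensions:
# each grid has shape (len(y), len(x), len(z)) = (3, 4, 5).
grids = np.meshgrid(x, y, z, indexing="xy")
```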
class InplaceUpdate(PrimitiveWithInfer):
    r"""
    Updates specified rows with values in `v`.

    Args:
        indices (Union[int, tuple]): Indices into the left-most dimension of `x`, determining which rows of `x`
            to update with `v`. It is an int or tuple, whose values are in [0, the first dimension size of `x`).

    Inputs:
        - **x** (Tensor) - The tensor to be updated in place. It can be one of the following data types:
          float32, float16 and int32.
        - **v** (Tensor) - A tensor with the same type as `x` and the same dimension sizes as `x` except
          the first dimension, which must be the same as the size of `indices`.

    Outputs:
        Tensor, with the same type and shape as the input `x`.

    Raises:
        TypeError: If `indices` is neither int nor tuple.
        TypeError: If `indices` is a tuple and its element is not an int.

    Supported Platforms:
        ``Ascend``

    Examples:
        >>> indices = (0, 1)
        >>> x = Tensor(np.array([[1, 2], [3, 4], [5, 6]]), mindspore.float32)
        >>> v = Tensor(np.array([[0.5, 1.0], [1.0, 1.5]]), mindspore.float32)
        >>> inplace_update = ops.InplaceUpdate(indices)
        >>> output = inplace_update(x, v)
        >>> print(output)
        [[0.5 1. ]
         [1.  1.5]
         [5.  6. ]]
    """

    @prim_attr_register
    def __init__(self, indices):
        """Initialize InplaceUpdate"""
        self.init_prim_io_names(inputs=['x', 'v'], outputs=['y'])
        self.indices = indices
        validator.check_value_type("indices", indices, [int, tuple], self.name)
        if isinstance(indices, int):
            self.indices = (indices,)
        for item in self.indices:
            validator.check_value_type("item of indices", item, [int], self.name)

    def infer_dtype(self, x_dtype, v_dtype):
        args = {'x': x_dtype, 'v': v_dtype}
        valid_type = [mstype.int32, mstype.float16, mstype.float32]
        validator.check_tensors_dtypes_same_and_valid(args, valid_type, self.name)
        return x_dtype

    def infer_shape(self, x_shape, v_shape):
        validator.check("x", len(x_shape), "v", len(v_shape), Rel.EQ, self.name)
        validator.check("size of indices", len(self.indices), "v's first dimension", v_shape[0],
                        Rel.EQ, self.name)
        for i in self.indices:
            if i < 0 or i >= x_shape[0]:
                raise ValueError(f"For '{self.name}', the value of indices must be in [0, {x_shape[0]}), "
                                 f"but got {i}.")
        x_rank = len(x_shape)
        for idx in range(x_rank)[1:]:
            validator.check('v dim %d' % idx, v_shape[idx], "x dim %d" % idx, x_shape[idx], Rel.EQ, self.name)
        return x_shape
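The row-replacement semantics can be sketched in a few lines of NumPy (the helper name `inplace_update_np` is an assumption; unlike the primitive, the sketch returns a fresh array rather than mutating in place):

```python
import numpy as np

def inplace_update_np(x, v, indices):
    """Sketch of InplaceUpdate: copy x, then overwrite the selected rows with v."""
    indices = (indices,) if isinstance(indices, int) else indices
    y = x.copy()
    y[list(indices)] = v  # row i of v replaces row indices[i] of x
    return y
```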
class ReverseSequence(PrimitiveWithInfer):
    """
    Reverses variable length slices.

    Args:
        seq_dim (int): The dimension where reversal is performed. Required.
        batch_dim (int): The input is sliced in this dimension. Default: 0.

    Inputs:
        - **x** (Tensor) - The input to reverse, supporting all number types including bool.
        - **seq_lengths** (Tensor) - Must be a 1-D vector with int32 or int64 types.

    Outputs:
        Reversed tensor with the same shape and data type as input.

    Raises:
        TypeError: If `seq_dim` or `batch_dim` is not an int.

    Supported Platforms:
        ``Ascend`` ``GPU``

    Examples:
        >>> x = Tensor(np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]), mindspore.float32)
        >>> seq_lengths = Tensor(np.array([1, 2, 3]))
        >>> reverse_sequence = ops.ReverseSequence(seq_dim=1)
        >>> output = reverse_sequence(x, seq_lengths)
        >>> print(output)
        [[1. 2. 3.]
         [5. 4. 6.]
         [9. 8. 7.]]
        >>> x = Tensor(np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]), mindspore.float32)
        >>> seq_lengths = Tensor(np.array([1, 2, 3]))
        >>> reverse_sequence = ops.ReverseSequence(seq_dim=0, batch_dim=1)
        >>> output = reverse_sequence(x, seq_lengths)
        >>> print(output)
        [[1. 5. 9.]
         [4. 2. 6.]
         [7. 8. 3.]]
        >>> x = Tensor(np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]), mindspore.float32)
        >>> seq_lengths = Tensor(np.array([2, 2, 3]))
        >>> reverse_sequence = ops.ReverseSequence(seq_dim=1)
        >>> output = reverse_sequence(x, seq_lengths)
        >>> print(output)
        [[2. 1. 3.]
         [5. 4. 6.]
         [9. 8. 7.]]
        >>> x = Tensor(np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]), mindspore.float32)
        >>> seq_lengths = Tensor(np.array([3, 2, 3]))
        >>> reverse_sequence = ops.ReverseSequence(seq_dim=1)
        >>> output = reverse_sequence(x, seq_lengths)
        >>> print(output)
        [[3. 2. 1.]
         [5. 4. 6.]
         [9. 8. 7.]]
        >>> x = Tensor(np.array([[1, 2, 3, 4], [5, 6, 7, 8]]), mindspore.float32)
        >>> seq_lengths = Tensor(np.array([4, 4]))
        >>> reverse_sequence = ops.ReverseSequence(seq_dim=1)
        >>> output = reverse_sequence(x, seq_lengths)
        >>> print(output)
        [[4. 3. 2. 1.]
         [8. 7. 6. 5.]]
    """

    @prim_attr_register
    def __init__(self, seq_dim, batch_dim=0):
        """Initialize ReverseSequence"""
        self.init_prim_io_names(inputs=['x', 'seq_lengths'], outputs=['y'])
        validator.check_value_type("seq_dim", seq_dim, [int], self.name)
        self.seq_dim_ = seq_dim
        validator.check_value_type("batch_dim", batch_dim, [int], self.name)
        self.batch_dim_ = batch_dim

    def infer_shape(self, x, seq_lengths):
        validator.check("seq_dim", self.seq_dim_, "x rank", len(x), Rel.LE, self.name)
        validator.check("batch_dim", self.batch_dim_, "x rank", len(x), Rel.LE, self.name)
        validator.check("batch_dim", self.batch_dim_, "seq_dim", self.seq_dim_, Rel.NE, self.name)
        validator.check("seq_lengths rank", len(seq_lengths), "expected", 1, Rel.EQ, self.name)
        validator.check("seq_lengths vector size", seq_lengths[0],
                        "input size along batch_dim", x[self.batch_dim_], Rel.EQ, self.name)
        return x

    def infer_dtype(self, x, seq_lengths):
        validator.check_tensor_dtype_valid("x_dtype", x, mstype.number_type + (mstype.bool_,), self.name)
        validator.check_tensor_dtype_valid("seq_lengths_dtype", seq_lengths, [mstype.int32, mstype.int64], self.name)
        return x
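The per-slice reversal the examples show can be sketched in NumPy (the helper name `reverse_sequence_np` is an assumption; it reverses the first `seq_lengths[i]` elements along `seq_dim` for each slice `i` taken along `batch_dim`):

```python
import numpy as np

def reverse_sequence_np(x, seq_lengths, seq_dim, batch_dim=0):
    """Sketch of ReverseSequence semantics on a NumPy array."""
    y = x.copy()
    # Move batch_dim and seq_dim to the front so each slice is xt[i, :n].
    xt = np.moveaxis(y, (batch_dim, seq_dim), (0, 1))
    for i, n in enumerate(seq_lengths):
        xt[i, :n] = xt[i, :n][::-1].copy()  # copy avoids overlapping-view writes
    return y
```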
class EditDistance(PrimitiveWithInfer):
    r"""
    Computes the Levenshtein Edit Distance. It is used to measure the similarity of two sequences. The inputs are
    variable-length sequences provided by SparseTensors (hypothesis_indices, hypothesis_values, hypothesis_shape)
    and (truth_indices, truth_values, truth_shape).

    .. math::

        \operatorname{lev}_{a, b}(i, j)=\left\{\begin{array}{ll}
        \max (i, j) \qquad \qquad \qquad \qquad \qquad \quad \ \text { if } \min (i, j)=0 \\
        \min \left\{\begin{array}{ll}
        \operatorname{lev}_{a, b}(i-1, j)+1 & \\
        \operatorname{lev}_{a, b}(i, j-1)+1 & \text { otherwise. } \\
        \operatorname{lev}_{a, b}(i-1, j-1)+1_{\left(a_{i} \neq b_{j}\right)}
        \end{array}\right. &
        \end{array}\right.

    Where :math:`a` indicates the hypothesis and :math:`b` indicates the truth. For ease of understanding,
    :math:`i` and :math:`j` here may be considered as lengths of :math:`a` and :math:`b`.

    Args:
        normalize (bool): If true, edit distances are normalized by length of truth. Default: True.

    Inputs:
        - **hypothesis_indices** (Tensor) - The indices of the hypothesis list SparseTensor. With int64 data type.
          The shape of tensor is :math:`(N, R)`.
        - **hypothesis_values** (Tensor) - The values of the hypothesis list SparseTensor. With float32 data type.
          Must be a 1-D vector with length of N.
        - **hypothesis_shape** (Tensor) - The shape of the hypothesis list SparseTensor.
          Must be an R-length vector with int64 data type. Only constant value is allowed.
        - **truth_indices** (Tensor) - The indices of the truth list SparseTensor. With int64 data type.
          The shape of tensor is :math:`(M, R)`.
        - **truth_values** (Tensor) - The values of the truth list SparseTensor. Must be a 1-D vector with
          length of M. With float32 data type.
        - **truth_shape** (Tensor) - The shape of the truth list SparseTensor.
          Must be an R-length vector with int64 data type. Only constant value is allowed.

    Outputs:
        Tensor, a dense tensor with rank `R-1` and float32 data type.

    Raises:
        TypeError: If `normalize` is not a bool.

    Supported Platforms:
        ``Ascend``

    Examples:
        >>> import numpy as np
        >>> from mindspore import context
        >>> from mindspore import Tensor
        >>> import mindspore.nn as nn
        >>> import mindspore.ops as ops
        >>> class EditDistance(nn.Cell):
        ...     def __init__(self, hypothesis_shape, truth_shape, normalize=True):
        ...         super(EditDistance, self).__init__()
        ...         self.edit_distance = ops.EditDistance(normalize)
        ...         self.hypothesis_shape = hypothesis_shape
        ...         self.truth_shape = truth_shape
        ...
        ...     def construct(self, hypothesis_indices, hypothesis_values, truth_indices, truth_values):
        ...         return self.edit_distance(hypothesis_indices, hypothesis_values, self.hypothesis_shape,
        ...                                   truth_indices, truth_values, self.truth_shape)
        ...
        >>> hypothesis_indices = Tensor(np.array([[0, 0, 0], [1, 0, 1], [1, 1, 1]]).astype(np.int64))
        >>> hypothesis_values = Tensor(np.array([1, 2, 3]).astype(np.float32))
        >>> hypothesis_shape = Tensor(np.array([1, 1, 2]).astype(np.int64))
        >>> truth_indices = Tensor(np.array([[0, 1, 0], [0, 0, 1], [1, 1, 0], [1, 0, 1]]).astype(np.int64))
        >>> truth_values = Tensor(np.array([1, 3, 2, 1]).astype(np.float32))
        >>> truth_shape = Tensor(np.array([2, 2, 2]).astype(np.int64))
        >>> edit_distance = EditDistance(hypothesis_shape, truth_shape)
        >>> output = edit_distance(hypothesis_indices, hypothesis_values, truth_indices, truth_values)
        >>> print(output)
        [[1. 1.]
         [1. 1.]]
    """

    @prim_attr_register
    def __init__(self, normalize=True):
        """Initialize EditDistance"""
        self.normalize = validator.check_value_type("normalize", normalize, [bool], self.name)
        self.set_const_input_indexes([2, 5])

    def __infer__(self, h_indices, h_values, h_shape, truth_indices, truth_values, truth_shape):
        validator.check_valid_input('hypothesis_shape', h_shape['value'], self.name)
        validator.check_valid_input('truth_shape', truth_shape['value'], self.name)
        args_int = {"hypothesis_indices": h_indices['dtype'], "hypothesis_shape": h_shape['dtype'],
                    "truth_indices": truth_indices['dtype'], "truth_shape": truth_shape['dtype']}
        validator.check_tensors_dtypes_same_and_valid(args_int, [mstype.int64], self.name)
        args = {"hypothesis_values": h_values['dtype'], "truth_values": truth_values['dtype']}
        validator.check_tensors_dtypes_same_and_valid(args, mstype.number_type, self.name)

        hypothesis_indices_shp, truth_indices_shp = h_indices['shape'], truth_indices['shape']
        validator.check("hypothesis_indices rank", len(hypothesis_indices_shp), "expected", 2, Rel.EQ, self.name)
        validator.check("truth_indices rank", len(truth_indices_shp), "expected", 2, Rel.EQ, self.name)
        validator.check("hypothesis_values rank", len(h_values['shape']), "expected", 1, Rel.EQ, self.name)
        validator.check("hypothesis_shape rank", len(h_shape['shape']), "expected", 1, Rel.EQ, self.name)
        validator.check("truth_values rank", len(truth_values['shape']), "expected", 1, Rel.EQ, self.name)
        validator.check("truth_shape rank", len(truth_shape['shape']), "expected", 1, Rel.EQ, self.name)
        validator.check("hypothesis_values shape", h_values['shape'][0],
                        "hypothesis_indices shape[0]", hypothesis_indices_shp[0], Rel.EQ, self.name)
        validator.check("hypothesis_shape", h_shape['shape'][0],
                        "hypothesis_indices shape[1]", hypothesis_indices_shp[1], Rel.EQ, self.name)
        validator.check("truth_values shape", truth_values['shape'][0],
                        "truth_indices shape[0]", truth_indices_shp[0], Rel.EQ, self.name)
        validator.check("hypothesis_shape", h_shape['shape'][0],
                        "truth_shape", truth_shape['shape'][0], Rel.EQ, self.name)
        hypothesis_shape_v = h_shape['value'].asnumpy()
        truth_shape_v = truth_shape['value'].asnumpy()
        out_shape_rank = len(hypothesis_shape_v) - 1
        out_shape = []
        for i in range(out_shape_rank):
            out_shape.append(max(hypothesis_shape_v[i], truth_shape_v[i]))

        return {'shape': tuple(out_shape),
                'dtype': mstype.tensor_type(mstype.float32),
                'value': None}
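The recurrence in the formula above is the classic Levenshtein dynamic program. A plain dense sketch of that recurrence, without the SparseTensor plumbing (the helper name `levenshtein` is an assumption for illustration):

```python
import numpy as np

def levenshtein(a, b):
    """Dense DP for the lev(i, j) recurrence: d[i, j] holds lev over a[:i], b[:j]."""
    m, n = len(a), len(b)
    d = np.zeros((m + 1, n + 1), dtype=np.int32)
    d[:, 0] = np.arange(m + 1)  # base case: min(i, j) == 0 gives max(i, j)
    d[0, :] = np.arange(n + 1)
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1  # the 1_(a_i != b_j) indicator
            d[i, j] = min(d[i - 1, j] + 1,       # deletion
                          d[i, j - 1] + 1,       # insertion
                          d[i - 1, j - 1] + cost)  # substitution / match
    return int(d[m, n])
```

With `normalize=True`, the primitive divides each such distance by the truth sequence length.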
class TransShape(PrimitiveWithInfer):
    """
    Transforms the shape of the input tensor to the target shape.

    Inputs:
        - **input_x** (Tensor) - An input tensor.
        - **out_shape** (tuple[int]) - The shape of the output data.

    Outputs:
        Tensor, a tensor whose data type is the same as `input_x`, and whose shape is the same as `out_shape`.
    """

    @prim_attr_register
    def __init__(self):
        """Initialize TransShape."""
        self.__setattr_flag__ = True

    def __infer__(self, x, shape):
        shp = shape['value']
        dtype = x['dtype']
        validator.check_tensor_dtype_valid('x', dtype, mstype.number_type + (mstype.bool_,), self.name)
        self.add_prim_attr('out_shape', tuple(shp))
        return {'shape': shp,
                'dtype': dtype,
                'value': None}
class Sort(Primitive):
    """
    Sorts the elements of the input tensor along a given dimension in ascending order by value.

    Args:
        axis (int): The dimension to sort along. Default: -1.
        descending (bool): Controls the sorting order. If descending is True then the elements
            are sorted in descending order by value. Default: False.

    .. warning::
        Currently, only the data type of float16 is supported. If float32 is used, it may cause loss
        of accuracy.

    Inputs:
        - **x** (Tensor) - The input to sort, with float16 or float32 data type.
          The shape is :math:`(N, *)` where :math:`*` means any number of additional dimensions.

    Outputs:
        - **y1** (Tensor) - A tensor whose values are the sorted values, with the same shape and data type as input.
        - **y2** (Tensor) - The indices of the elements in the original input tensor. Data type is int32.

    Raises:
        TypeError: If `axis` is not an int.
        TypeError: If `descending` is not a bool.
        TypeError: If dtype of `x` is neither float16 nor float32.

    Supported Platforms:
        ``Ascend`` ``GPU`` ``CPU``

    Examples:
        >>> x = Tensor(np.array([[8, 2, 1], [5, 9, 3], [4, 6, 7]]), mindspore.float16)
        >>> sort = ops.Sort()
        >>> output = sort(x)
        >>> print(output)
        (Tensor(shape=[3, 3], dtype=Float16, value=
        [[ 1.0000e+00,  2.0000e+00,  8.0000e+00],
         [ 3.0000e+00,  5.0000e+00,  9.0000e+00],
         [ 4.0000e+00,  6.0000e+00,  7.0000e+00]]), Tensor(shape=[3, 3], dtype=Int32, value=
        [[2, 1, 0],
         [2, 0, 1],
         [0, 1, 2]]))
    """

    @prim_attr_register
    def __init__(self, axis=-1, descending=False):
        """Initialize Sort"""
        self.axis = validator.check_value_type("axis", axis, [int], self.name)
        self.descending = validator.check_value_type("descending", descending, [bool], self.name)
        self.init_prim_io_names(inputs=['x'], outputs=['y1', 'y2'])
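The values/indices pair this primitive returns can be sketched with NumPy's `argsort` and `take_along_axis` (the helper name `sort_np` is an assumption; tie-breaking order may differ from the device kernels):

```python
import numpy as np

def sort_np(x, axis=-1, descending=False):
    """Sketch of Sort: returns (sorted values, original int32 indices)."""
    idx = np.argsort(x, axis=axis, kind="stable")
    if descending:
        idx = np.flip(idx, axis=axis)
    return np.take_along_axis(x, idx, axis=axis), idx.astype(np.int32)
```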
class EmbeddingLookup(PrimitiveWithCheck):
    """
    Returns a slice of the input tensor based on the specified indices.

    This Primitive has similar functionality to GatherV2 operating on `axis = 0`, but has one more input:
    `offset`.

    Inputs:
        - **input_params** (Tensor) - The shape of tensor is :math:`(x_1, x_2, ..., x_R)`.
          This represents a Tensor slice, instead of the entire Tensor. Currently, the dimension is restricted to be 2.
        - **input_indices** (Tensor) - The shape of tensor is :math:`(y_1, y_2, ..., y_S)`.
          Specifies the indices of elements of the original Tensor. Values can be out of range of `input_params`,
          and the exceeding part will be filled with 0 in the output. Negative values are not supported and the
          result is undefined if values are negative. The data type should be int32 or int64.
        - **offset** (int) - Specifies the offset value of this `input_params` slice. Thus the real indices
          are equal to `input_indices` minus `offset`.

    Outputs:
        Tensor, the shape of tensor is :math:`(z_1, z_2, ..., z_N)`. The data type is the same as `input_params`.

    Raises:
        TypeError: If dtype of `input_indices` is not int.
        ValueError: If length of shape of `input_params` is greater than 2.

    Supported Platforms:
        ``Ascend`` ``CPU`` ``GPU``

    Examples:
        >>> input_params = Tensor(np.array([[8, 9], [10, 11], [12, 13], [14, 15]]), mindspore.float32)
        >>> input_indices = Tensor(np.array([[5, 2], [8, 5]]), mindspore.int32)
        >>> offset = 4
        >>> output = ops.EmbeddingLookup()(input_params, input_indices, offset)
        >>> print(output)
        [[[10. 11.]
          [ 0.  0.]]
         [[ 0.  0.]
          [10. 11.]]]
    """

    @prim_attr_register
    def __init__(self):
        """Initialize EmbeddingLookup."""
        self.__setattr_flag__ = True
        self.init_prim_io_names(inputs=['params', 'indices', 'offset'],
                                outputs=['output'])

    def __check__(self, params, indices, offset):
        validator.check_subclass("params", params['dtype'], mstype.tensor, self.name)
        validator.check_tensor_dtype_valid("indices", indices['dtype'], mstype.int_type, self.name)
        validator.check_subclass("offset", offset['dtype'], mstype.int_, self.name)
        indices_shp = indices['shape']
        if not indices_shp:
            raise ValueError(f"For '{self.name}', the dimension of 'input_indices' should not "
                             f"be zero, but got {len(indices_shp)}.")
        params_shp = params['shape']
        if len(params_shp) > 2:
            raise ValueError(f"For '{self.name}', the dimension of 'input_params' must be less than "
                             f"or equal to 2, but got {len(params_shp)}.")
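The offset-then-gather-with-zero-fill semantics can be sketched in NumPy (the helper name `embedding_lookup_np` is an assumption; it zero-fills out-of-range indices the same way the docstring example describes):

```python
import numpy as np

def embedding_lookup_np(params, indices, offset):
    """Sketch of EmbeddingLookup: gather rows at (indices - offset),
    filling out-of-range positions with zeros."""
    real = indices - offset
    valid = (real >= 0) & (real < params.shape[0])
    out = np.zeros(indices.shape + params.shape[1:], dtype=params.dtype)
    out[valid] = params[real[valid]]
    return out
```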
class GatherD(Primitive):
    """
    Gathers values along an axis specified by dim.

    For a 3-D tensor, the output is:

    .. code-block::

        output[i][j][k] = x[index[i][j][k]][j][k]  # if dim == 0
        output[i][j][k] = x[i][index[i][j][k]][k]  # if dim == 1
        output[i][j][k] = x[i][j][index[i][j][k]]  # if dim == 2

    If `x` is an n-D tensor with shape :math:`(z_0, z_1, ..., z_i, ..., z_{n-1})` and `dim` = i,
    the `index` must be an n-D tensor with shape :math:`(z_0, z_1, ..., y, ..., z_{n-1})`
    where `y` >= 1 and the output will have the same shape as `index`.

    Inputs:
        - **x** (Tensor) - The source tensor.
          The shape is :math:`(N, *)` where :math:`*` means any number of additional dimensions.
        - **dim** (int) - The axis along which to index. It must be int32 or int64. Only constant value is allowed.
        - **index** (Tensor) - The indices of elements to gather. It can be one of the following data types:
          int32, int64. The value range of each index element is `[-x.shape[dim], x.shape[dim])`.

    Outputs:
        Tensor, the shape of tensor is :math:`(z_1, z_2, ..., z_N)`, and has the same data type as `x`.

    Raises:
        TypeError: If dtype of `dim` or `index` is neither int32 nor int64.
        ValueError: If length of shape of `x` is not equal to length of shape of `index`.

    Supported Platforms:
        ``Ascend`` ``GPU`` ``CPU``

    Examples:
        >>> x = Tensor(np.array([[1, 2], [3, 4]]), mindspore.int32)
        >>> index = Tensor(np.array([[0, 0], [1, 0]]), mindspore.int32)
        >>> dim = 1
        >>> output = ops.GatherD()(x, dim, index)
        >>> print(output)
        [[1 1]
         [4 3]]
    """

    @prim_attr_register
    def __init__(self):
        """Initialize GatherD"""
        self.init_prim_io_names(inputs=['x', 'dim', 'index'], outputs=['output'])
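The indexing rule in the code-block above is the same one NumPy's `take_along_axis` implements; a sketch of the docstring example (illustration only, not the kernel implementation):

```python
import numpy as np

# output[i][j] = x[i][index[i][j]] for dim == 1, per the rule above.
x = np.array([[1, 2], [3, 4]], dtype=np.int32)
index = np.array([[0, 0], [1, 0]])
out = np.take_along_axis(x, index, axis=1)
```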
class Identity(PrimitiveWithInfer):
    """
    Returns a Tensor with the same shape and contents as input.

    Inputs:
        - **x** (Tensor) - The shape of tensor is :math:`(x_1, x_2, ..., x_R)`. The data type is Number.

    Outputs:
        Tensor, the shape of tensor and the data type are the same as `x`, :math:`(x_1, x_2, ..., x_R)`.

    Raises:
        TypeError: If `x` is not a Tensor.

    Supported Platforms:
        ``Ascend`` ``CPU`` ``GPU``

    Examples:
        >>> x = Tensor(np.array([1, 2, 3, 4]), mindspore.int64)
        >>> output = ops.Identity()(x)
        >>> print(output)
        [1 2 3 4]
    """

    # Side effect is identity with input.
    side_effect_propagate = 1

    @prim_attr_register
    def __init__(self):
        """Initialize Identity"""
        self.add_prim_attr('side_effect_propagate', 1)

    def __infer__(self, x):
        validator.check_subclass("x", x['dtype'], mstype.tensor, self.name)
        validator.check_tensor_dtype_valid('x', x['dtype'], mstype.number_type + (mstype.bool_,), self.name)
        out = {'shape': x['shape'],
               'dtype': x['dtype'],
               'value': None}
        return out
class Range(PrimitiveWithCheck):
    r"""
    Creates a sequence of numbers that begins at `start` and extends by increments of
    `delta` up to but not including `limit`.

    The types of all 3 inputs must be the same. The type of the resulting tensor is
    the same as the type of the inputs.

    Args:
        maxlen (int): Memory that can fit `maxlen` many elements
            will be allocated for the output. Optional, must be positive, defaults to 1000000.
            If the output has more than `maxlen` elements, a runtime error
            will occur.

    Inputs:
        - **start** (Tensor) - A scalar Tensor. The first number in the sequence. Must have
          type: int32 or float32.
        - **limit** (Tensor) - A scalar Tensor. Upper limit of the sequence, exclusive. Must
          have type: int32 or float32.
        - **delta** (Tensor) - A scalar Tensor. Number that increments `start`. Must have
          type: int32 or float32.

    Outputs:
        A 1-D Tensor, with the same type as the inputs.

    Supported Platforms:
        ``GPU``

    Examples:
        >>> start = Tensor(0, mstype.int32)
        >>> limit = Tensor(10, mstype.int32)
        >>> delta = Tensor(4, mstype.int32)
        >>> output = ops.Range()(start, limit, delta)
        >>> print(output)
        [0 4 8]
    """

    @prim_attr_register
    def __init__(self, maxlen=1000000):
        """Initialize Range"""
        self.init_prim_io_names(inputs=['start', 'limit', 'delta'], outputs=['output'])
        validator.check_value_type("maxlen", maxlen, [int], self.name)
        validator.check_positive_int(maxlen, "maxlen", self.name)
        self.maxlen = maxlen
        self.add_prim_attr('maxlen', maxlen)

    def check_shape(self, start_shape, limit_shape, delta_shape):
        validator.check("start_shape", len(start_shape), "", 0, Rel.EQ, self.name)
        validator.check("limit_shape", len(limit_shape), "", 0, Rel.EQ, self.name)
        validator.check("delta_shape", len(delta_shape), "", 0, Rel.EQ, self.name)

    def check_dtype(self, start_dtype, limit_dtype, delta_dtype):
        valid_dtypes = [mstype.int32, mstype.float32]
        inputs = {"start": start_dtype, "limit": limit_dtype, "delta": delta_dtype}
        validator.check_tensors_dtypes_same_and_valid(inputs, valid_dtypes, self.name)

    def infer_value(self, start_value, limit_value, delta_value):
        """Infer the value of the output for Range with constant inputs."""
        if start_value is not None and limit_value is not None and delta_value is not None:
            start = start_value.asnumpy().item()
            limit = limit_value.asnumpy().item()
            delta = delta_value.asnumpy().item()
            return Tensor(np.arange(start, limit, delta), dtype=start_value.dtype)
        return None
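`infer_value` above already leans on `np.arange`; the runtime `maxlen` guard can be sketched the same way (the helper name `range_np` is an assumption for illustration):

```python
import numpy as np

def range_np(start, limit, delta, maxlen=1000000):
    """Sketch of Range semantics with the maxlen guard, via np.arange."""
    out = np.arange(start, limit, delta)
    if out.size > maxlen:
        raise RuntimeError(f"Range produced {out.size} elements, "
                           f"more than maxlen={maxlen}.")
    return out
```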


class MaskedFill(Primitive):
    """
    Fills elements of the input tensor with `value` where `mask` is True.

    The shapes of `input` and `mask` need to be the same or broadcastable.

    Inputs:
        - **input** (Tensor) - The source tensor whose data type is one of float16, float32, int8, int32.
        - **mask** (Tensor[bool]) - The boolean mask.
        - **value** (Union[float, Tensor]) - The value to fill in with, which only supports
          a 0-dimensional tensor or a float number.

    Outputs:
        Tensor, has the same type and shape as `input`.

    Raises:
        TypeError: If `input` or `mask` is not a tensor.
        TypeError: If `value` is neither a float number nor a tensor.
        TypeError: If dtype of `input` or `value` is not one of float16, float32, int8, int32.
        TypeError: If dtype of `value` is different from that of `input`.
        TypeError: If dtype of `mask` is not bool.
        ValueError: If the shapes of `input` and `mask` could not be broadcast.

    Supported Platforms:
        ``Ascend``

    Examples:
        >>> input = Tensor(np.array([1., 2., 3., 4.]), mindspore.float32)
        >>> mask = Tensor(np.array([True, True, False, True]), mindspore.bool_)
        >>> output = ops.MaskedFill()(input, mask, 0.5)
        >>> print(output)
        [0.5 0.5 3.  0.5]
    """

    @prim_attr_register
    def __init__(self):
        self.init_prim_io_names(inputs=['input', 'mask', 'value'], outputs=['output'])
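The fill semantics described above can be checked against a NumPy sketch (hypothetical helper, not part of MindSpore): broadcast the mask against the input, then select between the fill value and the original elements.

```python
import numpy as np

def masked_fill(x, mask, value):
    """Hypothetical NumPy sketch of MaskedFill semantics: broadcast `mask`
    against `x`, then write `value` wherever the mask is True."""
    x = np.asarray(x)
    x_b, mask_b = np.broadcast_arrays(x, np.asarray(mask, dtype=bool))
    return np.where(mask_b, np.asarray(value, dtype=x.dtype), x_b)
```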


class MaskedSelect(PrimitiveWithCheck):
    """
    Returns a new 1-D Tensor which indexes the input tensor according to the boolean mask.

    The shapes of the mask tensor and the input tensor do not need to match, but they must be broadcastable.

    Inputs:
        - **x** (Tensor) - The shape of tensor is :math:`(x_1, x_2, ..., x_R)`.
        - **mask** (Tensor[bool]) - The shape of tensor is :math:`(x_1, x_2, ..., x_R)`.

    Outputs:
        A 1-D Tensor, with the same type as `x`.

    Raises:
        TypeError: If `x` is not a Tensor.

    Supported Platforms:
        ``Ascend`` ``CPU``

    Examples:
        >>> x = Tensor(np.array([1, 2, 3, 4]), mindspore.int64)
        >>> mask = Tensor(np.array([1, 0, 1, 0]), mindspore.bool_)
        >>> output = ops.MaskedSelect()(x, mask)
        >>> print(output)
        [1 3]
    """

    @prim_attr_register
    def __init__(self):
        self.init_prim_io_names(inputs=['x', 'mask'], outputs=['output'])

    def check_shape(self, x_shape, mask_shape):
        get_broadcast_shape(x_shape, mask_shape, self.name, arg_name1="x", arg_name2="mask")

    def check_dtype(self, x_dtype, mask_dtype):
        validator.check_tensor_dtype_valid('mask', mask_dtype, [mstype.bool_], self.name)
        validator.check_tensor_dtype_valid('x', x_dtype, mstype.number_type, self.name)
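The broadcast-then-select behavior documented above maps directly onto NumPy boolean indexing. A hypothetical sketch (not part of MindSpore):

```python
import numpy as np

def masked_select(x, mask):
    """Hypothetical NumPy sketch of MaskedSelect: broadcast the two inputs,
    then boolean-index, which always yields a flat 1-D result."""
    x_b, mask_b = np.broadcast_arrays(np.asarray(x), np.asarray(mask, dtype=bool))
    return x_b[mask_b]
```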


class SearchSorted(PrimitiveWithInfer):
    """
    Finds the indices in the innermost dimension of `sequence` such that, if the corresponding values
    in `values` were inserted before those indices, the innermost dimension of `sequence` would remain sorted.

    Args:
        out_int32 (bool): Output datatype. Optional. If True, the output datatype will be int32;
            if False, the output datatype will be int64. Default: False.
        right (bool): Search strategy. Optional. If True, return the last suitable index found;
            if False, return the first such index. Default: False.

    Inputs:
        - **sequence** (Tensor) - The shape of tensor is :math:`(x_1, x_2, ..., x_R-1, x_R)` or `(x_1)`.
          It must contain a monotonically increasing sequence on the innermost dimension.
        - **values** (Tensor) - The shape of tensor is :math:`(x_1, x_2, ..., x_R-1, x_S)`.

    Outputs:
        Tensor containing the indices from the innermost dimension of `sequence` such that,
        if the corresponding values in `values` were inserted there, the order of `sequence`
        would be preserved. The shape of tensor is :math:`(x_1, x_2, ..., x_R-1, x_S)`,
        the same as the shape of `values`. Its datatype is int32 if `out_int32` is True, otherwise int64.

    Raises:
        ValueError: If `sequence` and `values` do not have proper shapes.

    Supported Platforms:
        ``CPU``

    Examples:
        >>> sequence = Tensor(np.array([[0, 1, 3, 5, 7], [2, 4, 6, 8, 10]]), mindspore.float32)
        >>> values = Tensor(np.array([[3, 6, 9], [3, 6, 9]]), mindspore.float32)
        >>> output = ops.SearchSorted()(sequence, values)
        >>> print(output)
        [[2 4 5]
         [1 2 4]]
    """

    @prim_attr_register
    def __init__(self, out_int32=False, right=False):
        """Initialize SearchSorted"""
        self.out_int32 = validator.check_value_type("out_int32", out_int32, [bool], self.name)
        self.right = validator.check_value_type("right", right, [bool], self.name)
        self.init_prim_io_names(inputs=['sequence', 'values'], outputs=['positions'])

    def infer_shape(self, sequence_shape, values_shape):
        if len(sequence_shape) != 1 and sequence_shape[:-1] != values_shape[:-1]:
            raise ValueError(f"For '{self.name}', 'sequence' should be 1-dimensional, or "
                             f"all dimensions except the last dimension of 'sequence' "
                             f"must be the same as all dimensions except the last dimension of 'values', "
                             f"but got shape of 'sequence': {sequence_shape} "
                             f"and shape of 'values': {values_shape}.")
        return values_shape

    def infer_dtype(self, sequence_dtype, values_dtype):
        args = {"sequence_dtype": sequence_dtype, "values_dtype": values_dtype}
        validator.check_tensors_dtypes_same_and_valid(args, mstype.number_type, self.name)
        return mstype.tensor_type(mstype.int32) if self.out_int32 else mstype.tensor_type(mstype.int64)
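Per batch row, the behavior documented above matches `np.searchsorted` on the innermost dimension. A hypothetical sketch (assumes the leading dimensions of `sequence` and `values` already match, or `sequence` is 1-D, as the shape check above enforces):

```python
import numpy as np

def search_sorted(sequence, values, right=False):
    """Hypothetical sketch of SearchSorted: apply np.searchsorted row by row
    over the innermost dimension."""
    side = 'right' if right else 'left'
    if sequence.ndim == 1:
        return np.searchsorted(sequence, values, side=side)
    seq2 = sequence.reshape(-1, sequence.shape[-1])
    val2 = values.reshape(-1, values.shape[-1])
    rows = [np.searchsorted(s, v, side=side) for s, v in zip(seq2, val2)]
    return np.stack(rows).reshape(values.shape)
```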


class TensorScatterMax(PrimitiveWithInfer):
    """
    By comparing the value at the position indicated by `indices` in `input_x` with the value in `updates`,
    the value at the index will eventually be equal to the largest one to create a new tensor.

    The last axis of `indices` is the depth of each index vector. For each index vector,
    there must be a corresponding value in `updates`. The shape of `updates` should be
    equal to the shape of `input_x[indices]`. For more details, see use cases.

    Note:
        If some values of `indices` are out of bound, instead of raising an index error,
        the corresponding `updates` will not be applied to `input_x`.

    Inputs:
        - **input_x** (Tensor) - The target tensor. The dimension of input_x must be no less than indices.shape[-1].
        - **indices** (Tensor) - The index of input tensor whose data type is int32 or int64.
          The rank must be at least 2.
        - **updates** (Tensor) - The tensor to update the input tensor, has the same type as input,
          and updates.shape should be equal to indices.shape[:-1] + input_x.shape[indices.shape[-1]:].

    Outputs:
        Tensor, has the same shape and type as `input_x`.

    Raises:
        TypeError: If dtype of `indices` is neither int32 nor int64.
        ValueError: If length of shape of `input_x` is less than the last dimension of shape of `indices`.

    Supported Platforms:
        ``GPU``

    Examples:
        >>> input_x = Tensor(np.array([[-0.1, 0.3, 3.6], [0.4, 0.5, -3.2]]), mindspore.float32)
        >>> indices = Tensor(np.array([[0, 0], [0, 0]]), mindspore.int32)
        >>> updates = Tensor(np.array([1.0, 2.2]), mindspore.float32)
        >>> # Next, demonstrate the approximate operation process of this operator:
        >>> # 1, indices[0] = [0, 0], indices[1] = [0, 0]
        >>> # 2, And input_x[0, 0] = -0.1
        >>> # 3, So input_x[indices] = [-0.1, -0.1]
        >>> # 4, Satisfy the above formula: input_x[indices].shape=(2,) == updates.shape=(2,)
        >>> op = ops.TensorScatterMax()
        >>> # 5, Perform the max operation for the first time:
        >>> #      first_input_x = Max(input_x[0][0], updates[0]) = [[1.0, 0.3, 3.6], [0.4, 0.5, -3.2]]
        >>> # 6, Perform the max operation for the second time:
        >>> #      second_input_x = Max(first_input_x[0][0], updates[1]) = [[2.2, 0.3, 3.6], [0.4, 0.5, -3.2]]
        >>> output = op(input_x, indices, updates)
        >>> print(output)
        [[ 2.2  0.3  3.6]
         [ 0.4  0.5 -3.2]]
    """

    @prim_attr_register
    def __init__(self):
        self.init_prim_io_names(inputs=['input_x', 'indices', 'updates'], outputs=['y'])

    def infer_shape(self, input_x_shape, indices_shape, updates_shape):
        if len(indices_shape) < 2:
            raise ValueError(f"For '{self.name}', the dimension of 'indices' cannot be less than 2,"
                             f" but got {len(indices_shape)}.")
        if indices_shape[-1] > len(input_x_shape):
            raise ValueError(f"For '{self.name}', the last dimension of 'indices' must be less than or equal to "
                             f"the dimension of 'input_x', but got the "
                             f"last dimension of 'indices': {indices_shape[-1]} and the dimension of 'input_x': "
                             f"{len(input_x_shape)}.")
        updates_shape_check = indices_shape[:-1] + input_x_shape[indices_shape[-1]:]
        if updates_shape_check != updates_shape:
            raise ValueError(f"For '{self.name}', the shape of 'updates' must be equal to updates_shape_check, "
                             f"where updates_shape_check = indices_shape[:-1] + input_x_shape[indices_shape[-1]:], "
                             f"but got the shape of 'updates': {updates_shape}, "
                             f"updates_shape_check: {updates_shape_check}, indices_shape: {indices_shape} and "
                             f"input_x_shape: {input_x_shape}. Please check input_x_shape and indices_shape.")
        return input_x_shape

    def infer_dtype(self, input_x_dtype, indices_dtype, updates_dtype):
        validator.check_tensor_dtype_valid('indices', indices_dtype, [mstype.int32, mstype.int64], self.name)
        args = {"input_x": input_x_dtype, "updates": updates_dtype}
        valid_input_types = (mstype.float16, mstype.float32, mstype.int8, mstype.uint8, mstype.int32)
        validator.check_tensors_dtypes_same_and_valid(args, valid_input_types, self.name)
        return input_x_dtype
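The duplicate-index behavior walked through in the docstring example (the running maximum over repeated indices) can be sketched with `np.maximum.at`. This hypothetical helper only covers the case where `indices.shape[-1] == input_x.ndim`, i.e. each index vector addresses a single element:

```python
import numpy as np

def tensor_scatter_max(input_x, indices, updates):
    """Hypothetical NumPy sketch of TensorScatterMax: np.maximum.at performs
    an unbuffered in-place maximum, so duplicate indices resolve to the
    largest of the original value and all of their updates."""
    out = input_x.copy()
    idx = tuple(np.moveaxis(indices, -1, 0))  # split last axis into per-dimension index arrays
    np.maximum.at(out, idx, updates)
    return out
```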


class TensorScatterMin(PrimitiveWithInfer):
    """
    By comparing the value at the position indicated by `indices` in `input_x` with the value in `updates`,
    the value at the index will eventually be equal to the smallest one to create a new tensor.

    The last axis of `indices` is the depth of each index vector. For each index vector,
    there must be a corresponding value in `updates`. The shape of `updates` should be
    equal to the shape of `input_x[indices]`. For more details, see use cases.

    Note:
        If some values of `indices` are out of bound, instead of raising an index error,
        the corresponding `updates` will not be applied to `input_x`.

    Inputs:
        - **input_x** (Tensor) - The target tensor. The dimension of input_x must be no less than indices.shape[-1].
        - **indices** (Tensor) - The index of input tensor whose data type is int32 or int64.
          The rank must be at least 2.
        - **updates** (Tensor) - The tensor to update the input tensor, has the same type as input,
          and updates.shape should be equal to indices.shape[:-1] + input_x.shape[indices.shape[-1]:].

    Outputs:
        Tensor, has the same shape and type as `input_x`.

    Raises:
        TypeError: If dtype of `indices` is neither int32 nor int64.
        ValueError: If length of shape of `input_x` is less than the last dimension of shape of `indices`.

    Supported Platforms:
        ``GPU``

    Examples:
        >>> input_x = Tensor(np.array([[-0.1, 0.3, 3.6], [0.4, 0.5, -3.2]]), mindspore.float32)
        >>> indices = Tensor(np.array([[0, 0], [0, 0]]), mindspore.int32)
        >>> updates = Tensor(np.array([1.0, 2.2]), mindspore.float32)
        >>> # Next, demonstrate the approximate operation process of this operator:
        >>> # 1, indices[0] = [0, 0], indices[1] = [0, 0]
        >>> # 2, And input_x[0, 0] = -0.1
        >>> # 3, So input_x[indices] = [-0.1, -0.1]
        >>> # 4, Satisfy the above formula: input_x[indices].shape=(2,) == updates.shape=(2,)
        >>> op = ops.TensorScatterMin()
        >>> # 5, Perform the min operation for the first time:
        >>> #      first_input_x = Min(input_x[0][0], updates[0]) = [[-0.1, 0.3, 3.6], [0.4, 0.5, -3.2]]
        >>> # 6, Perform the min operation for the second time:
        >>> #      second_input_x = Min(first_input_x[0][0], updates[1]) = [[-0.1, 0.3, 3.6], [0.4, 0.5, -3.2]]
        >>> output = op(input_x, indices, updates)
        >>> print(output)
        [[-0.1  0.3  3.6]
         [ 0.4  0.5 -3.2]]
    """

    @prim_attr_register
    def __init__(self):
        self.init_prim_io_names(inputs=['input_x', 'indices', 'updates'], outputs=['y'])

    def infer_shape(self, input_x_shape, indices_shape, updates_shape):
        if len(indices_shape) < 2:
            raise ValueError(f"For '{self.name}', the dimension of 'indices' cannot be less than 2,"
                             f" but got {len(indices_shape)}.")
        if indices_shape[-1] > len(input_x_shape):
            raise ValueError(f"For '{self.name}', the last dimension of 'indices' must be less than or equal to "
                             f"the dimension of 'input_x', but got the "
                             f"last dimension of 'indices': {indices_shape[-1]} and the dimension of 'input_x': "
                             f"{len(input_x_shape)}.")
        updates_shape_check = indices_shape[:-1] + input_x_shape[indices_shape[-1]:]
        if updates_shape_check != updates_shape:
            raise ValueError(f"For '{self.name}', the shape of 'updates' must be equal to updates_shape_check, "
                             f"where updates_shape_check = indices_shape[:-1] + input_x_shape[indices_shape[-1]:], "
                             f"but got the shape of 'updates': {updates_shape}, "
                             f"updates_shape_check: {updates_shape_check}, indices_shape: {indices_shape} and "
                             f"input_x_shape: {input_x_shape}. Please check input_x_shape and indices_shape.")
        return input_x_shape

    def infer_dtype(self, input_x_dtype, indices_dtype, updates_dtype):
        validator.check_tensor_dtype_valid('indices', indices_dtype, [mstype.int32, mstype.int64], self.name)
        args = {"input_x": input_x_dtype, "updates": updates_dtype}
        valid_input_types = (mstype.float16, mstype.float32, mstype.int8, mstype.uint8, mstype.int32)
        validator.check_tensors_dtypes_same_and_valid(args, valid_input_types, self.name)
        return input_x_dtype


class TensorScatterSub(PrimitiveWithInfer):
    """
    Creates a new tensor by subtracting the values in `updates` from the positions in `input_x` indicated by
    `indices`. When multiple values are provided for the same index, the result of the update is
    to subtract these values successively. This operation is almost equivalent to using ScatterNdSub,
    except that the updates are applied on a `Tensor` instead of a `Parameter`.

    The last axis of `indices` is the depth of each index vector. For each index vector,
    there must be a corresponding value in `updates`. The shape of `updates` should be
    equal to the shape of `input_x[indices]`. For more details, see use cases.

    Note:
        If some values of `indices` are out of bound, instead of raising an index error,
        the corresponding `updates` will not be applied to `input_x`.

    Inputs:
        - **input_x** (Tensor) - The target tensor. The dimension of input_x must be no less than indices.shape[-1].
        - **indices** (Tensor) - The index of input tensor whose data type is int32 or int64.
          The rank must be at least 2.
        - **updates** (Tensor) - The tensor to update the input tensor, has the same type as input,
          and updates.shape should be equal to indices.shape[:-1] + input_x.shape[indices.shape[-1]:].

    Outputs:
        Tensor, has the same shape and type as `input_x`.

    Raises:
        TypeError: If dtype of `indices` is neither int32 nor int64.
        ValueError: If length of shape of `input_x` is less than the last dimension of shape of `indices`.

    Supported Platforms:
        ``GPU``

    Examples:
        >>> input_x = Tensor(np.array([[-0.1, 0.3, 3.6], [0.4, 0.5, -3.2]]), mindspore.float32)
        >>> indices = Tensor(np.array([[0, 0], [0, 0]]), mindspore.int32)
        >>> updates = Tensor(np.array([1.0, 2.2]), mindspore.float32)
        >>> # Next, demonstrate the approximate operation process of this operator:
        >>> # 1, indices[0] = [0, 0], indices[1] = [0, 0]
        >>> # 2, And input_x[0, 0] = -0.1
        >>> # 3, So input_x[indices] = [-0.1, -0.1]
        >>> # 4, Satisfy the above formula: input_x[indices].shape=(2,) == updates.shape=(2,)
        >>> op = ops.TensorScatterSub()
        >>> # 5, Perform the subtract operation for the first time:
        >>> #      first_input_x = input_x[0][0] - updates[0] = [[-1.1, 0.3, 3.6], [0.4, 0.5, -3.2]]
        >>> # 6, Perform the subtract operation for the second time:
        >>> #      second_input_x = first_input_x[0][0] - updates[1] = [[-3.3, 0.3, 3.6], [0.4, 0.5, -3.2]]
        >>> output = op(input_x, indices, updates)
        >>> print(output)
        [[-3.3000002  0.3        3.6      ]
         [ 0.4        0.5       -3.2      ]]
    """

    @prim_attr_register
    def __init__(self):
        self.init_prim_io_names(inputs=['input_x', 'indices', 'updates'], outputs=['y'])

    def infer_shape(self, input_x_shape, indices_shape, updates_shape):
        if len(indices_shape) < 2:
            raise ValueError(f"For '{self.name}', the dimension of 'indices' cannot be less than 2,"
                             f" but got {len(indices_shape)}.")
        if indices_shape[-1] > len(input_x_shape):
            raise ValueError(f"For '{self.name}', the last dimension of 'indices' must be less than or equal to "
                             f"the dimension of 'input_x', but got the "
                             f"last dimension of 'indices': {indices_shape[-1]} and the dimension of 'input_x': "
                             f"{len(input_x_shape)}.")
        updates_shape_check = indices_shape[:-1] + input_x_shape[indices_shape[-1]:]
        if updates_shape_check != updates_shape:
            raise ValueError(f"For '{self.name}', the shape of 'updates' must be equal to updates_shape_check, "
                             f"where updates_shape_check = indices_shape[:-1] + input_x_shape[indices_shape[-1]:], "
                             f"but got the shape of 'updates': {updates_shape}, "
                             f"updates_shape_check: {updates_shape_check}, indices_shape: {indices_shape} and "
                             f"input_x_shape: {input_x_shape}. Please check input_x_shape and indices_shape.")
        return input_x_shape

    def infer_dtype(self, input_x_dtype, indices_dtype, updates_dtype):
        validator.check_tensor_dtype_valid('indices', indices_dtype, [mstype.int32, mstype.int64], self.name)
        args = {"input_x": input_x_dtype, "updates": updates_dtype}
        valid_input_types = (mstype.float16, mstype.float32, mstype.int8, mstype.uint8, mstype.int32)
        validator.check_tensors_dtypes_same_and_valid(args, valid_input_types, self.name)
        return input_x_dtype
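Unlike the max/min variants, duplicate indices here accumulate: every matching update is subtracted in turn, as the docstring example shows (-0.1 - 1.0 - 2.2 = -3.3). A hypothetical NumPy sketch for the element-wise case where `indices.shape[-1] == input_x.ndim`:

```python
import numpy as np

def tensor_scatter_sub(input_x, indices, updates):
    """Hypothetical NumPy sketch of TensorScatterSub: np.subtract.at is
    unbuffered, so duplicate indices subtract all of their updates from
    the same element."""
    out = input_x.copy()
    idx = tuple(np.moveaxis(indices, -1, 0))  # split last axis into per-dimension index arrays
    np.subtract.at(out, idx, updates)
    return out
```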


class SplitV(Primitive):
    r"""
    Splits the input tensor into `num_split` tensors along the given dimension.

    The `input_x` tensor will be split into sub-tensors with individual shapes given by `size_splits` along the split
    dimension. This requires that `input_x.shape(split_dim)` is equal to the sum of `size_splits`.

    The shape of `input_x` is :math:`(x_1, x_2, ..., x_M, ..., x_R)`. The rank of `input_x` is `R`. Set the given
    `split_dim` as M, and :math:`-R \le M < R`. Set the given `num_split` as `N`, the given `size_splits` as
    :math:`(x_{m_1}, x_{m_2}, ..., x_{m_N})`, :math:`x_M=\sum_{i=1}^Nx_{m_i}`. The output is a list of tensor objects,
    for the :math:`i`-th tensor, it has the shape of :math:`(x_1, x_2, ..., x_{m_i}, ..., x_R)`. :math:`x_{m_i}` is the
    :math:`M`-th dimension of the :math:`i`-th tensor. Then, the shape of the output tensor is

    .. math::

        ((x_1, x_2, ..., x_{m_1}, ..., x_R), (x_1, x_2, ..., x_{m_2}, ..., x_R), ...,
         (x_1, x_2, ..., x_{m_N}, ..., x_R))

    Args:
        size_splits (Union[tuple, list]): The list containing the sizes of each output tensor along the split
            dimension. Must sum to the dimension of value along `split_dim`.
            Can contain one -1 indicating that dimension is to be inferred.
        split_dim (int): The dimension along which to split. Must be in the range [-len(input_x.shape),
            len(input_x.shape)).
        num_split (int): The number of output tensors. Must be a positive int.

    Inputs:
        - **input_x** (Tensor) - The shape of tensor is :math:`(x_1, x_2, ..., x_M, ..., x_R)`.

    Outputs:
        Tensor, a list of `num_split` Tensor objects with the shape :math:`((x_1, x_2, ..., x_{m_1}, ..., x_R),
        (x_1, x_2, ..., x_{m_2}, ..., x_R), ..., (x_1, x_2, ..., x_{m_N}, ..., x_R))`, :math:`x_M=\sum_{i=1}^Nx_{m_i}`.
        The data type is the same with `input_x`.

    Raises:
        TypeError: If `input_x` is not a Tensor.
        TypeError: If `size_splits` is not a tuple or a list.
        TypeError: If element of `size_splits` is not an int.
        TypeError: If `split_dim` or `num_split` is not an int.
        ValueError: If rank of the `size_splits` is not equal to `num_split`.
        ValueError: If sum of the `size_splits` is not equal to the dimension of value along `split_dim`.
        ValueError: If `split_dim` is out of the range [-len(input_x.shape), len(input_x.shape)).
        ValueError: If the `num_split` is less than or equal to 0.

    Supported Platforms:
        ``Ascend``

    Examples:
        >>> input_x = Tensor(np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]), mindspore.int32)
        >>> op = ops.SplitV(size_splits=[1, -1], split_dim=1, num_split=2)
        >>> output = op(input_x)
        >>> print(output)
        (Tensor(shape=[3, 1], dtype=Int32, value=
        [[1],
         [4],
         [7]]), Tensor(shape=[3, 2], dtype=Int32, value=
        [[2, 3],
         [5, 6],
         [8, 9]]))
        >>> input_x = Tensor(np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]), mindspore.int32)
        >>> op = ops.SplitV(size_splits=[2, 1], split_dim=0, num_split=2)
        >>> output = op(input_x)
        >>> print(output)
        (Tensor(shape=[2, 3], dtype=Int32, value=
        [[1, 2, 3],
         [4, 5, 6]]), Tensor(shape=[1, 3], dtype=Int32, value=
        [[7, 8, 9]]))
    """

    @prim_attr_register
    def __init__(self, size_splits, split_dim, num_split):
        """Initialize SplitV"""
        validator.check_value_type("size_splits", size_splits, [tuple, list], self.name)
        for elements_of_size_splits in size_splits:
            validator.check_value_type("elements of size_splits", elements_of_size_splits, [int], self.name)
            if elements_of_size_splits != -1 and elements_of_size_splits < 1:
                raise ValueError(f"For '{self.name}', all elements of size_splits must be positive (except at most "
                                 f"one default value -1), but got: {elements_of_size_splits}.")
        validator.check_value_type("split_dim", split_dim, [int], self.name)
        validator.check_value_type("num_split", num_split, [int], self.name)
        validator.check_positive_int(num_split, "num_split", self.name)
        self.init_prim_io_names(inputs=['input_x'], outputs=['output'])
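The `size_splits` semantics described above (explicit sizes plus at most one inferred -1 entry) can be sketched on a NumPy array; this hypothetical helper resolves the -1 from the remaining length and then splits at cumulative offsets:

```python
import numpy as np

def split_v(x, size_splits, split_dim):
    """Hypothetical NumPy sketch of SplitV: resolve at most one -1 entry,
    then split at the cumulative offsets along split_dim."""
    sizes = list(size_splits)
    if -1 in sizes:
        i = sizes.index(-1)
        sizes[i] = x.shape[split_dim] - sum(s for s in sizes if s != -1)
    offsets = np.cumsum(sizes)[:-1]  # boundaries between consecutive pieces
    return np.split(x, offsets, axis=split_dim)
```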


class ScatterElements(Primitive):
    """
    ScatterElements takes three inputs `data`, `updates`, and `indices` of the same rank r >= 1,
    and an optional attribute `axis` that identifies an axis of `data` (default is 0).

    The output of the operation is produced by creating a copy of the input `data`, and then updating
    its values at the index positions specified by `indices` to the values specified by `updates`.

    Args:
        axis (int): Which axis to scatter. Default: 0.

    Inputs:
        - **data** (Tensor) - The target tensor.
        - **indices** (Tensor) - The index of input tensor whose data type is int32 or int64.
        - **update** (Tensor) - The tensor to update the input tensor, has the same type as input,
          and update.shape should be equal to indices.shape.

    Outputs:
        Tensor, has the same shape and type as `data`.

    Raises:
        TypeError: If dtype of `indices` is neither int32 nor int64.

    Supported Platforms:
        ``Ascend``

    Examples:
        >>> op = ops.ScatterElements(0)
        >>> data = Tensor(np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]), mindspore.float32)
        >>> indices = Tensor(np.array([[1, 0, 2], [0, 2, 1]]), mindspore.int32)
        >>> updates = Tensor(np.array([[0, 0, 0], [0, 0, 0]]), mindspore.float32)
        >>> output = op(data, indices, updates)
        >>> print(output)
        [[0. 0. 3.]
         [0. 5. 0.]
         [7. 0. 0.]]
        >>> op = ops.ScatterElements(1)
        >>> data = Tensor(np.array([[1, 2, 3, 4, 5]]), mindspore.int32)
        >>> indices = Tensor(np.array([[2, 4]]), mindspore.int32)
        >>> updates = Tensor(np.array([[8, 8]]), mindspore.int32)
        >>> output = op(data, indices, updates)
        >>> print(output)
        [[1 2 8 4 8]]
    """

    @prim_attr_register
    def __init__(self, axis=0):
        """Initialize ScatterElements"""
        validator.check_value_type("axis", axis, [int], self.name)
        self.init_prim_io_names(inputs=['data', 'indices', 'updates'], outputs=['y'])
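The per-axis scatter described above corresponds closely to NumPy's `np.put_along_axis`. A hypothetical sketch (assumes `indices` and `updates` share a shape that matches `data` on every axis except possibly `axis`):

```python
import numpy as np

def scatter_elements(data, indices, updates, axis=0):
    """Hypothetical NumPy sketch of ScatterElements: for each position in
    `indices`, overwrite the element of the copy of `data` addressed along
    `axis` by the index value with the matching element of `updates`."""
    out = data.copy()
    np.put_along_axis(out, indices, updates, axis=axis)
    return out
```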


class ExtractVolumePatches(Primitive):
    """
    Extracts patches from the input and puts them in the "depth" output dimension.
    This is the 3D extension of extract_image_patches.

    Args:
        kernel_size (Union[int, tuple[int], list[int]]): A list of ints whose length is 3 or 5.
            The size of the sliding window for each dimension of input. Must be: [1, 1, k_d, k_h, k_w] or
            [k_d, k_h, k_w]. If k_d = k_h = k_w, you can enter an integer.
        strides (Union[int, tuple[int], list[int]]): A list of ints whose length is 3 or 5.
            How far the centers of two consecutive patches are in input. Must be: [1, 1, s_d, s_h, s_w] or
            [s_d, s_h, s_w]. If s_d = s_h = s_w, you can enter an integer.
        padding (str): A string from: "SAME", "VALID". The type of padding algorithm to use.

    Inputs:
        - **input_x** (Tensor) - A Tensor. Must be one of the following types: float16, float32.
          5-D Tensor with shape :math:`(x_n, x_c, x_d, x_h, x_w)`.

    Outputs:
        Tensor, has the same type as input.
        If padding is "VALID", the shape is :math:`(x_n, k_d * k_h * k_w * x_c, 1 + (x_d - k_d) / s_d,
        1 + (x_h - k_h) / s_h, 1 + (x_w - k_w) / s_w)`; if padding is "SAME", the shape is :math:`(
        x_n, k_d * k_h * k_w * x_c, (x_d + s_d - 1) / s_d, (x_h + s_h - 1) / s_h, (x_w + s_w - 1) / s_w)`.

    Raises:
        TypeError: If dtype of `input_x` is neither float16 nor float32.
        TypeError: If `kernel_size` or `strides` is not a list, a tuple or an int.
        TypeError: If `input_x` is not a tensor.
        TypeError: If `padding` is not a str.
        ValueError: If the length of `kernel_size` is neither 3 nor 5 and `kernel_size` is not an integer.
        ValueError: If the length of `strides` is neither 3 nor 5 and `strides` is not an integer.
        ValueError: If `padding` is neither "VALID" nor "SAME".
        ValueError: If elements of `kernel_size` or `strides` are not positive integers.
        ValueError: If `input_x` is not a tensor in dimension 5.
        ValueError: If the shape of `input_x` contains zero.
        ValueError: If either of the first two numbers of `kernel_size` or `strides` is not 1.
        ValueError: If padding = "VALID" and input - kernel_size is less than 0 in the d, h or w dimension.
        ValueError: If padding = "SAME" and :math:`padding_needed = ((input_x + strides - 1) / strides - 1) *
          strides + kernel_size - input` is less than 0 in the d, h or w dimension.
        ValueError: If x_h is not 1 or x_w is not 1 and x_w + padding_needed - k_w - s_w is less than 0.
        ValueError: If x_d * x_h * x_w is greater than 2048.

    Supported Platforms:
        ``Ascend``

    Examples:
        >>> kernel_size = (1, 1, 2, 2, 2)
        >>> strides = (1, 1, 1, 1, 1)
        >>> padding = "VALID"
        >>> input_x = P.Reshape()(Tensor(np.arange(1, 28), mstype.float16), (1, 1, 3, 3, 3))
        >>> output_y = P.ExtractVolumePatches(kernel_size, strides, padding)(input_x)
        >>> print(output_y.shape)
        (1, 8, 2, 2, 2)
    """

    @prim_attr_register
    def __init__(self, kernel_size, strides, padding):
        validator.check_value_type("kernel_size", kernel_size, (int, list, tuple), self.name)
        validator.check_value_type("strides", strides, (int, list, tuple), self.name)
        if isinstance(kernel_size, (list, tuple)):
            kernel_size = tuple(kernel_size)
            if len(kernel_size) == 5:
                validator.check_int(kernel_size[0], 1, Rel.EQ, "kernel_size[0]", self.name)
                validator.check_int(kernel_size[1], 1, Rel.EQ, "kernel_size[1]", self.name)
        if isinstance(strides, (list, tuple)):
            strides = tuple(strides)
            if len(strides) == 5:
                validator.check_int(strides[0], 1, Rel.EQ, "strides[0]", self.name)
                validator.check_int(strides[1], 1, Rel.EQ, "strides[1]", self.name)
        self.kernel_size = _check_3d_int_or_tuple("kernel_size", kernel_size, self.name,
                                                  allow_five=True, ret_five=True, greater_zero=True)
        self.strides = _check_3d_int_or_tuple("strides", strides, self.name,
                                              allow_five=True, ret_five=True, greater_zero=True)
        self.add_prim_attr("kernel_size", self.kernel_size)
        self.add_prim_attr("strides", self.strides)
        validator.check_value_type("padding_dtype", padding, str, self.name)
        self.padding = validator.check_string(padding.upper(), ['VALID', 'SAME'], 'padding', self.name)
        self.add_prim_attr("padding", self.padding)